Prosecution Insights
Last updated: April 19, 2026
Application No. 18/157,280

SYSTEMS AND METHODS FOR MODELING DENTAL STRUCTURES

Final Rejection — §102, §103, §112

Filed: Jan 20, 2023
Examiner: LUCCHESI, NICHOLAS D
Art Unit: 3772
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Get-Grin Inc.
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 78% (623 granted / 794 resolved; +8.5% vs TC avg; above average)
Interview Lift: +9.1% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 846 total applications across all art units; 52 currently pending

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 32.9% (-7.1% vs TC avg)
§102: 28.4% (-11.6% vs TC avg)
§112: 31.0% (-9.0% vs TC avg)

Based on career data from 794 resolved cases.

Office Action — §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6-9, 12, and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In claim 6, line 3, "the at least one element that has a changed position" has no prior antecedent basis. In claim 12, line 2, "the plurality of intraoral images" has no prior antecedent basis. In claim 14, line 5, "the dental structure of the subject" has no prior antecedent basis.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 6-10, and 12-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al. (US 2020/0000551).

With regard to claim 1, Li et al. discloses a method for generating a three-dimensional (3D) model of a dental structure of a subject, the method comprising: (a) capturing image data associated with the dental structure of the subject using a camera of a mobile device (capturing a first 2D image of a patient's face, including the patient's teeth; capturing the first 2D image may include instructing a mobile phone or a camera to image the patient's face; see paragraph 17); (b) constructing a first 3D model of the dental structure from the image data (capturing a first 2D image of a patient's face, including the patient's teeth; building a parametric 3D model of the patient's teeth based on the 2D image; developing a simulated outcome of a dental treatment of the patient's teeth by rendering the 3D model with the patient's teeth; rendering the 3D parametric model of the patient's teeth; see paragraph 37); (c) registering the first 3D model with an initial 3D surface model to determine a transformation for at least one element of the dental structure (the silhouette of the 3D parametric model matches one or more edges of the 2D image a first time; parametric models of the teeth match the original (initial) 3D models; case-specific parameters are used as the generic parameters of mean tooth position and mean tooth shape; generating a parametric model of one or more of a patient's teeth and converting the parametric model of one or more teeth into a 3D model of a dental arch; generating a 3D model of a tooth arch of a patient based on a parametric model of the patient's teeth; see paragraphs 18, 19, 92, and 93); and (d) generating an updated 3D surface model by updating the initial 3D surface model, wherein updating the initial 3D surface model comprises at least one of (i) applying the transformation to update a position of the at least one element (alignment positioning is performed based on the updated parametric model; the patient's parameterized arch model may be modified with the tooth locations and orientations in the matched record, or the match record may be updated with the shape of the patient's teeth; each tooth's relative shape, location, and rotation (transformation) are determined in order to build the distribution for each case-specific parameter; the surface model of each respective tooth in all of the retrieved models is determined; see paragraphs 121, 155, and 189) and (ii) deforming a surface of a local area of the at least one element using a deformation algorithm (building a parametric 3D model of the patient's teeth based on the first 2D image, using one or more case-specific parameters for the one or more shapes associated with the at least one of the patient's teeth; an initial determination of the patient's gingiva (element) contours may be identified by a machine learning algorithm; see paragraphs 7 and 149).
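Step (c) of claim 1 turns on recovering a rigid transformation that registers one 3D model of the dentition against another. Neither the claim nor the cited passages of Li specify a solver, so the following is only an illustrative sketch: a Kabsch (SVD-based) alignment of two point sets, with a made-up synthetic "tooth" cloud as sample data.

```python
import numpy as np

def rigid_registration(source, target):
    """Estimate the rigid transform (R, t) mapping source points onto
    target points via the Kabsch algorithm (illustrative only; the
    references do not name a particular solver)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical example: a "tooth" point cloud rotated 10 degrees about z
# and translated, then recovered by registration.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_registration(pts, moved)
```

On noiseless data the estimated rotation and translation match the applied ones to machine precision; in practice the same solver is typically wrapped in an iterative closest point loop when correspondences are unknown.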
With regard to claim 2, note that Li et al. discloses wherein the first 3D model comprises a first 3D point cloud reconstructed from the image data (see paragraph 79, which discloses a sphere 210 having a plurality of vertices in fixed or known locations and orientations; a tooth may be placed or otherwise modeled at the center of the sphere; the center of volume of the tooth 220, the scanned portion of the tooth, or the crown of the tooth may be aligned with the center of the sphere; then each vertex 230a, 230b of the sphere 210 may be mapped to a location on the surface of the tooth model; the mapping may be represented by an n*3 matrix, where n represents the number of points on the sphere, and then for each point the x, y, and z location is recorded; showing an example of how well parametric models of the teeth match the original 3D model).

With regard to claim 6, note that Li et al. discloses wherein the registering the first 3D model with an initial 3D surface model to determine a transformation for at least one element of the dental structure further comprises (see paragraph 79) generating a second 3D point cloud for the initial 3D surface model, and wherein the first 3D model is registered with the second 3D point cloud to identify the at least one element that has a changed position (to generate a parametric 3D model of a patient's tooth from a 3D tooth model derived from an image of a patient's tooth, the tooth may be modeled based on displacement (changed position) of the scanned tooth surface from a fixed shape, such as a fixed sphere; a sphere having a plurality of vertices (point cloud) in fixed or known locations and orientations is shown; each vertex 230a, 230b of the sphere 210 may be mapped to a location on the surface of the tooth model; the matrix stores the difference between a location on a mean tooth and the corresponding position on the model of the actual tooth; showing an example of how well parametric models of the teeth match the original 3D models).

With regard to claim 7, note that Li et al. discloses wherein the second 3D point cloud is generated by sampling the surface of the initial 3D surface model (to generate a parametric 3D model of a patient's tooth from a 3D tooth model derived from an image of a patient's tooth, the tooth may be modeled based on displacement of the scanned tooth surface from a fixed shape; a sphere having a plurality of vertices (point cloud) in fixed or known locations and orientations is shown; each vertex 230a, 230b of the sphere 210 may be mapped to a location on the surface of the tooth model; the matrix stores the difference between a location on a mean tooth and the corresponding position on the model of the actual tooth; showing an example of how well parametric models of the teeth match the original 3D models). See paragraph 79.

With regard to claim 8, note that Li et al. discloses wherein the transformation for the at least one element (rigid transformation of the teeth in the 3D model, see paragraph 19) is determined by: (i) selecting a first local point cloud for the at least one element from the first 3D model (the tooth may be modeled based on displacement of the scanned tooth surface from a fixed shape; a sphere having a plurality of vertices (point cloud) in fixed or known locations and orientations is shown; each vertex 230a, 230b of the sphere 210 may be mapped to a location on the surface of the tooth model; the matrix stores the difference between a location on a mean tooth and the corresponding position on the model of the actual tooth; showing an example of how well parametric models of the teeth match the original 3D models; see paragraph 79), (ii) sampling the at least one element from the initial 3D surface model to generate a second local point cloud, and (iii) registering the first local point cloud with the second local point cloud (the tooth may be modeled based on displacement of the scanned tooth surface from a fixed shape; a sphere having a plurality of vertices (point cloud) in fixed or known locations and orientations is shown; each vertex 230a, 230b of the sphere 210 may be mapped to a location on the surface of the tooth model; the matrix stores the difference between a location on a mean tooth and the corresponding position on the model of the actual tooth; showing an example of how well parametric models of the teeth match the original 3D models; see paragraph 79).

With regard to claim 9, note that Li et al. discloses wherein sampling the at least one element from the initial 3D surface model is based on a semantic segmentation of the at least one element (see paragraph 210, which discloses that, having both an initial position and a target position for each tooth, a movement path can be defined for the motion of each tooth; the tooth paths can optionally be segmented, and the segments can be calculated so that each tooth's motion within a segment stays within threshold limits of linear and rotational translation).

With regard to claim 10, note that Li et al. discloses wherein the transformation comprises a rotational movement or a translational movement (each tooth's motion (movement) within a segment stays within threshold limits of linear and rotational translation; see paragraph 210).

With regard to claim 12, note that Li et al. discloses determining a dental condition of the subject based at least in part on the plurality of intraoral images (see paragraphs 70 and 71, which disclose obtaining a two-dimensional (2D) representation (such as an image) of a patient's dentition; obtaining one or more parameters to represent attributes of the patient's dentition in the 2D representation; and an estimate of a state of a patient's dentition after correction of any malpositioned teeth/jaws, malocclusion, etc., the patient suffers from).
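Claims 2 and 6-8 all lean on Li's paragraph 79, where a fixed sphere's vertices are mapped onto the tooth surface and the mapping is stored as an n*3 matrix of per-vertex offsets. A minimal sketch of that idea follows, assuming a star-shaped surface sampled radially; the Fibonacci sphere construction and the ellipsoidal "tooth" are hypothetical stand-ins, since the reference does not give either.

```python
import numpy as np

def fibonacci_sphere(n):
    """n roughly uniform unit-sphere vertices (a stand-in for the fixed
    sphere of paragraph 79; the reference gives no construction)."""
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)      # polar angle
    theta = np.pi * (1 + 5 ** 0.5) * i          # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def displacement_matrix(sphere_verts, radius_fn):
    """Map each sphere vertex radially onto a star-shaped 'tooth' surface
    and record the per-vertex offset as an n*3 matrix, mirroring the
    n*3 mapping described in the reference."""
    r = radius_fn(sphere_verts)                 # surface radius along each ray
    surface = sphere_verts * r[:, None]         # point on the tooth surface
    return surface - sphere_verts               # displacement, shape (n, 3)

# Hypothetical ellipsoidal "tooth": radius varies with ray direction.
axes = np.array([1.2, 0.9, 1.5])
radius = lambda v: 1.0 / np.sqrt(((v ** 2) / axes ** 2).sum(axis=1))
verts = fibonacci_sphere(2500)
D = displacement_matrix(verts, radius)          # the n*3 "tooth shape" matrix
```

Adding the displacement matrix back to the sphere vertices reconstructs the surface, which is what lets a mean-tooth matrix and a patient-specific matrix be compared entry by entry.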
With regard to claim 13, note that Li et al. discloses wherein the image data comprises a sequence of 2D images and the first 3D point cloud is reconstructed using a curve-based reconstruction algorithm (see paragraphs 30 and 103, which disclose capturing a 2D image of a patient's face, including their teeth; determining edges of teeth and gingiva within the first 2D image; fitting the teeth in a 3D parametric model of teeth to the edges of the teeth and gingiva within the first 2D image, the 3D parametric model including case-specific parameters for the shape of the patient's teeth; determining the value of the case-specific parameters of the 3D parametric model based on the fitting; an initial determination of the patient's lips, teeth, and gingiva contours (curves) may be identified by a machine learning algorithm).

With regard to claim 14, note that Li et al. discloses a non-transitory computer-readable medium comprising machine-executable instructions that, upon execution by one or more computer processors, cause the processor to perform a method (see paragraphs 42 and 175: a non-transitory computer-readable medium includes instructions that, when executed by a processor, implement a method for delivering context-based information to a mobile device in real time; the dental treatment planning system may be implemented on a personal computing device such as a mobile phone), the method comprising: (a) capturing image data associated with the dental structure of the subject using a camera of a mobile device (see paragraph 17); (b) constructing a first 3D model of the dental structure from the image data (see paragraph 37, which discloses capturing a first 2D image of a patient's face, including the patient's teeth; building a parametric 3D model of the patient's teeth based on the 2D image; developing a simulated outcome of a dental treatment of the patient's teeth by rendering the 3D model with the patient's teeth; rendering the 3D parametric model of the patient's teeth); (c) registering the first 3D model with an initial 3D surface model to determine a transformation for at least one element of the dental structure (the silhouette of the 3D parametric model matches one or more edges of the 2D image a first time; parametric models of the teeth match the original (initial) 3D models; case-specific parameters are used as the generic parameters of mean tooth position and mean tooth shape; generating a parametric model of one or more of a patient's teeth and converting the parametric model of one or more teeth into a 3D model of a dental arch; generating a 3D model of a tooth arch of a patient based on a parametric model of the patient's teeth; see paragraphs 18, 79, 82, and 93); and (d) generating an updated 3D surface model by updating the initial 3D surface model, wherein updating the initial 3D surface model comprises at least one of (i) applying the transformation to update a position of the at least one element (alignment positioning is performed based on the updated parametric model; the patient's parameterized arch model may be modified with the tooth locations and orientations in the matched record, or the match record may be updated with the shape of the patient's teeth; each tooth's relative shape, location, and rotation (transformation) are determined in order to build the distribution for each case-specific parameter; the surface model of each respective tooth in all of the retrieved models is determined; see paragraphs 121, 155, and 189) and (ii) deforming a surface of a local area of the at least one element using a deformation algorithm (see paragraphs 7 and 149, which disclose building a parametric 3D model of the patient's teeth based on the first 2D image, using one or more case-specific parameters for the one or more shapes associated with the at least one of the patient's teeth; an initial determination of the patient's gingiva (element) contours may be identified by a machine learning algorithm).
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2020/0000551) in view of Miller et al. (US 2014/0272764).

With regard to claim 3, Li et al. does not disclose wherein the image data comprises a sequence of 2D images and the first 3D point cloud is reconstructed by applying a pipeline of structure from motion (SfM) and multi-view stereo (MVS) algorithms to the image data.
Miller et al. discloses a method wherein the image data comprises a sequence of 2D images and the first 3D point cloud is reconstructed by applying a pipeline of structure from motion (SfM) and multi-view stereo (MVS) algorithms to the image data (see paragraph 95 of Miller et al., which discloses that a standard 2D intraoral image can optionally be extracted from the raw video data stream using a 2D-to-spatial-3D algorithm to transform the formatted 2D file into spatial 3D; by introducing a binocular disparity depth cue, 2D-to-stereo-3D conversion and stereo conversion transform 2D flat film to 3D stereo form).

It would have been obvious to one of ordinary skill in the art to modify the method of Li et al. to include wherein the image data comprises a sequence of 2D images and the first 3D point cloud is reconstructed by applying a pipeline of structure from motion (SfM) and multi-view stereo (MVS) algorithms to the image data, as taught by Miller et al., if one wished to provide a spatial 3D stereoscopic video generated with a dual camera subassembly in a handheld device for simultaneous visualization comparison with intraoral-cavity digital dental impression 3D CAD formatted files.

With regard to claim 4, Li et al. does not disclose wherein the first 3D point cloud is reconstructed by determining one or more camera parameters using a trained model and applying a multi-view stereo (MVS) algorithm to the image data using the one or more camera parameters.
Miller et al. discloses a method wherein the image data comprises a sequence of 2D images and the first 3D point cloud is reconstructed by applying a pipeline of structure from motion (SfM) and multi-view stereo (MVS) algorithms to the image data (see paragraph 95, which discloses how a standard 2D intraoral image can optionally be extracted from the raw video data stream using a 2D-to-spatial-3D algorithm to transform the formatted 2D file into spatial 3D; by introducing a binocular disparity depth cue, 2D-to-stereo-3D conversion and stereo conversion transform 2D "flat" film to 3D stereo form).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to include in the method of Li et al. the first 3D point cloud being reconstructed by determining one or more camera parameters using a trained model and applying a multi-view stereo (MVS) algorithm to the image data using the one or more camera parameters, as taught by Miller et al., if one wished to provide a spatial 3D stereoscopic video generated with a dual camera subassembly in a handheld device for simultaneous visualization comparison with intraoral-cavity digital dental impression 3D CAD formatted files.

With regard to claim 5, Li et al. does not disclose wherein the image data comprises depth data and the first 3D model is reconstructed based on the depth data. Miller et al. discloses a method wherein the image data comprises depth data and the first 3D model is reconstructed based on the depth data (see paragraph 53, which discloses how intraoral spatial 3D camera device heads include an optics-based infinite depth of field for the 3D spatial visualization of the oral cavity).
It would have been obvious to one of ordinary skill in the art to include wherein the image data comprises depth data and the first 3D model is reconstructed based on the depth data, as taught by Miller et al., in the method of Li et al., if one wished to provide a camera subsystem to illuminate portions of the oral cavity that would otherwise be obscured by lesser visibility.

Claims 11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2020/0000551) in view of O'Neill et al. (US 2016/0373155).

With regard to claim 11, Li et al. discloses wherein the image data comprises intraoral image data (see paragraph 17), but does not disclose coupling an intraoral adapter to the mobile device to facilitate imaging of an intraoral region of the subject's mouth through a viewing channel of the intraoral adapter. O'Neill et al. discloses a method of collecting intraoral image data comprising coupling an adapter to the mobile device to facilitate imaging of the subject's mouth through a viewing channel (see paragraphs 11 and 23, which disclose attaching a handle via accessory mounting component 308).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to include the step of coupling an intraoral adapter to the mobile device of Li et al. to facilitate imaging of an intraoral region of the subject's mouth through a viewing channel of the intraoral adapter, as taught by O'Neill et al., if one wished to include a handle in order to allow users a more comfortable and/or secure grip on the mobile device during use, such as when taking photographs.
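Claim 5 concerns reconstructing the first 3D model from depth data. One conventional way to do this, offered only as a hedged illustration (Miller's paragraph 53 does not give a camera model), is pinhole back-projection of a per-pixel depth map into a point cloud; the intrinsics and the flat 4x4 depth map below are made up.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud with a pinhole
    camera model (an assumption; the reference specifies no model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel columns, rows
    z = depth
    x = (u - cx) * z / fx                 # pixel column -> camera X
    y = (v - cy) * z / fy                 # pixel row    -> camera Y
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]             # drop invalid (zero-depth) pixels

# Hypothetical 4x4 depth map at a constant 10 mm depth.
cloud = depth_to_point_cloud(np.full((4, 4), 10.0), fx=500, fy=500, cx=2, cy=2)
```

Each valid pixel yields one 3D point, so a full-resolution depth frame already is the "first 3D point cloud" up to the choice of intrinsics.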
With regard to claim 15, note that Li et al. discloses a method for generating a three-dimensional (3D) model of a dental structure of a subject comprising: (a) capturing image data associated with the dental structure of the subject using a camera of a mobile device (capturing the first 2D image may include instructing a mobile phone or a camera to image the patient's face; para [0017]); (b) processing the image data using an image processing algorithm, wherein the image processing algorithm is configured to implement differentiable rendering (see paragraph 112: image processing filters and pix2pix transformation technologies used for application to the surfaces of the 3D model); and (c) using the processed image data to generate a 3D surface model corresponding to one or more dental features represented in the image data (building a parametric 3D model of the patient's teeth based on the 2D image; see paragraph 21).

Li et al. does not disclose wherein the method further comprises providing visual, audio, or haptic guidance to aid in the capture of the image data, and wherein the guidance corresponds to a position, an orientation, or a movement of the mobile device relative to the dental structure of the subject. O'Neill et al. discloses a method comprising providing visual, audio, or haptic guidance to aid in the capture of the image data (see paragraph 29, which discloses that accessories may include illumination components (e.g., flashes for flash photography), recording components (e.g., microphones), support components (e.g., tripods, handles), other accessories, and accessory mounting components; an accessory mounting component may facilitate attachment of a microphone, etc., and guide the auxiliary optical component onto the mobile device in a desired orientation and/or alignment with respect to a component of the mobile device).
It would have been obvious to one of ordinary skill in the art to further include visual, audio, or haptic guidance to aid in the capture of the image data, as taught by O'Neill et al., with the method of Li et al., if one wished to facilitate ease of use of the mobile device of Li et al. and provide guidance for positioning the mobile device while in use.

With regard to claim 16, note that Li et al. discloses wherein processing the image data comprises comparing the image data to one or more two-dimensional (2D) renderings of a three-dimensional (3D) mesh associated with the dental structure of the subject (see paragraphs 79 and 188, which disclose that, to generate a parametric 3D model of a patient's tooth from a 3D tooth model derived from an image of a patient's tooth, the tooth may be modeled based on displacement of the scanned tooth surface from a fixed shape or sphere; a parametric tooth model, a sphere having a plurality of vertices (mesh) in fixed or known locations and orientations; tooth shape may be defined as a 2500*3 matrix, where each vertex of the sphere is mapped to a location on the surface of the tooth model, each location of a vertex being x, y, and z spatial locations).
With regard to claim 17, note that Li et al. discloses applying one or more rigid transformations to align or match at least a portion of the image data to the one or more 2D renderings of the 3D mesh associated with the dental structure of the subject (see paragraphs 18 and 79, which disclose how a silhouette of the 3D parametric model matches one or more edges of the 2D image a first time; parametric models of the teeth match the original (initial) 3D models; to generate a parametric 3D model of a patient's tooth from a 3D tooth model derived from an image of a patient's tooth, the tooth may be modeled based on displacement of the scanned tooth surface from a fixed shape or sphere; a parametric tooth model, a sphere having a plurality of vertices (mesh) in fixed or known locations and orientations).

With regard to claim 18, note that Li et al. discloses wherein the one or more rigid transformations comprise a six-degree-of-freedom rigid transformation (see paragraphs 32 and 40, which disclose linearizing the rigid transformation of the teeth in the model; moving one or more teeth along a tooth movement vector comprising six degrees of freedom, in which three degrees of freedom are rotational and three degrees of freedom are translational).
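Claim 18's six-degree-of-freedom rigid transformation (three rotational, three translational, per Li's paragraphs 32 and 40) can be sketched as a 4x4 homogeneous matrix. The Z-Y-X rotation order below is an assumption, since the reference states only the degrees of freedom, and the sample "tooth movement vector" is hypothetical.

```python
import numpy as np

def six_dof_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous matrix from six degrees of freedom:
    three rotations (radians, composed Rz @ Ry @ Rx by assumption)
    and three translations."""
    cr, sr = np.cos(rx), np.sin(rx)
    cp, sp = np.cos(ry), np.sin(ry)
    cw, sw = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cw, -sw, 0], [sw, cw, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx                 # rotational degrees of freedom
    T[:3, 3] = [tx, ty, tz]                  # translational degrees of freedom
    return T

# Hypothetical tooth movement: 5 degrees about z plus 0.3 mm along x.
T = six_dof_transform(0.0, 0.0, np.deg2rad(5.0), 0.3, 0.0, 0.0)
moved = T @ np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous tooth point
```

Because the upper-left 3x3 block is a proper rotation, applying the matrix never distorts the tooth geometry, which is what makes the transformation "rigid."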
With regard to claim 20, note that Li et al. discloses the step of determining a movement of one or more dental features based on the comparison between the image data and the one or more 2D renderings of the 3D mesh associated with the dental structure of the subject (see paragraph 79, which discloses that a sphere having a plurality of vertices in fixed or known locations and orientations is shown; a tooth 220 may be placed or otherwise modeled at the center of the sphere; the center of volume of the tooth, the scanned portion of the tooth 220, or the crown of the tooth 220 may be aligned with the center of the sphere; then each vertex 230a, 230b of the sphere 210 may be mapped to a location on the surface of the tooth model; the matrix stores the difference between a location on a mean tooth and the corresponding position on the model of the actual tooth).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2020/0000551) in view of O'Neill et al. (US 2016/0373155) as applied to claim 17 above, and further in view of Hillen (US 2019/0313963).

With regard to claim 19, Li et al./O'Neill et al. does not disclose the step of evaluating or quantifying a level of matching using an intersection-over-union metric. Hillen discloses a method of acquiring dental images comprising evaluating or quantifying a level of matching using an intersection-over-union metric (see paragraph 30, which discloses identifying detected features in the dental images; annotations overlap with each other for at least 1 pixel or have a certain value of intersection over union).

It would have been obvious to one of ordinary skill in the art to include evaluating or quantifying a level of matching using an intersection-over-union metric with the method of Li et al./O'Neill et al., as taught by Hillen, if one wished to provide for higher-quality annotations and images.
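Hillen's paragraph 30 evaluates annotation overlap with an intersection-over-union metric. A generic IoU over boolean masks looks like the following; the mask shapes and overlap are hypothetical, and Hillen's exact formulation is not reproduced in the action.

```python
import numpy as np

def intersection_over_union(mask_a, mask_b):
    """IoU between two boolean masks (e.g. a rendered annotation region
    versus a detected feature region)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0    # 0.0 when both masks are empty

# Hypothetical overlap: two 16-pixel boxes sharing a 2x2 corner region,
# giving IoU = 4 / (16 + 16 - 4) = 4/28.
a = np.zeros((6, 6), dtype=bool); a[0:4, 0:4] = True
b = np.zeros((6, 6), dtype=bool); b[2:6, 2:6] = True
iou = intersection_over_union(a, b)
```

IoU is scale-free and bounded in [0, 1], which is why it is a common threshold for deciding whether two annotations refer to the same feature.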
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS D LUCCHESI, whose telephone number is (571) 272-4977. The examiner can normally be reached M-F 8:00-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jacqueline Johanas, can be reached at 571-270-5085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS D LUCCHESI/
Primary Examiner, Art Unit 3772

Prosecution Timeline

Jan 20, 2023 — Application Filed
Mar 17, 2025 — Non-Final Rejection — §102, §103, §112
Sep 19, 2025 — Response Filed
Dec 18, 2025 — Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12599467 — MANDIBULAR OPENING AND ADVANCEMENT MEASUREMENT AND POSITIONING DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12599461 — IMPRESSION TRAY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594103 — MODULAR BONE SCREW FOR SURGICAL FIXATION TO BONE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594152 — DENTAL FLOSSING DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12588970 — SURGICAL GUIDE WITH MATING CONNECTORS (granted Mar 31, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 88% (+9.1%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 794 resolved cases by this examiner. Grant probability derived from career allow rate.
