Prosecution Insights
Last updated: April 19, 2026
Application No. 18/814,124

SYSTEM AND METHOD FOR PROVIDING PERSONALIZED TRANSACTIONS BASED ON 3D REPRESENTATIONS OF USER PHYSICAL CHARACTERISTICS

Non-Final OA — §103, §DP

Filed: Aug 23, 2024
Examiner: CRADDOCK, ROBERT J
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Applications Mobiles Overview Inc.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (519 granted / 616 resolved) — above average, +22.3% vs TC avg
Interview Lift: +14.4% (moderate), based on resolved cases with interview
Typical Timeline: 2y 4m average prosecution; 27 applications currently pending
Career History: 643 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)
Tech Center averages are estimates; based on career data from 616 resolved cases.

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 6-10, and 12-16 of U.S. Patent No. 11,694,395. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are broader in every respect.

Instant claim 21: A method for determining user-item fit characteristics of an item for a user body part, the method comprising:
Patent claim 1: A computer-implemented method for determining user-item fit characteristics of an item for a user body part, the method comprising:

Instant: accessing a three-dimensional (3D) reconstructed model of the user body part;
Patent: accessing a three-dimensional (3D) reconstructed model of the user body part;

Instant: accessing information about one or more 3D reference models of the item;
Patent: accessing information about one or more 3D reference models of the item, the information for each 3D reference model including respective dimensional measurement, spatial, and geometrical attributes;

Instant: selecting a 3D reference model from the one or more 3D reference models by performing a 3D matching process based on the 3D reconstructed model and the information about the one or more 3D reference models;
Patent: performing a 3D matching process based on the 3D reconstructed model and the accessed information of the one or more 3D reference models to determine a best-fitting 3D reference model from the one or more 3D reference models;

Instant: generating, based on the 3D reference model and the 3D reconstructed model, a 3D fit representation; and
Patent: integrating the best-fitting 3D reference model with the 3D reconstructed model to provide a 3D best fit representation; and

Instant: displaying the 3D fit representation along with visual indications of user-item fit characteristics.
Patent: displaying the 3D best fit representation along with visual indications of user-item fit characteristics;

Claim mapping (instant application claim → claim(s) of U.S. Patent No. 11,694,395): 21 → 1; 22 → 2; 23 → 6; 24 → 1; 25 → 1; 26 → 3; 27 → 3; 28 → 4; 29 → 8; 30 → 6; 31 → 7; 32 → 8; 33 → 9; 34 → 10; 35 → 12; 36 → 13; 37 → 14; 38 → 15; 39 → 16; 40 → 1 and 14.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Siddique et al. (US 2016/0210602 A1).

Regarding claim 21, Siddique teaches a method for determining user-item fit characteristics of an item for a user body part, the method comprising (see title; ¶101, “The embodiments of the systems and methods described herein may be implemented in hardware or software, or a combination of both. However, preferably, these embodiments are implemented in computer programs […]”; ¶113, “The user is able to see how items of apparel appear on their respective model, and how such items fit.
”): accessing a three-dimensional (3D) reconstructed model of the user body part (¶111, “The server application 22 interacts with a data store 70 […]”; ¶112, “The modeling module 50, is used to generate a three-dimensional model of a user. […] images are passed on to a reconstruction engine to generate a preliminary three-dimensional model […]”; ¶140, “The data store 70 […] comprises […] a 3-D model database 84 […] The 3-D model database 86 stores predetermined 3-D models and parts of various 3-D models that are representative of various body types. The 3-D models are used to specify the user model that is associated with the user.”; ¶162, “[…] a user model can be created from apparel size data by (i) instantiating the corresponding average 3D model for the various body parts for which an apparel size is specified […]”);

accessing information about one or more 3D reference models of the item (¶140, “The data store 70 […] comprises […] an apparel database 82, […]”; ¶229, “All items of apparel that are associated with the system 10 have an apparel description file (ADF) associated with them. […] the ADF file can be in XML format and the CAD file provided to system 10 […] 3D display data for the apparel […] This ADF file information is then subsequently used in modeling the apparel digitally for purposes of display in electronic catalogues and displays 713; for fitting on 3D user models 714 […] a mesh is generated by tessellating 3D apparel pattern data into polygons. This geometric model captures the 3D geometry of the apparel and enables 3D visualization of apparel […]”);

selecting a 3D reference model from the one or more 3D reference models by performing a 3D matching process based on the 3D reconstructed model and the information about the one or more 3D reference models (see ¶133, “[…] the convex hull of the user model is used to determine apparel that would best fit/suit the user […]”; ¶146, “Body measurements specified by a user are used by the system to estimate and suggest apparel size that best meets the user's fit needs […] the system suggests dress pants that would best fit the user […]”; ¶232, “for determining apparel goodness of fit on a user model, the convex hull of the model is compared with the volume occupied by a given piece of clothing”; claim 15, “c) recommending vendor products that best match the users […]’ personal data.”);

generating, based on the 3D reference model and the 3D reconstructed model, a 3D fit representation (see claim 18, “A method as in claim 15 in which accurate 3D body models representing the user are generated, […] f) refining the 3D model using texture maps, pattern, color, shape and other information pertaining to the make and material of the apparel to provide photorealism;”);

but does not explicitly disclose displaying the 3D fit representation along with visual indications of user-item fit characteristics.

Siddique teaches displaying the 3D fit representation along with visual indications of user-item fit characteristics (¶232, “regions of different fit on the apparel may be colored differently. Visual indicators include, but are not limited to, arrows on screen, varying colors, digital effects including transparency/x-ray vision effect where the apparel turns transparent and the user is able to examine fit in the particular region […] In FIG. 30, the apparel on the 3D body model is made transparent in order for the user to visually examine overall apparel fit information: regions of tight/proper/loose fit.”). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Siddique in view of Siddique, as it would have been obvious to try: there is a finite number of identified predictable solutions for providing visual indications of user-item fit characteristics, and doing so would improve usability or clarity.

Regarding claim 22, Siddique teaches the method of claim 21, wherein performing the 3D matching process comprises determining a best-fitting 3D reference model from the one or more 3D reference models (see ¶133, ¶146, ¶232, and claim 15, as cited above for claim 21).

Regarding claim 23, Siddique teaches the method of claim 21, wherein the information about the one or more 3D reference models of the item comprises dimensional measurements, spatial attributes, and geometrical attributes (see ¶140 and ¶229, as cited above for claim 21; also see MPEP 2173.05(h)).

Regarding claim 24, Siddique teaches the method of claim 21, wherein the visual indications of user-item fit characteristics represent voids and collisions between the 3D reference model and the 3D reconstructed model (¶232, “With reference to FIG. 30, regions of tight fit are shown using red coloured highlight regions (armpit region). Loose fitting regions are shown via green arrows (upper leg) and green highlight (hips). Comfort/snug fitting is depicted using orange arrows (waist) and yellow highlight (lower leg) […]”; claim 19, “e) Specifying numeric measurements to indicate fit information including the gap or margin between apparel and body in different regions, after apparel is worn; an overall goodness of fit rating.”).

Regarding claim 25, Siddique teaches the method of claim 24, wherein the voids have a corresponding volume above a first threshold, wherein the collisions have a corresponding volume above a second threshold, and wherein the first threshold or the second threshold is determined based on pre-defined target areas associated with the 3D reference model (see ¶232, “goodness of fit is a quantitative metric. […] for determining apparel goodness of fit on a user model, the convex hull of the model is compared with the volume occupied by a given piece of clothing”; ¶232, “Users may also define the numerical margins that they consider 'tight', 'loose' and so on for different apparel. For example, the user may consider a shirt to be proper fitting around the arms if the sleeves envelope the arm leaving between 1-2 cm margin.”).
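The per-region, user-adjustable margins the examiner cites from Siddique ¶232 amount to a simple thresholding rule. A minimal sketch of that idea, assuming per-region margins measured in centimeters (negative where the apparel mesh penetrates the body model) — the function name, region names, and the 0-2 cm band are hypothetical illustrations, not taken from either specification:

```python
def classify_fit(margin_cm, tight_below=0.0, loose_above=2.0):
    """Classify a garment-to-body margin for one region.

    A negative margin means the apparel model collides with
    (penetrates) the body model; a margin above the upper
    threshold indicates a void (loose fit). Thresholds are
    per-region and, as in Siddique's example, user-adjustable.
    """
    if margin_cm < tight_below:
        return "collision"   # apparel volume intersects the body
    if margin_cm > loose_above:
        return "void"        # excess gap between apparel and body
    return "proper"          # within the assumed 0-2 cm comfort band

# Hypothetical per-region margins (cm) between apparel and body
regions = {"sleeve": 1.5, "armpit": -0.3, "hips": 3.1}
fit = {name: classify_fit(m) for name, m in regions.items()}
```

Each region's label could then drive the color-coded display the reference describes (red for tight/collision, green for loose, and so on).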
Regarding claim 26, Siddique teaches the method of claim 21, wherein performing the 3D matching process comprises performing a geometrical matching process, wherein the geometrical matching process includes, for each of the one or more 3D reference models: aligning the respective 3D reference model with the 3D reconstructed model; and determining a distance between the respective 3D reference model and the 3D reconstructed model, and wherein selecting the 3D reference model comprises selecting a 3D reference model having a lowest distance (¶149, “landmarks, for example, the circumference of the head and neck, distance from trichion to tip of nose, distance from the tip of the nose to the mental protuberance, width of an eye, length of the region between the lateral clavicle region to anterior superior iliac spine, circumference of the thorax, waist, wrist circumference, thigh circumference, shin length, circumference of digits on right and left hands, thoracic muscle content, abdominal fat content, measurements of the pelvis, measurements of the feet […] The availability of information on anatomical landmarks makes it possible to derive anatomically accurate models and communicate fit information to the user as described below […]”; also see above citations.).
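The geometrical matching process recited in claim 26 — align each reference model to the reconstructed model, score a distance, select the lowest — can be sketched with point sets, a centroid alignment, and a mean nearest-point distance. This is a hypothetical illustration of the claimed steps, not code from either specification; a production system would solve for rotation as well, e.g. with an ICP-style registration:

```python
import math

def centroid(points):
    """Centroid of a list of 3D points given as (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align(points, target_centroid):
    """Translate a point set so its centroid matches the target.

    A stand-in for the 'aligning' step of the claim; translation
    only, no rotation.
    """
    c = centroid(points)
    shift = tuple(t - s for t, s in zip(target_centroid, c))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in points]

def mean_nearest_distance(model, reference):
    """Mean distance from each model point to its nearest reference point."""
    return sum(min(math.dist(p, q) for q in reference) for p in model) / len(model)

def select_best(reconstructed, reference_models):
    """Return the index of the reference model with the lowest distance."""
    target = centroid(reconstructed)
    scores = [mean_nearest_distance(reconstructed, align(ref, target))
              for ref in reference_models]
    return min(range(len(scores)), key=scores.__getitem__)
```

A reference model that is merely a translated copy of the reconstructed model scores a distance of zero after alignment and is therefore selected over a differently sized one.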
Regarding claim 27, Siddique teaches the method of claim 21, wherein the information about the one or more 3D reference models comprises landmarked indications of dimensional measurements, spatial attributes, and geometrical attributes, and wherein the 3D matching process comprises a landmark matching process that includes: generating the one or more 3D reference models based on the landmarked indications; aligning the one or more 3D reference models with the 3D reconstructed model; and determining a distance between each of the 3D reference models and the 3D reconstructed model (see ¶149, as quoted above for claim 26; also see above citations.).

Regarding claim 28, Siddique teaches the method of claim 21, further comprising: capturing, by an imaging device, a plurality of images of the user body part; and generating the 3D reconstructed model representative of the user body part based on the plurality of images (see claim 18, “combining of 2D user images and anthropometric data to construct a 3D body and face model of the user”; ¶104, “The three-dimensional models are herein referred to as user models or character models, and are created based on information provided by the user. This information includes […] images; movies; measurements; outlines of feet, hands, and other body parts; moulds/imprints including those of feet, hands, ears, and other body parts; scans such as laser scans; […] high resolution scans and images of the eyes; motion capture data (mocap)”).

Regarding claim 29, Siddique teaches the method of claim 21, further comprising associating the 3D reconstructed model with a body part category based on instructions received from a user, and wherein accessing the information about one or more 3D reference models of the item is based on the instructions (¶230, “Options to make body adjustments are displayed upon clicking the menu display icon 476. A sample mechanism is shown for making adjustments to the body. Slider controls 475 and 477 can be used to make skeleton and/or weight related adjustments to the user model.”; see ¶173, “The measurements of various body parts can be updated at any time as the user ages, gains/loses weight, goes through maternity etc.”).

Regarding claim 30, Siddique teaches the method of claim 21, further comprising executing an object recognition algorithm on the 3D reconstructed model to identify the user body part and determine dimensional measurements, spatial attributes, and geometrical attributes of the user body part (¶159, “the local feature analysis step 142 for the body analysis module 122 involves individually analyzing the upper limbs, the lower limbs, the thorax, the abdomen, and the pelvis”).

Regarding claim 31, Siddique teaches the method of claim 30, wherein accessing the information about the one or more 3D reference models comprises selecting the one or more 3D reference models from a database of 3D reference models based on an output of the object recognition algorithm (see ¶159, as quoted above for claim 30; see above citations.).
Regarding claim 32, Siddique teaches the method of claim 21, further comprising adjusting, based on instructions received from a user, a position of the 3D reference model relative to the 3D reconstructed model (¶225, “Users also have the ability to interact with the object 696”).

Regarding claim 33, Siddique teaches the method of claim 32, wherein the position of the 3D reference model is adjustable among a plurality of pre-defined positions relative to the 3D reconstructed model (see ¶225 and FIG. 36: displays 692 in FIG. 36, in which jewellery is placed, are in a predetermined position with respect to the model user's hand (on the left, center, or right)).

Regarding claim 34, Siddique teaches the method of claim 21, further comprising: identifying, based on instructions received from a user, a user-selected 3D reference model among the one or more 3D reference models; integrating the user-selected 3D reference model with the 3D reconstructed model to provide a 3D user-selected representation; and displaying the 3D user-selected representation along with visual indications of user-item fit characteristics corresponding to the 3D user-selected representation and the 3D reconstructed model (see ¶225 and FIG. 36, as discussed above for claim 33; also see previous citations.).

Regarding claim 35, Siddique teaches the method of claim 21, wherein the information about the one or more 3D reference models comprises 3D scans, 3D point clouds, 3D meshes, voxels, continuous functions, computer-aided design (CAD) files, or a list of body part landmarks (see claim 18, “combining of 2D user images and anthropometric data to construct a 3D body and face model of the user”; ¶104, “The three-dimensional models are herein referred to as user models or character models, and are created based on information provided by the user. This information includes […] images; movies; measurements; outlines of feet, hands, and other body parts; moulds/imprints including those of feet, hands, ears, and other body parts; scans such as laser scans; […] high resolution scans and images of the eyes; motion capture data (mocap)”).

Regarding claim 36, Siddique teaches the method of claim 35, wherein the information about the one or more 3D reference models further comprises one or more identifiers selected from a group of identifiers comprising: labels, semantic labels, object category, brand information and metadata (see ¶131, “The user may perform a search to retrieve apparel items based on criteria that may include, but are not limited to, a description of the apparel including size, price, brand, season, style, occasion, discounts, and retailer.”).

Claim 37 recites similar limitations to those of claim 21 and thus is rejected under similar rationale, but claim 21 does not explicitly disclose a system for determining user-item fit characteristics of an item for a user body part, the system comprising at least one processor and memory comprising executable instructions which, when executed by the at least one processor, cause the system to perform the method. Siddique teaches such a system (see ¶101, “The embodiments of the systems and methods described herein may be implemented in hardware or software, or a combination of both. However, preferably, these embodiments are implemented in computer programs executing on programmable computers, each comprising at least one processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. For example, and without limitation, the programmable computer may be a mainframe computer, server, personal computer, laptop, personal data assistant, or cellular telephone. A program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.”; ¶113, “The user is able to see how items of apparel appear on their respective model, and how such items fit.”).

Regarding claim 38, Siddique teaches the system of claim 37, wherein the system is in communication with a service provider device, and wherein the instructions that cause the at least one processor to access information about the one or more 3D reference models of the item comprise instructions that cause the at least one processor to receive, from the service provider device, the information about the one or more 3D reference models (see ¶101, as quoted above for claim 37; the examiner notes any component within the system may be considered to be a service provider device. Also see ¶133, ¶146, ¶232, and claim 15, as cited above for claim 21.).

Claim 39 recites similar limitations to those of claim 28 and thus is rejected under similar rationale as detailed above.

Claim 40 recites similar limitations to those of claims 21 and 37 and thus is rejected under similar rationale as detailed above, but those claims do not explicitly disclose a non-transitory computer-readable medium comprising executable instructions which […]. Siddique teaches a non-transitory computer-readable medium comprising executable instructions which […] (¶102-103).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT J CRADDOCK, whose telephone number is (571) 270-7502. The examiner can normally be reached Monday - Friday, 10:00 AM - 6:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona E Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT J CRADDOCK/
Primary Examiner, Art Unit 2618

Prosecution Timeline

Aug 23, 2024 — Application Filed
Feb 07, 2026 — Non-Final Rejection, §103 and §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597214 — SCANNABLE CODES AS LANDMARKS FOR AUGMENTED-REALITY CONTENT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597101 — IMAGE TRANSMISSION SYSTEM, IMAGE TRANSMISSION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579767 — AUGMENTED-REALITY SYSTEMS AND METHODS FOR GUIDED INSTALLATION OF MEDICAL DEVICES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579792 — ELECTRONIC DEVICE FOR OBTAINING IMAGE DATA RELATING TO HAND MOTION AND METHOD FOR OPERATING SAME
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12555331 — INFORMATION PROCESSING APPARATUS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview (+14.4%): 99%
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 616 resolved cases by this examiner; grant probability derived from career allow rate.
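The headline figures above appear to follow directly from the career stats: 519 grants over 616 resolved cases gives the 84% base rate, and adding the reported +14.4% interview lift rounds to the displayed 99%. A minimal sketch under those assumptions — the additive-lift formula and the rounding are inferences from the displayed numbers, not a documented methodology:

```python
def grant_projection(granted, resolved, interview_lift=0.144):
    """Derive the dashboard's headline rates from raw career counts.

    Assumed model: base rate = granted / resolved, and the
    with-interview figure is simply base + reported lift,
    rounded to whole percentage points.
    """
    base = granted / resolved
    with_interview = base + interview_lift
    return round(base, 2), round(with_interview, 2)

# Craddock's career counts as reported above: 519 granted / 616 resolved
base, boosted = grant_projection(519, 616)
```

Here 519 / 616 ≈ 0.8425 rounds to the 84% shown, and 0.8425 + 0.144 ≈ 0.9865 rounds to the 99% shown for the with-interview projection.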
