Prosecution Insights
Last updated: April 19, 2026
Application No. 18/647,643

AUGMENTED REALITY DEVICE FOR PROVIDING GUIDE FOR USER'S ACTIVITY, AND OPERATING METHOD THEREOF

Non-Final OA: §102, §103
Filed
Apr 26, 2024
Examiner
THOMAS, SOUMYA
Art Unit
2664
Tech Center
2600 — Communications
Assignee
Samsung Electronics Co., Ltd.
OA Round
1 (Non-Final)
100%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
99%
With Interview

Examiner Intelligence

Grants 100% — above average
100%
Career Allow Rate
2 granted / 2 resolved
+38.0% vs TC avg
+0.0%
Interview Lift
Minimal (+0%) lift across resolved cases with interview
Typical timeline
2y 9m
Avg Prosecution
Career history
19
Total Applications
across all art units (17 currently pending)

Statute-Specific Performance

§101
6.8%
-33.2% vs TC avg
§103
64.4%
+24.4% vs TC avg
§102
13.6%
-26.4% vs TC avg
§112
11.9%
-28.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 2 resolved cases
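The "Tech Center average estimate" in the footnote can be back-computed from each statute card: the implied baseline is simply the examiner's rate minus the displayed offset. This is a quick illustrative check, not part of the tool's documented methodology:

```python
# Each statute card shows the examiner's rate alongside its offset from the
# Tech Center average, so the implied TC baseline is rate - offset.
cards = {
    "101": (6.8, -33.2),
    "103": (64.4, 24.4),
    "102": (13.6, -26.4),
    "112": (11.9, -28.1),
}
implied_tc_avg = {s: rate - offset for s, (rate, offset) in cards.items()}
# All four statutes back out to the same ~40.0% baseline, consistent with a
# single estimated Tech Center average drawn as the black line.
```

That all four figures resolve to one ~40.0% baseline suggests the chart uses a single Tech Center estimate rather than per-statute averages.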

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to because of the following informalities:

In FIG. 7A, reference number 700 (see paragraph [0140] of the specification, “graphic UI 700”) is not shown in the drawing.

In FIG. 7B, reference number 702 (see paragraph [0150] of the specification, “graphic UI 702”) is not shown in the drawing. Furthermore, the “-10” graphic shown should read “+10” (see paragraph [0150], “For example, when only plastic bottle holding is performed by the user from among the plurality of detailed operations, the processor 140 may control the display 162 to output a number representing a reward (“+ 10”) at 10 % brightness”).

In FIG. 7C, the “-10” graphic shown for reference number 704 should read “+10” (see paragraph [0156], “For example, the processor 140 may obtain reward information in which + 10 points are displayed at 10 % brightness for an operation of emptying a plastic bottle, + 10 points are displayed at 90 % brightness when a plastic bottle label removal operation is performed, and + 10 points are displayed at 100 % brightness when plastic bottle emptying, label removal, and crumpling and recycling operations are all performed”).

In FIG. 8, block S820, “AIMGE” should read “IMAGE”.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended.
The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: In paragraph [0068], “the communication interface 110” is repeated twice. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-7, 9 and 11-15 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Trehan et al. (US Pub No 2022/0072380), hereinafter Trehan.

As to Claim 1, Trehan teaches a method for providing a guide for user activity (see paragraph [0010], “The method further includes generating, by the AI model, feedback for the user based on comparison of the set of user performance parameters with the set of target activity performance parameters. The feedback includes at least one of corrective actions or alerts”, where the feedback is a guide for user activity), performed by an augmented reality device (see paragraph [0008], “it is beneficial to use a mirror display, Artificial Intelligence (AI), and Augmented Reality (AR) technology to solve such a problem”, where the mirror display is the augmented reality device),
obtaining, from a server of a policy provider, a policy (see paragraph [0100], “The disclosed methods and systems may be implemented on a conventional or a general-purpose computer system, such as a personal computer (PC) or server computer”, where the ‘server computer’ is the policy provider, and see paragraph [0030], “The memory may also store various data (for example, AI model data, a plurality of activity types, a plurality of activities, multimedia data, set of user performance parameters, target activity performance data, and the like”, where the ‘target activity performance data’ is the policy), comprising at least one activity about at least one of a location, a time, a space, or an object (see paragraph [0036], “The gymnasium may include, for example, multiple exercise machines and equipment for performing multiple activities by the user. The user may use a smart mirror (for example, the smart mirror 100) or any other display device to select an activity from the activity categories and may correspondingly select an activity attribute that is associated the activity”, where the ‘object’ is the machines, and the ‘activity’ is relative to the ‘object’), inputting an image obtained through a camera to an artificial intelligence model (see paragraph [0010], “The method further includes capturing in real-time, via at least one camera, multimedia data of current activity performance of the user corresponding to the activity type and the activity”, and see paragraph [0010], “The method further includes processing in real-time, by an Artificial Intelligence (AI) model, the captured multimedia data”, where the multimedia data includes the image of the user), and recognizing, from the image, at least one activity of a user (see paragraph [0082], “the AI model of the smart mirror (such as, the smart mirror 100) may determine whether the initial pose is correct. Further, the GUI 1000 displays a message 1008 for the user that the pose is recognized successfully for the exercise.”), determining a policy performance level of the user by comparing the recognized at least one activity of the user with the at least one activity in the policy (see paragraph [0010], “The method further includes comparing, by the AI model, the set of user performance parameters with a set of target activity performance parameters” and see paragraph [0083], “By way of an example, the GUI 1100 may display rep/step counters, percentage of reps completed, percentage of exercise completed, heart rate of the user, calories burnt by the user, or any other user performance parameter”, where the ‘user performance parameter’ is the policy performance level), and outputting, based on the policy performance level, a graphic user interface (UI) for providing the guide for the user activity on the policy (see paragraph [0074], “Further, the process 300 includes displaying the set of user performance parameters, the set of target activity performance parameters, and the target activity performance of the activity expert through the GUI”, where GUI stands for graphic user interface).

As to Claim 3, Trehan teaches identifying a trigger point by monitoring at least one of the location, the time, the space, the object, or at least one activity of the user recognized from the image (see Trehan, paragraph [0080], “Referring now to FIG. 10, an exemplary GUI 1000 displaying current activity performance 1002 and pose skeletal model 1004 of a user is illustrated, in accordance with some embodiments. When the user acknowledges the message 908 and gets into an initial pose for the exercise, the AI model of the smart mirror (such as, the smart mirror 100) may determine whether the initial pose is correct. Further, the GUI 1000 displays a message 1008 for the user that the pose is recognized successfully for the exercise. The message 1008 may also be provided as an audio output. The user may be notified via text, graphic, visual, haptic, or audio output to begin the exercise”, where the initial pose is an activity of the user which is recognized). Trehan teaches wherein the determining the policy performance level comprises, based on the trigger point being identified, determining the policy performance level by comparing the recognized at least one activity of the user with the at least one activity in the policy (see paragraph [0083], “When the user begins performing the exercise, the multimedia data associated with the current activity performance 1102 of the user is analyzed by the AI model of the smart mirror (such as, the smart mirror 100). The multimedia data of the current activity performance 1102 is compared with a target activity performance 1104 of the activity expert”).

As to Claim 4, Trehan teaches wherein the determining the policy performance level comprises: comparing at least one activity of the user recognized in real time from the image with at least one activity in the policy (see paragraph [0029], “The smart mirror 100 further captures in real-time, via at least one camera, multimedia data of current activity performance of the user 102 corresponding to the activity type and the activity. The smart mirror 100 further processes in real-time, by an Artificial Intelligence (AI) model…. The smart mirror 100 further compares, by the AI model, the set of user performance parameters with a set of target activity performance parameters.”), and determining whether the at least one activity of the user recognized in real time and the at least one activity in the policy correspond to each other (see paragraph [0080], “Referring now to FIG. 10, an exemplary GUI 1000 displaying current activity performance 1002 and pose skeletal model 1004 of a user is illustrated, in accordance with some embodiments. When the user acknowledges the message 908 and gets into an initial pose for the exercise, the AI model of the smart mirror (such as, the smart mirror 100) may determine whether the initial pose is correct. Further, the GUI 1000 displays a message 1008 for the user that the pose is recognized successfully for the exercise”), and updating a value of the policy performance level in real time by calculating the policy performance level based on whether the at least one activity of the user recognized in real time and the at least one activity in the policy correspond to each other (see paragraph [0083], “The multimedia data of the current activity performance 1102 is compared with a target activity performance 1104 of the activity expert... When the user successfully moves from an initial pose to a subsequent pose, the rep/step counter will change in value from “1” to “2”, denoting that the user is now on second step”, where the ‘rep count’ is the value updated based on the user’s activity).

As to Claim 5, Trehan teaches wherein the outputting the graphic UI comprises displaying a graphic UI representing a reward determined based on the updated policy performance level (see paragraph [0059], “In some configurations, scores related to user activities may be presented on a leader board as points for various users who use smart mirrors 100 and/or display devices. Badges may also be assigned to various users based on level of activities performed by them and may be displayed on social media platforms”).

As to Claim 6, Trehan teaches wherein the outputting the graphic UI comprises updating the graphic UI, based on the updated policy performance level (see paragraph [0083], “The multimedia data of the current activity performance 1102 is compared with a target activity performance 1104 of the activity expert. By way of an example, the GUI 1100 may display rep/step counters, percentage of reps completed, percentage of exercise completed, heart rate of the user, calories burnt by the user, or any other user performance parameter. When the user successfully moves from an initial pose to a subsequent pose, the rep/step counter will change in value from “1” to “2”, denoting that the user is now on second step.”, where the GUI is updated by displaying a new value for the rep counter).

As to Claim 7, Trehan teaches that an external device may be used to obtain location and object information about an activity (see paragraph [0031], “In some embodiments, the smart mirror 100 may interact with the one or more external devices over a communication network” and see paragraph [0085], “The cameras may be used to track and record the activity of the user in the gymnasium as the user moves from one area or from one machine to another for performing various activities”, where the cameras are external devices, the ‘location’ is the gym, and the machines are ‘objects’ used for activities). Trehan further teaches that the data from the external cameras can be compared to information in the policy to determine a policy performance level of the user (see paragraph [0084], “The gymnasium may include, for example, multiple exercise machines and equipment for performing multiple activities by the user. The user may use a smart mirror (for example, the smart mirror 100) or any other display device to select an activity from the activity categories and may correspondingly select an activity attribute that is associated the activity. The multiple cameras may capture the activity of the user and may provide relevant instructions and feedback to the user for improvising the activities being performed”, where the multiple cameras are external cameras).

As to Claim 11, Claim 11 claims the same limitation as Claim 3 and is dependent on a similarly rejected independent claim.
Therefore, the rejection and rationale are analogous to that made in Claim 3.

As to Claim 12, Claim 12 claims the same limitation as Claim 4 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 4.

As to Claim 13, Claim 13 claims the same limitation as Claim 5 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 5.

As to Claim 14, Claim 14 claims the same limitation as Claim 6 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 6.

As to Claim 15, Claim 15 claims the same limitation as Claim 7 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 7.

As to Claim 9, Trehan teaches an augmented reality device for providing a guide for user activity (see paragraph [0008], “it is beneficial to use a mirror display, Artificial Intelligence (AI), and Augmented Reality (AR) technology to solve such a problem”, where the mirror display is the augmented reality device), the device comprising: a communication interface configured to perform data communication with a server of a policy provider (see paragraph [0104], “The computing system 1200 may also include a communications interface 1218. The communications interface 1218 may be used to allow software and data to be transferred between the computing system 1200 and external devices”, and see paragraph [0100], “The disclosed methods and systems may be implemented on a conventional or a general-purpose computer system, such as a personal computer (PC) or server computer”, where the ‘server computer’ is the policy provider); a camera configured to obtain an image by an object in a real space and a part of a body of a user (see paragraph [0093], “The method includes detecting a user, determining pose and body movement of a user using a camera,” and see paragraph [0085], “The cameras may be used to track and record the activity of the user in the gymnasium as the user moves from one area or from one machine to another for performing various activities”, where the gymnasium is a real space, and the machines are objects); a display (see paragraph [0064], “In an embodiment, the GUI may be rendered on the display 214 via the GUI module 220.”); and at least one processor (see paragraph [0030], “Further, the memory may store instructions that, when executed by the one or more processors, cause the one or more processors to analyse activity performance of the user 102 in real-time through the smart mirror 100”) configured to: control the communication interface to receive, from the server of the policy provider, a policy (see paragraph [0030], “The memory may also store various data (for example, AI model data, a plurality of activity types, a plurality of activities, multimedia data, set of user performance parameters, target activity performance data, and the like”, where the ‘target activity performance data’ is the policy) comprising at least one activity defined in connection with at least one of a location, a time, a space, or an object (see paragraph [0084], “The gymnasium may include, for example, multiple exercise machines and equipment for performing multiple activities by the user. The user may use a smart mirror (for example, the smart mirror 100) or any other display device to select an activity from the activity categories and may correspondingly select an activity attribute that is associated the activity”, where the ‘object’ is the machines, and the ‘activity’ is relative to the ‘object’); input the image, obtained through the camera, to an artificial intelligence model (see paragraph [0010], “The method further includes processing in real-time, by an Artificial Intelligence (AI) model, the captured multimedia data”, where the multimedia data includes the image of the user), and recognize, from the image, at least one activity of the user interacting with at least one of a location, a time, a space, or an object, by using the artificial intelligence model (see paragraph [0082], “the AI model of the smart mirror (such as, the smart mirror 100) may determine whether the initial pose is correct. Further, the GUI 1000 displays a message 1008 for the user that the pose is recognized successfully for the exercise”, and see paragraph [0084], “The user may use a smart mirror (for example, the smart mirror 100) or any other display device to select an activity from the activity categories and may correspondingly select an activity attribute that is associated the activity. The multiple cameras may capture the activity of the user and may provide relevant instructions and feedback to the user for improvising the activities being performed”, thus implying that the activity of interacting with the machine is recognized); determine a policy performance level of the user by comparing the recognized at least one activity of the user with the at least one activity in the policy (see paragraph [0010], “The method further includes comparing, by the AI model, the set of user performance parameters with a set of target activity performance parameters” and see paragraph [0083], “By way of an example, the GUI 1100 may display rep/step counters, percentage of reps completed, percentage of exercise completed, heart rate of the user, calories burnt by the user, or any other user performance parameter”, where the ‘user performance parameter’ is the policy performance level); and control the display to output, based on the policy performance level, a graphic user interface (UI) for providing the guide for the user activity on the policy (see paragraph [0074], “Further, the process 300 includes displaying the set of user performance parameters, the set of target activity performance parameters, and the target activity performance of the activity expert through the GUI”, where GUI stands for graphic user interface).

Claim Rejections - 35 USC § 103

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Trehan et al. (US Pub No 2022/0072380), hereinafter Trehan, in view of Rakshit et al. (US Pub No 2021/0273892), hereinafter Rakshit, and further in view of Newman et al. (US Pub No 2023/0026823), hereinafter Newman.

As to Claim 2, Trehan fails to teach obtaining, by analyzing text in the obtained policy, a plurality of detailed operations for recognizing the at least one activity defined by the policy and information about a sequence of the plurality of detailed operations.
However, Rakshit teaches an augmented reality device (see abstract) that can analyze text from a policy (see paragraph [0030], “In various embodiments of the present invention, chatbot 112 can analyze historically gathered videos, images, and documents (e.g., captured user dialog and/or retrieved instructions or explanations), wherein chatbot 112 can create knowledge corpus 114 based on the analyzed videos, images, and documents using machine learning”, where the documents contain text, and the knowledge corpus is the ‘policy’). Rakshit further teaches that the knowledge corpus can be used to obtain a plurality of detailed operations for completing an objective by using the policy and information about a sequence of the plurality of detailed operations (see paragraph [0013], “The knowledge corpus can recommend one or more solutions to the user, where the one or more solutions comprise identified actions (e.g., a list of user recommended actions) that can be visually presented to the user”, where the list of user recommended actions are the ‘detailed operations’).

Rakshit is combinable with Trehan as both are from the analogous field of image analysis and augmented reality. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the text analysis taught by Rakshit with the method of providing a user guide taught by Trehan. The motivation for doing so would be to provide solutions for the user by using data obtained from the text analysis. Rakshit teaches in paragraph [0031], “In various embodiments of the present invention, knowledge corpus 114 can determine and output solutions or actions steps for the user for the identified user problem and/or user activity based on the identified user query and identified and retrieved information associated with the identified visual information.” The retrieved information can include the data gathered from analyzing documents.
Thus, it would have been obvious to combine the text analysis taught by Rakshit with the teachings of Trehan.

Rakshit teaches that the knowledge corpus can be used to predict how the user will interact with the environment (see paragraph [0029]). However, Rakshit fails to explicitly teach that the detailed operations can be used to recognize the at least one activity. Instead, to recognize if the user has completed the activity, the augmented reality device monitors biometric signals or asks the user for input (see paragraph [0051]).

From an analogous art, Newman teaches an augmented reality device (see abstract) that can obtain a policy (see paragraph [0037], “At step 120 one or more of the systems described herein may define, based on identifying the plurality of objects, an object-manipulation objective”, where the objective is the policy), and then determine a plurality of detailed operations for the at least one activity defined by the policy and information about a sequence of the plurality of detailed operations (see paragraph [0048], “at step 130 one or more of the systems described herein may determine an action sequence that defines a sequence of action steps for manipulating the at least one of the plurality of objects to complete the object-manipulation objective. For example, sequence module 208 may determine action sequence 228 for manipulating objects 222 into end states 226 to achieve objectives 224”), and that these operations can be used to recognize when a user has completed an activity (see paragraph [0060], “For example, the notification may provide instructions or descriptions of one or more actions steps of action sequence 228. The notification may provide status updates, such as a percentage of completion of one or more action steps, changes to one or more relevant objects.”, thus implying that the activity is recognized by the augmented reality device).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the activity recognition taught by Newman with the teachings of Trehan and Rakshit. The motivation for doing so would be to provide users with more efficient guides. Newman teaches in paragraphs [0017]-[0018], “People often perform routine tasks, such as household chores, packing for a trip, etc., with little to no preplanning…. Thus, when the user is performing a task, an artificial-reality device may be able to analyze the user's environment and provide feedback in real time. By leveraging the computational resources of artificial-reality devices, the user may be able to perform the task more efficiently.” Thus, it would have been obvious to combine the teachings of Trehan, Rakshit, and Newman in order to obtain the invention as claimed in Claim 2.

As to Claim 10, Claim 10 claims the same limitation as Claim 2 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 2.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Trehan et al. (US Pub No 2022/0072380), hereinafter Trehan, in view of Rakshit et al. (US Pub No 2021/0273892), hereinafter Rakshit.

As to Claim 8, Trehan teaches wherein the obtaining the policy comprises receiving a plurality of policies (see Trehan, paragraph [0011], “The GUI is configured to render a plurality of activity types. Each of the plurality activity types includes a plurality of activities”, where each ‘activity type’ is a policy), and wherein the determining the policy performance level comprises: determining the policy from among the plurality of received policies based on a priority set by the user input (see paragraph [0011], “The GUI is further configured to receive, via a user command, a user selection of at least one of an activity type from the plurality of activity types and an activity from the plurality of activities associated with activity type.”); and determining the policy performance level by comparing at least one activity defined by the determined policy with at least one activity of the user recognized from the image (see paragraph [0011], “The processor-executable instructions, on execution, further cause the processor to generate, by the AI model, feedback for the user based on comparison of the set of user performance parameters with the set of target activity performance parameters”).

Trehan fails to explicitly teach that the plurality of policies are obtained from servers of a plurality of policy providers. However, Rakshit teaches an augmented reality system for instructing a user (see abstract) which can use a variety of different sources to obtain a policy (see paragraph [0024], “In other embodiments, server computer 120 can represent a server computing system utilizing multiple computers such as, but not limited to, a server system, such as in a cloud computing environment”, and see paragraph [0030], “In various embodiments of the present invention, knowledge corpus 114 can be created based on various sources of information on any particular domain previously accessed and/or stored in local storage 104 and/or shared storage 124 by chatbot 112 including retrieved data and user interaction”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the multiple servers and sources taught by Rakshit with the teachings of Trehan. The motivation for doing so would be to better support the users by retrieving information. Rakshit teaches in paragraph [0031], “In various embodiments of the present invention, knowledge corpus 114 can support the user with textual or audio interaction by retrieving information”. Thus, it would have been obvious to combine the plurality of policy providers taught by Rakshit with the teachings of Trehan in order to obtain the invention as claimed in Claim 8.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bruso et al. (US Pub No 2023/0036101) teaches a method for creating an augmented reality guide that includes analyzing text, images, and video to create an ‘AR pattern’ which can be used to guide a user. Hong et al. (US Pub No 2022/0362631) teaches an augmented reality device that compares image and motion data to reference data to determine if the user has successfully completed an activity.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOUMYA THOMAS, whose telephone number is (571) 272-8639. The examiner can normally be reached M-F 8:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.T./
Examiner, Art Unit 2664

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664
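For readers tracing the §102 mapping, the claim 1 method quoted in the rejection reduces to a short pipeline: obtain a policy, recognize user activity from a camera image with an AI model, score the recognized activity against the policy, and render a reward UI. The sketch below is purely illustrative; every name, the pass-through "model", and the fractional scoring rule are assumptions for exposition, not the applicant's or Trehan's implementation:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    # A "policy" as recited in claim 1: at least one activity defined in
    # connection with a location, a time, a space, or an object.
    activities: list[str]

def recognize_activities(image_labels: list[str]) -> list[str]:
    # Stand-in for the claimed AI model; here the "model" simply passes
    # through activity labels already attached to the camera image.
    return image_labels

def policy_performance_level(recognized: list[str], policy: Policy) -> float:
    # Claimed comparison step: score the user's recognized activities
    # against the activities the policy defines.
    if not policy.activities:
        return 0.0
    done = sum(1 for a in policy.activities if a in recognized)
    return done / len(policy.activities)

def guide_ui(level: float) -> str:
    # Claimed output step: a graphic UI keyed to the performance level,
    # loosely modeled on the "+10 at N% brightness" reward of FIGS. 7B-7C.
    return f"+10 at {int(level * 100)}% brightness"

policy = Policy(["empty bottle", "remove label", "crumple and recycle"])
recognized = recognize_activities(["empty bottle", "remove label"])
level = policy_performance_level(recognized, policy)
```

With two of the three policy activities recognized, the level is 2/3, which maps to a dimmed reward graphic in the spirit of the specification's brightness-scaled "+10" example.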

Prosecution Timeline

Apr 26, 2024
Application Filed
Mar 02, 2026
Non-Final Rejection — §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+0.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
