Prosecution Insights
Last updated: April 19, 2026
Application No. 18/615,387

PARTIALLY DISPLAY-LOCKED VIRTUAL OBJECTS

Status: Non-Final OA (§103)
Filed: Mar 25, 2024
Examiner: YANG, YI
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Spacecraft Inc.
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 71% (above average; 295 granted / 415 resolved; +9.1% vs TC avg)
Interview Lift: +17.2% (strong; resolved cases with vs. without interview)
Avg Prosecution: 2y 9m (typical timeline; 39 applications currently pending)
Total Applications: 454 (career history, across all art units)
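As a quick sanity check on the headline figures above, here is a minimal Python sketch that reproduces the 71% and 88% numbers. The function names and the assumption that the interview lift is additive percentage points are mine, not from the report, though the 71% to 88% figures are consistent with that reading.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Grant probability with an examiner interview, assuming the
    reported lift is additive percentage points."""
    return min(base_pct + lift_pct, 100.0)

base = allow_rate(295, 415)           # ~71.1%, shown as 71%
boosted = with_interview(base, 17.2)  # ~88.3%, shown as 88%
```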

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 76.0% (+36.0% vs TC avg)
§102: 2.7% (-37.3% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 415 resolved cases
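The per-statute deltas can be cross-checked against the Tech Center baseline. A small sketch, assuming each delta is a simple difference (examiner rate minus TC average), recovers the implied baseline:

```python
# Rate and "vs TC avg" delta for each statute, as reported above.
stats = {
    "101": (7.4, -32.6),
    "103": (76.0, 36.0),
    "102": (2.7, -37.3),
    "112": (3.3, -36.7),
}

# If delta = rate - baseline, then baseline = rate - delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
# Every statute yields the same 40.0, i.e. the deltas are all drawn
# against a single ~40% Tech Center average estimate.
```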

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10, 14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Palangie (U.S. Patent Application 20190333278) in view of Harding (U.S. Patent Application 20220335697).

Regarding claim 19, Palangie discloses a device comprising: a display (display 120); non-transitory memory (memory 106); and one or more processors (processor 102) to: determine a first display location in a two-dimensional display coordinate system for a first portion of a virtual object (paragraph [0058]: in FIG. 3A, a CGR environment 300 (two-dimensional display) (e.g., based on mixed reality) includes a plurality of virtual objects 302-306 (e.g., a virtual object 302 of a chair (first portion), a virtual object 304 of a keyboard, a virtual object 306 of a monitor) and a plurality of real or virtual representations 308-310 of real objects in the real environment); detect an object at an object location in a three-dimensional world coordinate system; determine, based on the object location, a first world location in the three-dimensional world coordinate system for a second portion of the virtual object (paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance (three-dimensional world coordinate system) from virtual object 304 (e.g., a virtual keyboard), device 100a displays (or causes display of), on display 120, a visual feedback 316 (second portion) on virtual object 304 similar to visual feedback 314 on virtual object 302 (first portion)); determine a second display location in the two-dimensional display coordinate system for the second portion of the virtual object based on the first world location in the three-dimensional world coordinate system (paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance (three-dimensional world coordinate system) from virtual object 304 (e.g., a virtual keyboard), device 100a displays (or causes display of), on display 120 (two-dimensional display coordinate system), a visual feedback 316 (second display location) on virtual object 304 similar to visual feedback 314 on virtual object 302; paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment); and display, on the display, the virtual object, wherein the first portion is displayed at the first display location and the second portion is displayed at the second display location (paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance from virtual object 304 (e.g., a virtual keyboard), device 100a displays (or causes display of), on display 120, a visual feedback 316 (second portion) on virtual object 304 similar to visual feedback 314 on virtual object 302 (first portion)).

Palangie discloses all the features with respect to claim 19 as outlined above. Palangie further discloses that the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the single display (paragraph [0037]). However, Palangie fails to explicitly disclose determining the display location based on the first pose of the device.

Harding discloses determining a display location in a two-dimensional display coordinate system for a virtual object (paragraph [0283]: the one or more virtual elements are displayed (712) at a predetermined location relative to the respective position of the human subject); and determining the display location based on the first pose of the device (paragraph [0070]: at least a portion of a field of view of the one or more cameras may include a respective physical object and the virtual user interface object may be displayed at a location, in a displayed augmented reality environment, that is determined based on the respective physical object in the field of view of the one or more cameras or a virtual reality environment that is determined based on the pose of at least a portion of a computer system (e.g., a pose of a display device that is used to display the user interface to a user of the computer system)).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.

Claim 1 recites the functions of the apparatus recited in claim 19 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 19 applies to the method steps of claim 1.

Regarding claim 2, Palangie as modified by Harding discloses the method of claim 1, further comprising: determining a third display location in the two-dimensional display coordinate system for the second portion of the virtual object based on the first world location in the three-dimensional world coordinate system and a second pose of the device (Harding's paragraph [0070]: at least a portion of a field of view of the one or more cameras may include a respective physical object and the virtual user interface object may be displayed at a location, in a displayed augmented reality environment, that is determined based on the respective physical object in the field of view of the one or more cameras or a virtual reality environment that is determined based on the pose of at least a portion of a computer system (e.g., a pose of a display device that is used to display the user interface to a user of the computer system); Palangie's paragraph [0037]: the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the single display; paragraph [0081]: a magnitude (third display location between first and second location) of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment); and displaying, on the display, the virtual object, wherein the first portion is displayed at the first display location and the second portion is displayed at the third display location (Palangie's paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance from virtual object 304 (e.g., a virtual keyboard), device 100a displays (or causes display of), on display 120, a visual feedback 316 (second portion) on virtual object 304 similar to visual feedback 314 on virtual object 302 (first portion); paragraph [0081]: a magnitude (third display location between first and second location) of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.

Regarding claim 3, Palangie as modified by Harding discloses the method of claim 1, wherein detecting the object includes detecting a real object (Palangie's paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance from virtual object 304 (e.g., a virtual keyboard), device 100a displays (or causes display of), on display 120, a visual feedback 316 on virtual object 304 similar to visual feedback 314 on virtual object 302).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.

Regarding claim 4, Palangie as modified by Harding discloses the method of claim 3, wherein detecting the real object includes detecting the real object in an image of a physical environment (Palangie's paragraph [0044]: the user-controlled real object (e.g., the user's hand) corresponding to representation 212 is detected via one or more sensors (e.g., image sensors) of device 100a, for instance, when the user-controlled real object is in the field of vision of the one or more sensors of device 100a). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.

Regarding claim 5, Palangie as modified by Harding discloses the method of claim 1, wherein detecting the object includes detecting another virtual object (Palangie's paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance from virtual object 304 (e.g., a virtual keyboard) (another virtual object), device 100a displays (or causes display of), on display 120, a visual feedback 316 on virtual object 304 similar to visual feedback 314 on virtual object 302). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.
Regarding claim 6, Palangie as modified by Harding discloses the method of claim 1, wherein the first world location is within a threshold distance of the object location (Harding's paragraph [0070]: at least a portion of a field of view of the one or more cameras may include a respective physical object and the virtual user interface object may be displayed at a location, in a displayed augmented reality environment, that is determined based on the respective physical object in the field of view of the one or more cameras or a virtual reality environment that is determined based on the pose of at least a portion of a computer system (e.g., a pose of a display device that is used to display the user interface to a user of the computer system); Palangie's paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance from virtual object 304 (e.g., a virtual keyboard), device 100a displays (or causes display of), on display 120, a visual feedback 316 on virtual object 304 similar to visual feedback 314 on virtual object 302; paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.
Regarding claim 7, Palangie as modified by Harding discloses the method of claim 1, wherein the first world location surrounds the object location (Harding's paragraph [0070]: at least a portion of a field of view of the one or more cameras may include a respective physical object and the virtual user interface object may be displayed at a location, in a displayed augmented reality environment, that is determined based on the respective physical object in the field of view of the one or more cameras or a virtual reality environment that is determined based on the pose of at least a portion of a computer system (e.g., a pose of a display device that is used to display the user interface to a user of the computer system); Palangie's paragraph [0066]: (FIG. 3D) upon detecting (e.g., via one or more internal and/or external image sensors) that the user-controlled real object (e.g., the user's hand) is within the threshold distance from virtual object 304 (e.g., a virtual keyboard), device 100a displays (or causes display of), on display 120, a visual feedback 316 on virtual object 304 similar to visual feedback 314 on virtual object 302; paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.
Regarding claim 8, Palangie as modified by Harding discloses the method of claim 1, wherein displaying the virtual object includes displaying an animation of the virtual object extending between the first display location and the second display location (Harding's paragraph [0280]: In response to receiving the request to add the virtual effect, the computer system adds (706) the virtual effect (e.g., a virtual animation) to the displayed representation of the field of view of the one or more cameras; Palangie's paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment; paragraph [0076]: at block 412, in accordance with a determination that the distance criteria is satisfied and the virtual object (e.g., 308) corresponds to the second real object, the electronic device (e.g., 100a) forgoes removing the portion of the displayed virtual object). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.
Regarding claim 9, Palangie as modified by Harding discloses the method of claim 1, wherein displaying the virtual object is performed in response to determining that one or more display criteria are satisfied (Palangie's paragraph [0074]: At block 410, in accordance with a determination that a distance criteria is satisfied and the virtual object (e.g., 202, 204, 206, 208, 210, 302, 304, 306, 310) does not correspond to a second real object detected using the one or more sensors, the electronic device (e.g., 100a) removes a portion (e.g., 214, 216, 218, 314, 316) of the displayed virtual object, where the distance criteria comprises a threshold distance between the second location and the virtual object). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.
Regarding claim 10, Palangie as modified by Harding discloses the method of claim 9, wherein at least one of the one or more display criteria is satisfied when a spatial relationship between the device and the object satisfies one or more spatial-relationship criteria (Palangie's paragraph [0074]: At block 410, in accordance with a determination that a distance criteria is satisfied and the virtual object (e.g., 202, 204, 206, 208, 210, 302, 304, 306, 310) does not correspond to a second real object detected using the one or more sensors, the electronic device (e.g., 100a) removes a portion (e.g., 214, 216, 218, 314, 316) of the displayed virtual object, where the distance criteria comprises a threshold distance between the second location and the virtual object; Harding's paragraph [0070]: at least a portion of a field of view of the one or more cameras may include a respective physical object and the virtual user interface object may be displayed at a location, in a displayed augmented reality environment, that is determined based on the respective physical object in the field of view of the one or more cameras or a virtual reality environment that is determined based on the pose of at least a portion of a computer system (e.g., a pose of a display device that is used to display the user interface to a user of the computer system)). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.
Regarding claim 14, Palangie as modified by Harding discloses the method of claim 1, further comprising determining a third display location in the two-dimensional display coordinate system for a third portion of the virtual object, wherein the third portion is displayed at the third display location (Harding's paragraph [0231]: device 100 applies the virtual effect to the camera view of the portion of the physical environment that has now been scanned, as illustrated in FIG. 5AK; paragraph [0273]: the prism virtual effect is only applied to the portion of the physical environment that includes the couch after scanning the additional portion of the representation of the physical environment; see FIG. 5AK, the prism virtual effect is applied to the ceiling, wall, and floor; Palangie's paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.

Regarding claim 17, Palangie as modified by Harding discloses the method of claim 14, wherein the virtual object is continuous between the first portion, the third portion, and the second portion (Harding's paragraph [0231]: device 100 applies the virtual effect to the camera view of the portion of the physical environment that has now been scanned, as illustrated in FIG. 5AK; paragraph [0273]: the prism virtual effect is only applied to the portion of the physical environment that includes the couch after scanning the additional portion of the representation of the physical environment; see FIG. 5AK, the prism virtual effect is applied to the ceiling, wall, and floor). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.

Regarding claim 18, Palangie as modified by Harding discloses the method of claim 14, wherein the virtual object includes a plurality of discrete segments including the first portion, the third portion, and the second portion (Harding's paragraph [0231]: device 100 applies the virtual effect to the camera view of the portion of the physical environment that has now been scanned, as illustrated in FIG. 5AK; paragraph [0273]: the prism virtual effect is only applied to the portion of the physical environment that includes the couch after scanning the additional portion of the representation of the physical environment; see FIG. 5AK, the prism virtual effect is applied to the ceiling, wall, and floor). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie to display virtual elements based on the pose of the device, as taught by Harding, in order to display virtual effects using augmented reality environments faster and more efficiently.

Claim 20 recites the functions of the apparatus recited in claim 19 as medium steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 19 applies to the medium steps of claim 20.

Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Palangie (U.S. Patent Application 20190333278) in view of Harding (U.S. Patent Application 20220335697), and further in view of Wang (U.S. Patent Application 20210019036).
Regarding claim 11, Palangie as modified by Harding discloses transforming the virtual object to include a plurality of virtual sub-objects; and, in response to determining that one or more collection criteria are satisfied for a particular virtual sub-object of the plurality of virtual sub-objects, ceasing to display the virtual sub-object (Palangie's paragraph [0076]: at block 412, in accordance with a determination that the distance criteria is satisfied and the virtual object (e.g., 308) corresponds to the second real object, the electronic device (e.g., 100a) forgoes removing the portion of the displayed virtual object). However, Palangie as modified by Harding fails to explicitly disclose virtual sub-objects. Wang discloses virtual sub-objects (paragraph [0038]: the first virtual object 150 is a complex virtual object, comprising a plurality of component virtual objects (which may be referred to as "subobjects" of the complex virtual object) including a first component virtual object 154). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie and Harding to use virtual sub-objects, as taught by Wang, in order to interact with a three-dimensional virtual environment conventionally.
Regarding claim 12, Palangie as modified by Harding and Wang discloses the method of claim 11, wherein at least one of the one or more collection criteria is satisfied when a spatial relationship between the device and the virtual sub-object satisfies one or more spatial-relationship criteria (Palangie's paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment; Wang's paragraph [0062]: a rotation performed on the third object 650 will result in the same rotation being applied to the second object space orientation 652, thereby changing the second object space orientation 652 from the third orientation 660 to a different orientation (see FIG. 7) while the component virtual objects 654 maintain their poses relative to the rotated second object space orientation 652). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie and Harding to use virtual sub-objects, as taught by Wang, in order to interact with a three-dimensional virtual environment conventionally.
Regarding claim 13, Palangie as modified by Harding and Wang discloses the method of claim 12, wherein at least one of the one or more spatial-relationship criteria is satisfied when the device contacts the virtual sub-object (Palangie's paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment; Wang's paragraph [0062]: a rotation performed on the third object 650 will result in the same rotation being applied to the second object space orientation 652, thereby changing the second object space orientation 652 from the third orientation 660 to a different orientation (see FIG. 7) while the component virtual objects 654 maintain their poses relative to the rotated second object space orientation 652). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie and Harding to use virtual sub-objects, as taught by Wang, in order to interact with a three-dimensional virtual environment conventionally.

Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Palangie (U.S. Patent Application 20190333278) in view of Harding (U.S. Patent Application 20220335697), and further in view of Xu (U.S. Patent Application 20210255485).

Regarding claim 15, Palangie as modified by Harding discloses all the features with respect to claim 14 as outlined above. However, Palangie as modified by Harding fails to disclose that determining the third display location is based on an interpolation between the first display location and the second display location.

Xu discloses determining the third display location based on an interpolation between the first display location and the second display location (paragraph [0233]: The interpolation will map the virtual object to an initial (or first) target location and first time (s=0 and t=0, respectively) and to a new (or second) target location and second time). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie and Harding to interpolate between locations, as taught by Xu, in order to provide speed curves for the movement of virtual objects.

Regarding claim 16, Palangie as modified by Harding and Xu discloses the method of claim 14, wherein determining the third display location includes determining a second world location in the three-dimensional world coordinate system for the first portion and determining the third display location is based on an interpolation between the second world location and the first world location (Xu's paragraph [0233]: The interpolation will map the virtual object to an initial (or first) target location and first time (s=0 and t=0, respectively) and to a new (or second) target location and second time; Palangie's paragraph [0081]: a magnitude of the portion of the displayed virtual object (e.g., the area of the displayed virtual object) that is removed is determined based on a distance between the second location in the CGR environment and the displayed virtual object in the CGR environment; paragraph [0037]: the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the single display). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Palangie and Harding to interpolate between locations, as taught by Xu, in order to provide speed curves for the movement of virtual objects.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang, whose telephone number is (571) 272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/YI YANG/
Primary Examiner, Art Unit 2616
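The geometry at issue in the rejection is easy to picture with a toy model. The sketch below is purely illustrative, not code from the application or any cited reference: a display-locked portion keeps a fixed 2-D display location, while a world-locked portion's display location is recomputed by projecting its 3-D world location through the current device pose, and a third location can be interpolated between two display locations (cf. claims 15-16). The yaw-only pinhole camera and all parameter values are assumptions.

```python
import math

def project(world_pt, device_pos, yaw, f=500.0, cx=320.0, cy=240.0):
    """Map a 3-D world-coordinate point to 2-D display coordinates for a
    device at device_pos rotated yaw radians about the vertical axis.
    Camera space: x right, y up, z forward; pinhole model with focal
    length f and principal point (cx, cy)."""
    dx = world_pt[0] - device_pos[0]
    dy = world_pt[1] - device_pos[1]
    dz = world_pt[2] - device_pos[2]
    c, s = math.cos(-yaw), math.sin(-yaw)   # rotate world into camera frame
    x = c * dx + s * dz
    z = -s * dx + c * dz
    return (cx + f * x / z, cy - f * dy / z)

def lerp(p, q, t):
    """Linear interpolation between two display locations."""
    return ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])

# A display-locked portion keeps its display location as the pose changes;
# a world-locked point at (0, 0, 2) moves on screen when the device turns:
at_rest = project((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.0)  # screen center
turned = project((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.1)   # shifted horizontally
```

Here `lerp(at_rest, turned, 0.5)` would give a candidate third display location midway between the two, the kind of interpolation the §103 rejection maps to Xu.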

Prosecution Timeline

Mar 25, 2024
Application Filed
Oct 02, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586304: PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567129: Image Processing Method and Electronic Device (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561276: SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541902: SIGN LANGUAGE GENERATION AND DISPLAY (granted Feb 03, 2026; 2y 5m to grant)
Patent 12541896: COMPUTER-BASED CONTENT PERSONALIZATION OF A VISUAL DISPLAY (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 88% (+17.2%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
