Prosecution Insights
Last updated: April 19, 2026
Application No. 18/587,550

PROCESSING METHOD FOR IMAGE RECOGNITION MODEL AND RELATED PRODUCT

Status: Non-Final OA (§103)
Filed: Feb 26, 2024
Examiner: PHAM, ANNIE
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Mashang Consumer Finance Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Grants only 0% of cases.

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, measured over resolved cases with interview)
Avg Prosecution: 2y 9m typical timeline; 6 applications currently pending
Career History: 6 total applications across all art units

Statute-Specific Performance

§101: 35.0% (-5.0% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

Rejection type: §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

The instant application claims priority to and the benefit of PCT Application No. PCT/CN2023/109265, filed on 07/26/2023, as a continuation thereof, and claims priority to and the benefit of foreign Application No. CN 2022109183854, filed on 08/01/2022.

Information Disclosure Statement

The information disclosure statements (“IDS”) filed on 02/26/2024 and 04/14/2025 were reviewed and the listed references were noted.

Drawings

The 6 pages of drawings have been considered and placed on record in the file.

Status of Claims

Claims 1-20 are currently pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 7-14 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (CN 113269257, with an effective filing date of 05/27/2021) in view of Yang (“Region-aware Random Erasing,” published in 2019).

Consider Claim 1. Huang discloses “A processing method for an image recognition model, comprising: obtaining an image sample;” (Huang, Pg 2, the paragraph starting with “obtaining the image sample”) and “determining a target object positioning box of the image sample, wherein the target object positioning box covers an area where an effective feature of the image sample is located;” (Huang, Pg 2, “the image sample is marked with a region of interest and a marked frame coordinate and class tag of the target region”), as well as the claimed model training (Huang, Pg 2, the paragraph starting with “training the pre-established depth convolutional neural network module according to the image sample, obtaining the image recognition model”). Huang does not teach “adjusting a pixel value of at least one pixel within the target object positioning box to obtain a preprocessed image sample”.
However, in an analogous field, Yang teaches “adjusting a pixel value of at least one pixel” (Yang, Section III, “If we randomly change a part of the pixels in the object area, the part of region is similar to occlusion”) “within the target object positioning box to obtain a preprocessed image sample” (Yang, Section II.A, “For ORE, it randomly erases a part of region in the bounding box”). Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Huang with the teachings of Yang to further train an image recognition model by creating occlusions in input images. One of ordinary skill in the art would be motivated to combine Huang and Yang to create occlusions in training images to encourage a more robust image recognition model and prevent overfitting (Yang, Section V, Conclusion, 1st sentence). Accordingly, the combination of Huang and Yang discloses the invention of Claim 1.

Consider Claim 2. The combination of Huang and Yang discloses “The method according to claim 1, further comprising: obtaining an image sample set, wherein the image sample set comprises at least one of the image sample.” (Huang, Pg 3, “As an improvement of the solution, obtaining the image sample by the following steps, comprising: according to the pre-defined interest region feature and target region feature, dividing the region of interest in the pre-collected image and the target region therein, and marking the marking frame coordinate and type tag of the target region to form the image data set”) (emphasis added).

Consider Claim 3. The combination of Huang and Yang discloses “The method according to claim 2, wherein the adjusting the pixel value of the at least one pixel within the target object positioning box to obtain the preprocessed image sample comprises: adjusting a pixel value of at least one pixel” (Yang, Section III, “If we randomly change a part of the pixels in the object area, the part of region is similar to occlusion”) “within the target object positioning box to obtain a preprocessed image sample” (Yang, Section II.A, “For ORE, it randomly erases a part of region in the bounding box”) “and forming the at least one preprocessed image sample into a preprocessed image sample set.” (Huang, Pg 7, “collecting and forming a first training image set with the training image sample”). The proposed combination, as well as the motivation for combining the Huang and Yang references presented in the rejection of claim 1, apply to claim 3 and are incorporated herein by reference. Thus, the method recited in claim 3 is met by Huang and Yang.
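The erasing step the rejection leans on is easy to picture in code. Below is a minimal sketch of an ORE-style augmentation consistent with the passages quoted from Yang (replace a random sub-region inside the object bounding box with random pixel values); the function name and parameters are ours for illustration, not code from Yang or Huang.

```python
import numpy as np

def object_region_erase(image, box, max_frac=0.3, rng=None):
    """Erase a random sub-region inside an object bounding box (ORE-style).

    image: H x W x C uint8 array; box: (x1, y1, x2, y2) pixel coordinates.
    Every pixel in the erased region is replaced with a random value in
    [0, 255], simulating occlusion of the object during training.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = image.copy()
    x1, y1, x2, y2 = box
    bw, bh = x2 - x1, y2 - y1
    # Size the erasing rectangle at no more than ~max_frac of each box side.
    ew = int(rng.integers(1, max(2, int(bw * max_frac))))
    eh = int(rng.integers(1, max(2, int(bh * max_frac))))
    # Place the rectangle so it stays inside the bounding box.
    ex = int(rng.integers(x1, max(x1 + 1, x2 - ew)))
    ey = int(rng.integers(y1, max(y1 + 1, y2 - eh)))
    # All pixels in the final erasing region get a random value in [0, 255].
    out[ey:ey + eh, ex:ex + ew] = rng.integers(
        0, 256, size=(eh, ew, image.shape[2]), dtype=np.uint8
    )
    return out
```

Applied once per training sample, this produces occluded variants like those in Yang's Figure 1, and it is the operation the examiner maps to “adjusting a pixel value of at least one pixel within the target object positioning box.”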
Consider Claim 4. The combination of Huang and Yang discloses “The method according to claim 1, wherein the determining the target object positioning box of the image sample comprises: inputting the image sample into a self-supervised learning model” (Huang, Pg 2, “training the pre-established depth convolutional neural network module according to the image sample, obtaining the image recognition model”); the examiner notes the pre-established convolutional neural network is interpreted to be self-supervised. It further discloses “to obtain feature map information,” (Huang, Pg 2, “inputting the image to be identified to the image recognition model, obtaining the category probability and relative position coordinate of each target area in the image to be identified”) “wherein the self-supervised learning model is used to extract the effective feature of the image sample;” (Huang, Pg 2, “obtaining the category probability and relative position coordinate of each target area in the image to be identified”); the examiner notes the position coordinates of the bounding box are interpreted as feature map information. Finally, it discloses “and determining the target object positioning box of the image sample according to the feature map information.” (Huang, Pg 2-3, “based on the area selection model, performing sliding window classification and object boundary frame coordinate regression to the characteristic tensor; identifying the probability of interest area in the image to be identified comprising target area and position and size of the target area”) (emphasis added).

Consider Claim 7. The combination of Huang and Yang discloses “The method according to claim 3, wherein the adjusting the pixel value of the at least one pixel within the target object positioning box of each image sample, and forming the at least one preprocessed image sample into the preprocessed image sample set comprises: for each image sample, generating an associated image corresponding to the image sample according to the target object positioning box of the image sample, wherein a size of the associated image is identical to a size of the image sample;” (Yang, Figure 1 (see image below)); the examiner notes the input image and the associated occluded images are interpreted to be the same size. It further discloses “and adjusting the pixel value of the at least one pixel within the target object positioning box of each image sample according to the associated image corresponding to each image sample to obtain the preprocessed image sample set.” (Yang, Figure 1 (see image below)); the examiner notes the adjusted pixel values within the bounding box are represented by the grey occlusion within the red bounding box in the third image on the right-hand side of Figure 1. The proposed combination, as well as the motivation for combining the Huang and Yang references presented in the rejection of claim 1, apply to claim 7 and are incorporated herein by reference. Thus, the method recited in claim 7 is met by Huang and Yang.

[Image: Yang, Figure 1 (occlusion examples with bounding boxes), as reproduced in the Office Action]
Consider Claim 8. The combination of Huang and Yang discloses “The method according to claim 7, wherein the generating the associated image corresponding to the image sample according to the target object positioning box of the image sample comprises: generating an initial associated image with a same size as the image sample;” (Yang, Figure 1 (see image above)); the examiner notes the input image and the associated occluded images are interpreted to be the same size. It further discloses “selecting M center points from the initial associated image, and determining M target areas based on the M center points, wherein M is a natural number greater than or equal to 1, and the M center points are pixels within the target object positioning box;” (Yang, Section III, Paragraph 2, “For an input image I, RRE processes it in two steps. In the first step, the ORE is introduced to select region in the bounding boxes….Then ORE randomly selects an initial point E … initial area…and initial probability p in the image”); the examiner notes the initial point E is interpreted as a center point and is disclosed to be within the bounding box, and the single initial point E is interpreted to meet the limitation of “a natural number greater than or equal to 1”. It further discloses “and setting a pixel value of at least one pixel in the initial associated image to generate the associated image corresponding to the image sample,” (Yang, Section III, Paragraph 3, “All pixels in final erasing region will be modified to a random value in [0, 255].”) “wherein the pixel in the initial associated image contains a pixel corresponding to the M target areas and a pixel corresponding to an area other than the M target areas,” (Yang, Abstract, “Region-aware Random Erasing randomly occludes a part of foreground and a part of background”); the examiner notes the background disclosed in Yang is interpreted to be an area other than the target area identified by the image recognition model. As to “and a pixel value of the pixel corresponding to the M target areas is set in a different manner than a pixel value of the pixel corresponding to the area other than the M target areas”: Yang discloses modifying the pixels in an erasing region, whether in the identified foreground or background, to a random value between 0-255. The values assigned to the background and foreground are not the same number, as can be seen in Yang, Figure 1 (see image above), which shows a grey occluded area within the bounding box and a black occluded area outside of the bounding box; the examiner has interpreted the different colors of the occlusions as representative of different pixel values. The proposed combination, as well as the motivation for combining the Huang and Yang references presented in the rejection of claim 1, apply to claim 8 and are incorporated herein by reference. Thus, the method recited in claim 8 is met by Huang and Yang.
Consider Claim 9. The combination of Huang and Yang discloses “The method according to claim 8, wherein the setting the pixel value of the at least one pixel in the initial associated image to generate the associated image corresponding to the image sample comprises: randomly setting the pixel value of the pixel corresponding to the M target areas;” (Yang, Section III, Paragraph 3, “All pixels in final erasing region will be modified to a random value in [0, 255].”) and “setting the pixel value of the pixel corresponding to the area other than the M target areas in the initial associated image to 1 to obtain the associated image corresponding to the image sample;” (Yang, Section III, Paragraph 4, “all pixels in this region will be reassigned to a random value in [0, 255]. Just like the erased area in the bounding boxes, the erased area outside the bounding boxes is also limited to a certain scope.”); the examiner notes that a pixel value of 1 for the area other than the M target areas falls within the 0-255 range set for the background occlusion disclosed in Yang. It further discloses “correspondingly, the adjusting the pixel value of the at least one pixel within the target object positioning box of each image sample according to the associated image corresponding to each image sample to obtain the preprocessed image sample set comprises: for each image sample, multiplying a pixel value of each pixel of the image sample by a pixel value of a corresponding pixel of the associated image to obtain the preprocessed image sample set.” (Yang, Section IV, Equation 3). The proposed combination, as well as the motivation for combining the Huang and Yang references presented in the rejection of claim 1, apply to claim 9 and are incorporated herein by reference. Thus, the method recited in claim 9 is met by Huang and Yang.
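Claims 8 and 9 together describe a concrete mask construction: random values inside the M target areas, 1 everywhere else, followed by an element-wise multiply. The sketch below illustrates that reading under our own simplifying assumptions (axis-aligned rectangular target areas inside the positioning box); the helper names are illustrative, not from the application or the references.

```python
import numpy as np

def build_associated_image(shape, target_areas, rng=None):
    """Build a claim-8/9 style "associated image": random pixel values
    inside the M target areas, and 1 everywhere else, so multiplication
    leaves pixels outside the target areas unchanged."""
    if rng is None:
        rng = np.random.default_rng()
    assoc = np.ones(shape, dtype=np.float32)  # area other than the M target areas -> 1
    for x1, y1, x2, y2 in target_areas:       # M rectangles inside the positioning box
        assoc[y1:y2, x1:x2] = rng.random((y2 - y1, x2 - x1, shape[2]))
    return assoc

def preprocess_sample(image, assoc):
    """Multiply each pixel of the sample by the corresponding pixel of the
    associated image (the claim-9 adjustment step)."""
    return (image.astype(np.float32) * assoc).astype(np.uint8)
```

Because every pixel outside the M target areas is multiplied by 1, only pixels inside the target areas are perturbed, which is the distinction the claim draws between the two regions.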
Consider Claim 10. The combination of Huang and Yang discloses “The method according to claim 1, further comprising: obtaining a to-be-recognized image, wherein the to-be-recognized image contains a recognition object; and inputting the to-be-recognized image into the image recognition model to obtain a recognition result.” (Huang, Pg 2, “inputting the to-be-identified image to the feature extraction model to extract the abstract feature”).

Claim 11 recites an electronic device with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Huang and Yang references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Huang and Yang references discloses “An electronic device, comprising: a processor, and a memory communicatively connected to the processor; wherein the memory stores computer-executed instructions; the processor, when executing the computer-executed instructions stored in the memory, is configured to…” (Huang, Pg 13, “The terminal device of the embodiment includes: a processor, a memory and a computer program stored in the memory and capable of running on the processor, such as an image classification program. when the processor executes the computer program, realizing the step of each image classification method embodiment. or, when the processor executes the computer program, realizing the function of each module/unit in each device embodiment.”).

Claims 12, 13, 14, 17, 18 and 19 recite electronic devices with elements corresponding to the steps recited in Claims 2, 3, 4, 7, 8 and 9, respectively. Therefore, the recited elements of each of these claims are mapped to the proposed combination in the same manner as the corresponding steps in the corresponding method claims. Additionally, the rationale and motivation to combine the Huang and Yang references, presented in the rejection of Claim 1, apply to these claims. Finally, the combination of the Huang and Yang references discloses, for each of these claims, the recited processor configured to execute the stored computer-executed instructions (Huang, Pg 13, as quoted in the rejection of Claim 11).
Claim 20 recites a non-transitory computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 1. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Huang and Yang references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Huang and Yang references discloses “A non-transitory computer-readable storage medium with computer-executed instructions stored therein, wherein a processor, when executing the computer-executed instructions, is configured to…” (Huang, Pg 5, “Another embodiment of the present invention provides a storage medium, the computer-readable storage medium comprises a stored computer program, wherein, when the computer program is run controlling the computer-readable storage medium device to perform the image classification method of the embodiment of the invention.”).

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (CN 113269257, with an effective filing date of 05/27/2021) and Yang (“Region-aware Random Erasing,” published in 2019), in view of Du (US 2019/0286932), and in further view of Aziz (“Exploring Deep Learning-Based Architecture, Strategies, Applications and Current Trends in Generic Object Detection: A Comprehensive Review,” published in 2020).
Consider Claim 5. Huang and Yang do not teach all the limitations of Claim 5; however, in an analogous field of endeavor, Du teaches “The method according to claim 4, wherein an amount of the feature map information is N, and N is a natural number greater than or equal to 1,” (Du, [0070], “FIG. 5A illustrates the process by which the object detection system generates one or more center boxes based on the heat map 310A”); the examiner notes the center boxes are interpreted as feature map information. Du further teaches “and the determining the target object positioning box of the image sample according to the feature map information comprises: for each piece of the feature map information, performing a normalization processing on the feature map information” (Du, [0057], “The object detection system further utilizes the batch normalization layers in the embedding neural network”) “to obtain a heat map,” (Du, [0019], “the object detection system generates a heat map associated with the input image”) “determining a target point with a heat value greater than a preset threshold from the heat map,” (Du, [0022], “the object detection system generates the boundary box by identifying pixels in the heat map with pixel values greater than a global threshold.”) “and determining an initial target object positioning box based on the target point,” (Du, [0022], “After identifying the pixels in the heat map with pixel values greater than the global threshold, the object detection system performs various transformations on the identified pixels in order to generate a fully-connected region or shape within the heat map. In at least one embodiment, the object detection system then fits a rectangle (e.g., a bounding box) to the shape or region to generate the boundary box.”) “wherein the heat map contains at least one pixel, a magnitude of a heat value of the pixel represents a probability that the pixel contains an effective feature, and the heat value of the at least one pixel is within a preset interval;” (Du, [0020], “the fully-convolutional dense tagging network places lighter pixels in a region of the heat map that corresponds to a likely location of the target object keyword in the input image.”). Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Du with the teachings of Huang and Yang to further create a heat map to visualize the position and extraction of an object and/or feature in an image. One of ordinary skill in the art would be motivated to combine Huang and Yang in view of Du to obtain a heat map of an image and use the intensity values of the image pixels to visualize and extract an object to create a bounding box for further image analysis.

The combination of Huang and Yang in view of Du does not disclose “and the heat value of the at least one pixel is within a preset interval; and determining an average value of the N initial target object positioning boxes to obtain the target object positioning box.” However, in an analogous field of endeavor, Aziz teaches this limitation (Aziz, Section B.10, “Corner box supports boundary box regression using heat maps produced by box corners….The object center is described using a heat-map, and the network regresses the box height and width of the box directly from these centers”); the examiner notes the box corners are derived from initial bounding boxes generated by the CornerNet model and are interpreted as initial target object positioning boxes. Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Huang and Yang in view of Du with the teachings of Aziz to further determine the bounds of the bounding box of an object. One of ordinary skill in the art would be motivated to combine Huang and Yang in view of Du with Aziz to use heat maps to determine object centers and their corresponding bounding boxes using fewer resources in a more diversified environment. Accordingly, the combination of Huang, Yang, Du and Aziz discloses the invention of Claim 5.
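The Du passages cited above amount to a threshold-and-fit pipeline: normalize a feature map into a heat map, keep the points above a preset threshold, and fit a box around them. Below is a minimal sketch of that pipeline under our own simplifying assumptions (min-max normalization, a single connected region); it is not code from Du.

```python
import numpy as np

def box_from_feature_map(feature_map, threshold=0.5):
    """Normalize a 2-D feature map to a [0, 1] heat map, select target
    points whose heat value exceeds a preset threshold, and fit an
    axis-aligned initial positioning box around them."""
    fmin, fmax = feature_map.min(), feature_map.max()
    heat = (feature_map - fmin) / (fmax - fmin + 1e-8)  # heat values in the preset interval [0, 1]
    ys, xs = np.nonzero(heat > threshold)               # target points above the threshold
    if ys.size == 0:
        return None                                     # no effective feature detected
    # Fit a rectangle (x1, y1, x2, y2) around the selected points.
    return (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)
```

With N feature maps, this would run once per map to produce the N initial positioning boxes that the claim then averages.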
Claim 15 recites an electronic device with elements corresponding to the steps recited in Claim 5. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Huang, Yang, Du and Aziz references, presented in the rejection of Claim 5, apply to this claim. Finally, the combination of the Huang, Yang, Du and Aziz references discloses a processor (Huang, Pg 13, as quoted in the rejection of Claim 11).

Allowable Subject Matter

Claims 6 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Consider Claim 6. The combination of Huang and Yang discloses “The method according to claim 5, wherein the initial target object positioning box is a polygonal box,” (Huang, Pg 7, “the region of interest is marked by rectangular frame (target area)”). The following is a statement of reasons for the indication of allowable subject matter: considering claim 6, none of the cited prior art, alone or in combination, provides a motivation to teach the ordered combination of “…and the determining the average value of the N initial target object positioning boxes to obtain the target object positioning box comprises: for each vertex of any one of the N initial target object positioning boxes, determining an average value of coordinates corresponding to each vertex, wherein a number of coordinates of each vertex is N; and obtaining the target object positioning box according to the average value of the coordinates corresponding to each vertex.” The electronic device in Claim 16 includes elements corresponding to the steps recited in claim 6, and therefore includes the above-described allowable subject matter.
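The vertex-averaging limitation indicated as allowable for claim 6 is simple to state in code. A minimal sketch, assuming N axis-aligned rectangular boxes (so each box is fully described by two opposite vertices); the function name is illustrative:

```python
import numpy as np

def average_positioning_boxes(boxes):
    """Average N initial positioning boxes vertex by vertex: each
    coordinate of the final box is the mean of the N corresponding
    coordinates (the combination indicated as allowable for claim 6)."""
    boxes = np.asarray(boxes, dtype=np.float64)  # shape (N, 4): (x1, y1, x2, y2)
    return tuple(boxes.mean(axis=0))             # element-wise mean over the N boxes
```

For example, averaging (0, 0, 10, 10) and (2, 2, 12, 12) yields (1.0, 1.0, 11.0, 11.0).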
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Annie Pham, whose telephone number is (571) 272-1673. The examiner can normally be reached Mon-Fri, 9:00a – 5:00p. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANNIE H PHAM/
Examiner, Art Unit 2662

/Siamak Harandi/
Primary Examiner, Art Unit 2662

Prosecution Timeline

Feb 26, 2024: Application Filed
Feb 11, 2026: Non-Final Rejection — §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
