Prosecution Insights
Last updated: April 19, 2026
Application No. 18/759,992

METHODS AND APPARATUS FOR FRAME DENOISING

Non-Final OA — §103, §112
Filed
Jun 30, 2024
Examiner
HANSELL JR., RICHARD A
Art Unit
2486
Tech Center
2400 — Computer Networks
Assignee
GoPro Inc.
OA Round
1 (Non-Final)
76%
Grant Probability
Favorable
1-2
OA Rounds
2y 10m
To Grant
99%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
368 granted / 487 resolved
+17.6% vs TC avg
Strong +28% interview lift
+28.1%
Interview Lift
(resolved cases with vs. without interview)
Typical timeline
2y 10m
Avg Prosecution
45 currently pending
Career history
532
Total Applications
across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 487 resolved cases

Office Action

DETAILED ACTION

1. The communication is in response to the application received 06/30/2024, wherein claims 1-20 are pending and are examined as follows.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

3. The disclosure is objected to because of the following informalities: ¶0081, line 6 recites "…is sharper particular around the people". It is believed this should read "…is sharper particularly around the people". Appropriate correction is required.

Claim Rejections - 35 USC § 112

4. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 14-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 14, claim 14 recites the limitation "select second portions of the synthetic frames for inclusion in double composite frames" (emphasis added); however, the limitation "double composite frames" does not appear to be discussed in the specification. As such, it is not entirely clear what is meant by "double composite frames" as claimed. The specification, however, does refer to a "double-denoised frame", and for the purposes of examination, the examiner interprets the aforementioned limitation as such. The Examiner respectfully requests the Applicant to point out where in the specification support can be found for the limitation above. If no such support can be identified, Applicant is required to cancel the new matter in the reply to this Office Action.

Regarding claim 15, claim 15 depends on claim 14 above, and therefore includes all of its features. For this reason, claim 15 is also rejected under 35 U.S.C. 112(a).

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 8-13, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jin et al. (US 9,311,690 B2), in further view of Izadi et al. (US 2023/0119747 A1), hereinafter referred to as Jin and Izadi, respectively.
Regarding claim 1, given the broadest reasonable interpretation of the following limitations, Jin teaches and/or suggests "A method of denoising a video, comprising: estimating optical flow in the video [Refer to fig. 2 regarding motion field 202 generated by the optical flow of the disclosed video denoising algorithm. Also see element 308 in fig. 3 for computing optical flow in received image frames of video content]; generating synthetic frame corresponding to a frame of the video based on a neighboring frame of the frame and the optical flow [Although a 'synthetic frame' is not explicitly disclosed, warping can be construed as a means for generating a synthetic frame. See filed specification (e.g. ¶0027, ¶0036). As such, see col. 8 lines 39-48 (with reference to fig. 3) regarding warping neighboring image frames of a selected reference frame based on optical flow computations]; masking the synthetic frame generating a masked synthetic frame [Although the term 'mask' is not used, Jin describes comparing a 'threshold' value to computed distance(s) between similar motion patches in the neighboring frames and the image patch in the reference frame to facilitate subsequent denoising. See figs. 3-4. Also reference col. 6 lines 5-21]; generating a composite frame based on the masked synthetic frame and the frame [The image patch in the reference frame can be denoised based on averaging the matching patches from within the reference frame with those in the neighboring image frames (fig. 4). Said averaging generates a composite frame]; and encoding the composite frame into a denoised video." [Although Jin does not explicitly refer to encoding denoised image frames, doing so would be within the level of skill in the art. See for e.g. the work of Izadi below.]

Although Jin's method for denoising a sequence of video image frames is deemed relevant, it does not explicitly address encoding the denoised frames.
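For readers outside the art, the core operation the examiner maps onto Jin for claim 1 (forming a "synthetic frame" by warping a neighboring frame along estimated optical flow) can be illustrated in a few lines. The following is a hypothetical NumPy sketch, not the applicant's or Jin's actual implementation; the function name `warp_frame`, the flow layout (per-pixel `(dx, dy)` displacements), and nearest-neighbor sampling are all assumptions made for the example.

```python
import numpy as np

def warp_frame(neighbor: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp a neighboring frame toward the reference frame
    along a dense optical-flow field, using nearest-neighbor sampling.

    neighbor: (H, W) grayscale frame.
    flow:     (H, W, 2) displacement field; flow[..., 0] is the
              horizontal (x) component, flow[..., 1] the vertical (y).
    """
    h, w = neighbor.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, fetch the neighbor pixel the flow points to,
    # clamping coordinates at the frame border.
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return neighbor[src_y, src_x]
```

A real pipeline would use sub-pixel (e.g. bilinear) interpolation, but the gather-along-flow structure is the same.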
Even though this would be considered within the level of skill in the art in order to facilitate storing and/or transmitting said frames, the work of Izadi from the same or similar field of endeavor is brought in to teach/suggest the foregoing feature. [See for e.g. ¶0089 regarding an encoded denoised image either as a single image or as part of a sequence of images] Recognizing Izadi's teachings for encoding a denoised image(s), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the work of Jin for employing optical flow to denoise video (e.g. abstract), to add the teachings of Izadi as above for denoising image data before coding so that not only the efficiency and accuracy of coding can be improved but also the visual quality of the image data itself (e.g. ¶0022).

Regarding claim 2, Jin and Izadi teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Given the BRI of the following limitations, Jin further teaches and/or suggests "where masking the synthetic frame [See claim 1 above] comprises: calculating differences between pixel values of the synthetic frame and the frame [See col. 1 lines 53-65 with respect to performing the absolute value of pixel-wise differences between the reference frame and the previous and subsequent image frames (which have been warped) for computing a noise estimate (fig. 3)]; and generating a mask based on the differences." [Based on these determined differences, a threshold is determined (col. 6 lines 5-33 and fig. 4), which can be construed as a mask]

Regarding claim 3, Jin and Izadi teach and/or suggest all the limitations of claim 2, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "where generating the mask comprises comparing the differences to a difference threshold." [See for e.g. col. 6 lines 5-33 regarding comparing to a threshold. Also refer to fig. 4]

Regarding claim 4, Jin and Izadi teach and/or suggest all the limitations of claim 3, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "further comprising selecting the difference threshold based on a temporal distance between a first neighboring frame and the frame." [The threshold is used to compare the distance between an image patch in the reference frame and a similar patch in an image frame that is located in a temporal window around said reference frame (see 134 in fig. 1); hence, this is based on a temporal distance between said frames. See col. 6 lines 5-21]

Regarding claim 5, Jin and Izadi teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "where masking the synthetic frame comprises: calculating differences between a luminance component of pixels of the synthetic frame and the frame [Col. 1 lines 53-65 describe determining an absolute value of pixel-wise differences between image frames within a temporal window as per fig. 1]; and generating a mask based on the differences." [The foregoing determines a noise estimate for the neighboring frames, from which a median noise level of the reference frame can be determined. According to col. 6 lines 5-33, a threshold can be established]

Regarding claim 8, Jin and Izadi teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "where generating the synthetic frame comprises warping the neighboring frame based on the optical flow." [Fig. 3 describes warping one or more of the neighboring frames within a temporal window to position said frames with the reference frame based on optical flow computations]

Regarding claim 9, Jin and Izadi teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "further comprising: adding the masked synthetic frame to an accumulator; and adding a mask used in masking the synthetic frame to an inclusion counter." [Jin does not explicitly refer to an "accumulator" and an "inclusion counter" as claimed; however, according to ¶0184 of the filed specification, Jin's averaging scheme as discussed in col. 7 appears to suggest these features, where said accumulator and inclusion counter enable the composite (e.g. averaged) frame to be generated]

Regarding claim 10, Jin and Izadi teach and/or suggest all the limitations of claim 9, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "where generating the composite frame is based on dividing the accumulator by the inclusion counter." [Same as above in claim 9, where, by Jin's averaging scheme, an average of the matching patches can be determined. Averaging requires dividing by the total count]

Regarding claim 11, claim 11 is rejected under the same art and evidentiary limitations as determined for the method of claim 1. As to the claimed hardware and software, please refer to col. 7 lines 49-60, col. 11 lines 56-67, and col. 12 lines 1-32 of Jin for support. The limitation "select portions of the synthetic frames…" is understood as corresponding to "masking the synthetic frame" in claim 1.

Regarding claim 12, Jin and Izadi teach and/or suggest all the limitations of claim 11, and are analyzed as previously discussed with respect to that claim.
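The thresholded-difference mask of claims 2-3 and the accumulator / inclusion-counter bookkeeping of claims 9-10, as the examiner reads them onto Jin's patch-averaging scheme, can be made concrete with a minimal sketch. This is a hypothetical NumPy illustration of the claimed bookkeeping, not code from the application or from Jin; the function name and the default threshold are invented for the example.

```python
import numpy as np

def masked_composite(frame, synthetic_frames, threshold=10.0):
    """Selective averaging: a synthetic-frame pixel contributes only where
    its absolute difference from the reference frame is below `threshold`
    (the mask of claims 2-3). Included pixels are summed into an
    accumulator and each mask is added to a per-pixel inclusion counter
    (claim 9); the composite is the accumulator divided by the counter
    (claim 10)."""
    acc = frame.astype(np.float64).copy()            # reference frame always counts
    counter = np.ones(frame.shape, dtype=np.float64)
    for syn in synthetic_frames:
        syn = syn.astype(np.float64)
        mask = np.abs(syn - frame) < threshold       # claims 2-3: difference mask
        acc += np.where(mask, syn, 0.0)              # claim 9: accumulator
        counter += mask                              # claim 9: inclusion counter
    return acc / counter                             # claim 10: divide
```

Pixels whose synthetic values diverge too far (e.g. at motion boundaries where the flow fails) are simply excluded from the average rather than corrupting it.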
Jin further teaches and/or suggests "where the set of instructions further causes the processor to: determine a number of iterations, where generating the synthetic frames is based on the number of iterations." [Jin's denoising scheme (e.g. fig. 4) averages a number of matching patches, which are construed as the number of iterations]

Regarding claim 13, claim 13 is rejected under the same art and evidentiary limitations as determined for the method of claim 1. Please refer to Izadi.

Regarding claim 16, claim 16 is rejected under the same art and evidentiary limitations as determined for the method of claim 1. As to "performing a selective averaging of portions of the plurality of synthetic frames and the plurality of frames generating a plurality of composite frames", please refer to Jin's averaging scheme as noted in fig. 4 and as further described in col. 7. If one composite frame can be generated, then it is within the level of skill in the art to generate a plurality of composite frames. Lastly, regarding "compiling a denoised video based on the plurality of composite frames", please see the work of Izadi as shown in claim 1.

Regarding claim 20, Jin and Izadi teach and/or suggest all the limitations of claim 16, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "where performing the selective averaging of portions of the plurality of synthetic frames and the plurality of frames comprises: selecting a first synthetic frame of the plurality of synthetic frames that mimic a first frame of the plurality of frames [Col. 7 lines 15-40 describe averaging matching patches between frames in a temporal sequence. This is also shown in fig. 4 (410)]; calculating a difference between a first luminance component of the first synthetic frame and a second luminance component of the first frame [Noise estimates are computed based on an absolute value of pixel-wise differences between frames (e.g. col. 1 lines 53-65)]; generating a mask by comparing the difference with a threshold [Although the term 'mask' is not used, Jin describes comparing a 'threshold' value to computed distance(s) between similar motion patches in the neighboring frames and the image patch in the reference frame to facilitate subsequent denoising. See figs. 3-4. Also reference col. 6 lines 5-21]; applying the mask to the first synthetic frame generating a masked synthetic frame [Same as above with respect to the threshold]; adding the first synthetic frame and the first frame to an accumulator [Jin does not explicitly refer to an "accumulator" as claimed; however, according to ¶0184 of the filed specification, Jin's averaging scheme as discussed in col. 7 appears to suggest these features, where said accumulator enables a composite (e.g. averaged) frame to be generated]; adding the mask to an inclusion counter [Jin does not explicitly refer to the foregoing; however, according to ¶0184 of the filed specification, Jin's averaging scheme as discussed in col. 7 appears to suggest these features that would enable the composite (e.g. averaged) frame to be generated as claimed below]; and generating a composite frame based on the accumulator and the inclusion counter." [Same as above. By Jin's averaging scheme, an average of the matching patches can be determined, where averaging requires dividing by the total count]

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Jin, in view of Izadi, and in further view of Emmett et al. (US 2014/0363058 A1), hereinafter referred to as Emmett.
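Claims 5 and 20, discussed above, move the same difference mask from raw pixel values to a luminance component. A toy sketch of that variant follows; RGB input and BT.601 luma weights are assumptions made for illustration only (the claims recite "a luminance component" without specifying a colorimetry), and the function name is hypothetical.

```python
import numpy as np

# BT.601 luma weights -- an assumption for this example.
LUMA = np.array([0.299, 0.587, 0.114])

def luminance_mask(frame_rgb, synthetic_rgb, threshold=8.0):
    """Boolean mask that is True where the luminance (Y) of the synthetic
    frame stays within `threshold` of the reference frame's luminance,
    in the spirit of claims 5 and 20."""
    y_ref = frame_rgb.astype(np.float64) @ LUMA
    y_syn = synthetic_rgb.astype(np.float64) @ LUMA
    return np.abs(y_syn - y_ref) < threshold
```

Masking on luminance rather than per-channel values makes the comparison less sensitive to chroma noise.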
Regarding claim 6, Jin and Izadi teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Jin and Izadi, however, do not appear to address the features of claim 6. For this reason, the work of Emmett from the same or similar field of endeavor is brought in to teach and/or suggest "where masking the synthetic frame comprises: determining edges of the frame; and generating a mask based on the edges." [See for e.g. ¶0056-¶0058 regarding generating a mask based on the detected edges of an aligned image, where said aligned image can be construed to be a synthetic image (see for e.g. ¶0036 of the filed specification)] Although Emmett's teachings do not explicitly refer to denoising image/video data, unlike in Jin and Izadi, they are deemed relevant since they disclose various types of image processing, including the use of a mask (e.g. 509 in fig. 5). Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Emmett's image processing methods with the noise removal techniques of Jin and Izadi in order to uniquely identify an individual from biometric image data of an authentication device (e.g. abstract).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Jin, in view of Izadi, and in further view of Rego et al. (US 2025/0245796 A1), hereinafter referred to as Rego.

Regarding claim 7, Jin and Izadi teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Jin and Izadi, however, do not appear to address the features of claim 7. For this reason, the work of Rego from the same or similar field of endeavor is brought in to teach and/or suggest "where masking the synthetic frame comprises: estimating areas of occluded motion based on the optical flow; and generating a mask based on the areas of occluded motion." [See for e.g. ¶0096-¶0097 of Rego regarding occlusion mask determination, which can identify one or more occluded regions in synthetic view images] Recognizing the teachings of Rego, it would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jin and Izadi related to noise reduction, to add the generated synthetic ground truth datasets of Rego as above, to facilitate training image registration models (e.g. ¶0038).

Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Jin, in view of Izadi, and in further view of Douady-Pleven et al. (US 2020/0258200 A1), hereinafter referred to as Douady-Pleven.

Regarding claim 14, Jin and Izadi teach and/or suggest all the limitations of claim 11, and are analyzed as previously discussed with respect to that claim. However, Jin and Izadi do not appear to address the features of claim 14. Douady-Pleven, on the other hand, from the same or similar field of endeavor, is brought in to teach and/or suggest "where the set of instructions further causes the processor to: select second portions of the synthetic frames for inclusion in double composite frames; and generate the double composite frames based on the second portions of the synthetic frames and the composite frames." [The limitation "double composite frames" could not be found in the specification as recited. However, the specification does refer to a 'double-denoised frame'. Thus, the examiner interprets the aforementioned limitation as such. Accordingly, see for e.g. figs. 8-9 with respect to generating a second denoised image. Also please refer to e.g. ¶0123-¶0124 and ¶0153 for additional support] Recognizing the teachings of Douady-Pleven, it would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jin and Izadi related to noise reduction, to add the double non-local means denoising approach of Douady-Pleven as above, where the repeated application of a set of weights may substantially improve the overall quality of the denoising algorithm while having only a marginal impact on the overall complexity (e.g. ¶0026).

Regarding claim 15, Jin, Izadi, and Douady-Pleven teach and/or suggest all the limitations of claim 14, and are analyzed as previously discussed with respect to that claim. Although Jin does not address encoding, Izadi from the same or similar field of endeavor does teach and/or suggest this feature. [See for e.g. ¶0089 regarding an encoded denoised image either as a single image or as part of a sequence of images. Please note, this also finds support in Douady-Pleven below] The motivation for combining Jin and Izadi has been discussed in connection with claim 1, above. However, it appears Izadi's teachings do not refer to a double denoised frame. As such, Douady-Pleven from the same or similar field of endeavor is brought in to teach and/or suggest "where the set of instructions further causes the processor to encode the double composite frames into a denoised video." [In the context of the double denoised frames of Douady-Pleven (claim 14), see fig. 7 and ¶0158-¶0159, where a reduced noise image can be passed to an encoding operation] The motivation for combining Jin, Izadi, and Douady-Pleven has been discussed in connection with claim 14, above.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Jin, in view of Izadi, and in further view of Qiu (WO 2024/131035 A1), hereinafter referred to as Qiu.
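The "double composite frames" of claims 14-15, as interpreted above through Douady-Pleven's double-denoised frame, amount to running the compositing pass a second time with the first composite as the new reference. The sketch below is speculative: the function names and threshold are invented for illustration, and the inner pass is a plain selective average, not Douady-Pleven's non-local-means weighting.

```python
import numpy as np

def selective_average(ref, candidates, threshold):
    """One compositing pass: average `ref` with each candidate frame
    wherever the candidate stays within `threshold` of `ref`."""
    acc = ref.astype(np.float64).copy()
    counter = np.ones_like(acc)
    for c in candidates:
        c = c.astype(np.float64)
        mask = np.abs(c - ref) < threshold
        acc += np.where(mask, c, 0.0)
        counter += mask
    return acc / counter

def double_composite(frame, synthetic_frames, threshold=10.0):
    """Two-pass ('double composite') scheme in the spirit of claims 14-15:
    the first composite frame is itself re-composited against the
    synthetic frames a second time."""
    first = selective_average(frame, synthetic_frames, threshold)
    return selective_average(first, synthetic_frames, threshold)
```

The second pass re-weights the synthetic frames against an already partially denoised reference, which is the "repeated application" idea the rejection attributes to Douady-Pleven.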
Regarding claim 17, Jin and Izadi teach and/or suggest all the limitations of claim 16, and are analyzed as previously discussed with respect to that claim. However, both Jin and Izadi do not appear to address 'scaled frames of the video' as claimed. Qiu, on the other hand, from the same or similar field of endeavor, is brought in to teach and/or suggest "further comprising generating scaled frames of the video, where generating the plurality of optical flow files is based on the scaled frames of the video." [First and second video frames can be scaled to reduce the size of the frames, which improves the analysis speed during subsequent optical flow analysis. See 8th paragraph on pg. 7] Recognizing the teachings of Qiu, it would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jin and Izadi related to noise reduction, to add the video frame interpolation method of Qiu as above, which enables the accuracy of generating a composite frame on the basis of two consecutive video frames to be improved, thereby improving frame interpolation quality and the frame interpolation effect (e.g. abstract).

Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jin, in view of Izadi, and in further view of Ambasamudram et al. (WO 2018/116322 A1), hereinafter referred to as Ambasamudram.

Regarding claim 18, Jin and Izadi teach and/or suggest all the limitations of claim 16, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "where generating the plurality of synthetic frames comprises: warping a first frame of the plurality of frames generating a first synthetic frame based on a first optical flow file of the plurality of optical flow files [See fig. 4 with regards to warping a frame to generate a synthetic frame based on optical flow computations]; and warping the first synthetic frame generating a second synthetic frame based on a second optical flow file of the plurality of optical flow files." [Jin (and Izadi), however, do not address the foregoing feature. See Ambasamudram below for support] Since both Jin and Izadi do not address the foregoing, Ambasamudram from the same or similar field of endeavor is brought in to teach and/or suggest "and warping the first synthetic frame generating a second synthetic frame based on a second optical flow file of the plurality of optical flow files." [The above limitation is taken to be analogous to re-warping a frame. As such, see fig. 1 with respect to re-warping a frame using net displacement of an object to create re-warped frames. Said displacement is construed to be based on optical flow (e.g. ¶0056)] Recognizing the teachings of Ambasamudram, it would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jin and Izadi related to noise reduction, to add the image processing methods of Ambasamudram as above for automatically generating a pan shot from a video of a dynamic object (e.g. abstract).

Regarding claim 19, Jin, Izadi, and Ambasamudram teach and/or suggest all the limitations of claim 18, and are analyzed as previously discussed with respect to that claim. Jin further teaches and/or suggests "where generating the plurality of composite frames comprises generating a first composite frame based on the first synthetic frame and a second frame of the plurality of frames, the second frame temporally adjacent to the first frame." [The image patch in the reference frame can be denoised based on averaging the matching patches from within the reference frame with those in the neighboring image frames (fig. 4). Said averaging generates a composite frame]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see PTO-892 for additional references.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD A HANSELL JR., whose telephone number is (571) 270-0615. The examiner can normally be reached Mon-Fri, 10 am-7 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICHARD A HANSELL JR./
Primary Examiner, Art Unit 2486

Prosecution Timeline

Jun 30, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604042
LAYER INFORMATION SIGNALING-BASED IMAGE CODING DEVICE AND METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12604096
ADAPTIVE BORESCOPE INSPECTION
2y 5m to grant • Granted Apr 14, 2026
Patent 12587660
METHOD FOR DECODING IMAGE ON BASIS OF IMAGE INFORMATION INCLUDING OLS DPB PARAMETER INDEX, AND APPARATUS THEREFOR
2y 5m to grant • Granted Mar 24, 2026
Patent 12587667
SYSTEMS AND METHODS FOR SIGNALING TEXT DESCRIPTION INFORMATION IN VIDEO CODING
2y 5m to grant • Granted Mar 24, 2026
Patent 12579871
CAMERA DETECTION OF OBJECT MOVEMENT WITH CO-OCCURRENCE
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+28.1%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
