Prosecution Insights
Last updated: April 18, 2026
Application No. 18/751,027

Automatically enhance user provided image to gel well with express template

Final Rejection §103
Filed: Jun 21, 2024
Examiner: HE, WEIMING
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 2 (Final)
Grant Probability: 46% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 60%

Examiner Intelligence

Grants 46% of resolved cases.
Career Allow Rate: 46% (190 granted / 410 resolved; -15.7% vs TC avg)
Interview Lift: +13.8% for resolved cases with an interview (moderate, roughly +14%)
Avg Prosecution: 3y 4m typical timeline; 40 applications currently pending
Total Applications: 450 across all art units (career history)

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 410 resolved cases.
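The per-statute deltas above are mutually consistent: subtracting each delta from the examiner's rate implies the same Tech Center baseline for every statute. A quick check, assuming the dashboard computes these figures as simple differences and ratios (the tool's actual methodology is not disclosed):

```python
# Examiner rates per statute and their reported deltas vs the TC average.
examiner_rate = {"101": 7.4, "102": 12.4, "103": 59.2, "112": 15.0}
delta_vs_tc = {"101": -32.6, "102": -27.6, "103": 19.2, "112": -25.0}

# Implied Tech Center baseline per statute: rate minus delta.
implied_tc = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(implied_tc)  # every statute implies the same 40.0% baseline

# Career allow rate from the stated counts: 190 granted of 410 resolved.
print(f"{190 / 410:.1%}")  # 46.3%, shown as 46% on the dashboard
```

The uniform 40% baseline suggests the deltas were derived from a single Tech Center average rather than per-statute averages.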

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on 3/19/26 has been entered and made of record. Claims 1, 8, 10-12, 17 and 19-20 are amended. Claims 1-20 are pending.

Response to Arguments

Applicant’s arguments with respect to claims 1, 12 and 17 have been fully considered but they are moot because the arguments do not apply to the references being used in the current rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-7 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US 2023/0419574 A1) in view of Guleryuz et al. (US 2013/0265382 A1) and Cohen-Or et al. (Color Harmonization, SIGGRAPH '06: ACM SIGGRAPH 2006 Papers, July 2006).
As to Claim 1, Yamada teaches A method comprising: receiving, by a processing device, a digital image to be inserted into a digital template (Yamada discloses “The layout unit 2103 generates poster data by laying out the image obtained from the image analysis unit 212 and the text obtained from the text specification unit 202 on each template acquired from the template selection unit 2102” in [0168]); identifying, using a machine-learning model, first dominant colors of the digital image (Yamada discloses “In S905, the image analysis unit 212 performs an object recognition process and a main color extraction process on the image acquired in S904. Here, a known method can be used for the object recognition process. In the present embodiment, an object is recognized by a discriminator generated by deep learning” in [0097]).

Yamada does not directly teach harmonic color. The combination with Guleryuz further teaches the following limitations: determining, by the processing device, a harmonic match between the first dominant colors of the digital image and one or more second dominant colors of the digital template, the harmonic match including a first harmonic color from the first dominant colors that matches a second harmonic color from the one or more second dominant colors (Yamada discloses “In the creation process, by using a color scheme pattern with colors similar to the main color of the image input by the user, the overall sense of unity of the poster can be achieved” in [0124], see also [0157, 0174]. Guleryuz further discloses “At block 312, a visually pleasing background that matches and highlights the foreground may be selected, e.g., by choosing a background from a multiplicity of backgrounds with known features or by designing a background based on the characteristics of the foreground. Visually pleasing color combinations may include harmonized, matching colors as known to those of skill in art and graphic design” in [0031]; “Process 400 may next select a harmonizing color, hF, that highlights the computed dominant color, DF. A variety of templates may be used to select the harmonizing color, as known in the relevant art … Using the above mechanisms, process 410 thus designs a visually pleasing background b(x) 414 that is complimentary to the foreground image 404” in [0042].); generating, by the processing device, a modified digital image by applying the first harmonic color as a color effect to the digital image by shifting hue values of the digital image to fit the first harmonic color; and inserting, by the processing device, the modified digital image into the digital template (Yamada discloses “The layout unit 2103 generates poster data by laying out the image obtained from the image analysis unit 212 and the text obtained from the text specification unit 202 on each template acquired from the template selection unit 2102” in [0168]. Guleryuz further discloses “Consequently, process 400 may comprise designing backgrounds having a very low complexity color modulation function that modulates a base background picture's color values to match the foreground picture's color values… Process 400 may next select a harmonizing color, hF, that highlights the computed dominant color, DF. A variety of templates may be used to select the harmonizing color, as known in the relevant art… Once process 400 computes hF, the modulation function may manipulate the base background color values by ensuring that the background dominant color is the computed harmonizing color…” in [0042]. Here, Guleryuz teaches modifying one image by applying harmonic color to achieve a harmonic color match with another image.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Yamada with the teaching of Guleryuz so as to achieve a visually pleasing image by applying a harmonic color matching between a foreground image and a background image (Guleryuz, [0031]).

Yamada and Guleryuz do not explicitly teach a harmonic template. The combination with Cohen-Or further teaches the following limitations: determining a harmonic match using one or more harmonic templates that define groups of colors on a color wheel that are harmonically matched; and shifting hue values of the digital image to fit the first harmonic color (Cohen-Or discloses “Figure 2 illustrates the eight harmonic types defined over the hue channel of the HSV color wheel. Each type is a distribution of hue colors that defines a harmonic template: colors with hues that fall in the gray wedges of the template are defined as harmonic according to this template. We refer to these distributions as templates, since they define the radial relationships on the color wheel, rather than specific colors (meaning that any template may be rotated by an arbitrary angle)” under Section 3, Harmonic Schemes, at p. 625; “The harmonization process strives to preserve the original colors of the image by shifting them towards the nearest sector of the template” under Section 4, Color Harmonization, at p. 626; “we can recolor the image by shifting the hues so that they reside inside the harmonic template” under Section 4.1, The shifting of colors, at p. 627; see also Figs. 2 and 3 [images omitted].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Yamada and Guleryuz with the teaching of Cohen-Or so as to use the harmonic types defined over the hue channel of the HSV color wheel and shift hues to match the template sectors (Cohen-Or, p. 625).

As to Claim 2, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 1, wherein identifying the first dominant colors of the digital image includes: computing, for each region of at least one region of the digital image, a color histogram for the region; and assigning, using the machine-learning model, at least one dominant color for the region using the color histogram (Yamada discloses “In S905, the image analysis unit 212 performs an object recognition process and a main color extraction process on the image acquired in S904. Here, a known method can be used for the object recognition process. In the present embodiment, an object is recognized by a discriminator generated by deep learning” in [0097]; “The graph in FIG. 23 is a histogram that schematically represents the frequency of appearance of colors in an image… In the present embodiment, the histogram is generated by dividing the RGB color space into 16 gradations for each channel, but this is by way of example and not limitation” in [0098]; “For example, to extract a second main color, the image analysis unit 212 selects a color that has the second largest local maximum in the three-dimensional (R, G, B) space of the histogram in FIG. 23” in [0105].
Guleryuz also discloses “A separate foreground dominant color may then be calculated for each region, and this foreground dominant color may be used to modulate the background image colors in that region… In one embodiment, the dominant foreground color for region i, e.g., the dominant foreground color DF,i for pi, may be calculated as follows…” in [0042].)

As to Claim 4, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 2, wherein the color histogram comprises three luminance histograms indicating a brightness distribution of a red, green, or blue color channel, respectively, for each region (Yamada discloses “The graph in FIG. 23 is a histogram that schematically represents the frequency of appearance of colors in an image. In this graph, the horizontal axis represents the color of the pixel in three channels (R, G, B), and the vertical axis represents the number of pixels belonging to the range of ±8 before and after each of the R, G, and B channels” in [0098].)

As to Claim 5, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 1, wherein the harmonic match is determined by identifying the first harmonic color among the first dominant colors and the second harmonic color among the one or more second dominant colors nearest a sector boundary of a harmonic template that includes a radial distribution of colors within a hue-saturation-value (HSV) color wheel that are aesthetically balanced, the harmonic template including at least one harmonic sector and at least two sector boundaries (Guleryuz discloses “Process 400 may next select a harmonizing color, hF, that highlights the computed dominant color, DF. A variety of templates may be used to select the harmonizing color, as known in the relevant art … Using the above mechanisms, process 410 thus designs a visually pleasing background b(x) 414 that is complimentary to the foreground image 404… When these hints are available, the rendering end may determine the virtual background colors in a way that ensures there is no substantial deviation from the sent color hints, e.g., by selecting a virtual background color which is analogous or split-analogous in color scheme to the actual background, e.g., by being adjacent on a color wheel. This may help in further avoiding artifacts” in [0042]. Cohen-Or also discloses an illustration at p. 626 [image omitted]; see also Figs. 2-4.)

As to Claim 6, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 5, wherein determining the harmonic match between the first dominant colors and the one or more second dominant colors includes: determining, for each first dominant color in the digital image and for each harmonic template, a respective first minimum distance between the first dominant color and the sector boundaries of the harmonic template; determining, for each second dominant color in the digital template and for each harmonic template, a respective second minimum distance between the second dominant color and the sector boundaries of the harmonic template; determining the second harmonic color among the one or more second dominant colors of the digital template as a second dominant color closest to any sector boundary in the harmonic template, a first harmonic template being a template in which the second harmonic color is closest to a sector boundary; and determining the first harmonic color among the first dominant colors of the digital image as a first dominant color closest to any sector boundary in the first harmonic template (Guleryuz discloses “Process 400 may next select a harmonizing color, hF, that highlights the computed dominant color, DF. A variety of templates may be used to select the harmonizing color, as known in the relevant art … Using the above mechanisms, process 410 thus designs a visually pleasing background b(x) 414 that is complimentary to the foreground image 404” in [0042]; “At block 312, a visually pleasing background that matches and highlights the foreground may be selected, e.g., by choosing a background from a multiplicity of backgrounds with known features or by designing a background based on the characteristics of the foreground. Visually pleasing color combinations may include harmonized, matching colors as known to those of skill in art and graphic design” in [0031]. Cohen-Or, Figs. 2-4.)

As to Claim 7, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 1, wherein applying the first harmonic color as the color effect includes blending the first harmonic color on the digital image to generate a blended image (Yamada discloses “More specifically, in the present embodiment, a plurality of variations of poster candidates can be created according to the target impression by selecting and combining elements that make up the posters, such as skeletons, color scheme patterns, and fonts, based on the target impression. In the creation process, by using a color scheme pattern with colors similar to the main color of the image input by the user, the overall sense of unity of the poster can be achieved” in [0124]; “In this process, the combination generation unit 1701 uses the color scheme pattern table such that only color scheme patterns that match the main color extracted by the image analysis unit 212 are used in generating the combinations… As a result, the colors assigned to the poster are the same as the colors contained in the image” in [0157].
Guleryuz further discloses “At block 312, a visually pleasing background that matches and highlights the foreground may be selected, e.g., by choosing a background from a multiplicity of backgrounds with known features or by designing a background based on the characteristics of the foreground. Visually pleasing color combinations may include harmonized, matching colors as known to those of skill in art and graphic design” in [0031].)

As to Claim 9, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 1, wherein: the digital image includes multiple digital images; and generating the modified digital image includes generating multiple modified digital images by applying a respective first harmonic color as a respective color effect to each digital image of the multiple digital images (Yamada discloses “a template including information about shapes and positions of images, text, and graphics that make up the poster, and automatically arranging the images, text, and graphics according to the template” in [0002]; “In this process, the combination generation unit 1701 uses the color scheme pattern table such that only color scheme patterns that match the main color extracted by the image analysis unit 212 are used in generating the combinations… As a result, the colors assigned to the poster are the same as the colors contained in the image” in [0157]. Guleryuz also discloses “Process 400 may next select a harmonizing color, hF, that highlights the computed dominant color, DF. A variety of templates may be used to select the harmonizing color, as known in the relevant art … Using the above mechanisms, process 410 thus designs a visually pleasing background b(x) 414 that is complimentary to the foreground image 404… In another embodiment, the background image pixels may be divided into K regions, pi, i = 1, ..., K. A separate foreground dominant color may then be calculated for each region, and this foreground dominant color may be used to modulate the background image colors in that region. Regions may be individual background pixels or may be groups of pixels forming irregular or regular shapes, e.g., squares, triangles, ovals, etc. Regions may also be determined by applying object-based region decomposition algorithms on base background images. In one embodiment, the dominant foreground color for region i, e.g., the dominant foreground color DF,i for pi, may be calculated as follows” in [0042].)

As to Claim 10, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 1, wherein the method further includes: in response to receiving visual edits to the digital template, determining an updated harmonic match between the first dominant colors of the digital image and one or more updated second dominant colors of the digital template; and generating an updated modified digital image in real-time as the digital template is modified by applying an updated first harmonic color from the updated harmonic match as the color effect to the digital image (Yamada discloses “In S1802, the template selection unit 2102 selects one or more templates that use a color similar to the main color acquired from the image analysis unit 212 from the templates acquired from the template acquisition unit 2101. More specifically, the color difference between the main color and a color set in a graphic object in each template is calculated, and only templates including a graphic object having a color difference smaller than or equal to a threshold value are selected” in [0171]; “Although not shown in the figure, the poster creation application may have a function of editing the created poster after the creation result is displayed on the poster display unit 205 such that the layout, the colors, and, the shapes, etc. of the images, the texts (characters), and the graphics (figures, illustrations, photographs, etc.) are edited according to a user operation so as to achieve a design desired by the user.” in [0066]. Guleryuz further discloses “Process 400 may next select a harmonizing color, hF, that highlights the computed dominant color, DF. A variety of templates may be used to select the harmonizing color, as known in the relevant art … Using the above mechanisms, process 410 thus designs a visually pleasing background b(x) 414 that is complimentary to the foreground image 404” in [0042], see also [0031, 0046]. Cohen-Or, Figs. 2-4.)

As to Claim 11, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 1, wherein: the modified digital image is automatically generated upon insertion into the digital template (Yamada discloses “FIG. 2 shows a software block diagram related, among various units, the poster creation unit 210 that provides the automatic poster creation function” in [0047]; “In this process, the combination generation unit 1701 uses the color scheme pattern table such that only color scheme patterns that match the main color extracted by the image analysis unit 212 are used in generating the combinations… As a result, the colors assigned to the poster are the same as the colors contained in the image” in [0157].); and applying the first harmonic color as the color effect further includes adjusting the hue values of the digital image based on spatial coherence among colors of neighboring pixels (Cohen-Or discloses “Given a color image, our method finds the best harmonic scheme for the image colors. It then allows a graceful shifting of hue values so as to fit the harmonic scheme while considering spatial coherence among colors of neighboring pixels using an optimization technique” in the Abstract; see also E2(V) at p. 627.)

Claim 12 recites similar limitations as claim 1 besides image segmentation (Guleryuz, [0026, 0042]), but in a system form.
Therefore, the same rationale used for claim 1 is applied. Claim 13 is rejected based upon similar rationale as Claim 5. Claim 14 is rejected based upon similar rationale as Claim 6.

As to Claim 15, Yamada in view of Guleryuz and Cohen-Or teaches The system of claim 12, wherein the processing device is configured to perform image segmentation on the digital image by: applying a median filter to the digital image to create a gray image; and segmenting, using the machine-learning model, the gray image into the multiple segmented regions by assigning a segment label to each pixel in the gray image, wherein pixels with a same segment label share one or more visual characteristics (Guleryuz discloses applying moving average filters in [0038]; “In some embodiments, gray-scale backgrounds formed with the aid of sampled directional textures that accomplish crosshatching patterns may be used, e.g., by reusing the directional texture LUT to realize the textures and blending… In another embodiment, the background image pixels may be divided into K regions, pi, i = 1, ..., K. A separate foreground dominant color may then be calculated for each region, and this foreground dominant color may be used to modulate the background image colors in that region… Regions may also be determined by applying object-based region decomposition algorithms on base background images” in [0042].)

As to Claim 16, Yamada in view of Guleryuz and Cohen-Or teaches The system of claim 15, wherein: the median filter includes parameters to remove noise from the digital image; and the machine-learning model comprises a convolutional neural network (Yamada discloses “for example, a deep learning method using the convolution neural network (CNN) or a machine learning method using a decision tree, or the like. In the present embodiment, the impression learning unit performs supervised deep learning using the CNN with the poster image as input and the four factors as output” in [0087].
Guleryuz discloses moving average filters to remove segmentation error or a noisy segmentation boundary in [0034, 0038].)

Claim 17 recites similar limitations as claims 1 & 10 but in a computer-readable-medium form. Therefore, the same rationale used for claims 1 & 10 is applied. Claim 18 is rejected based upon similar rationale as Claim 9. Claim 19 is rejected based upon similar rationale as Claim 10. Claim 20 is rejected based upon similar rationale as Claim 11.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yamada in view of Guleryuz, Cohen-Or and Schaumberg (US 2025/0086994 A1).

As to Claim 3, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 2, wherein the method further comprises assigning, using k-nearest neighbors (KNN) classification by the machine-learning model, the region to the at least one dominant color by classifying each pixel of multiple pixels in the region to a particular color based on a most common classification of k nearest neighbor pixels, k being a positive integer value (Schaumberg discloses “The system's KNN uses this instant lookup to quickly infer 'suspect' pixels to pen, tissue, background, etc. types in imask” in [0057]; “This evidence of unsaturated pen color is important for 'accelerated scalable K-nearest-neighbor (KNN)' search, because anywhere in the image that identical unsaturated pen pixels are found, the KNN algorithm will then infer those pixels as pen” in [0096].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Yamada, Guleryuz and Cohen-Or with the teaching of Schaumberg so as to use KNN to quickly infer suspect pixels as background (the white/clear/empty part of a slide), tissue, pen, or marker (Schaumberg, [0096]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yamada in view of Guleryuz and Cohen-Or, further in view of Xiang et al. (CN 118134816 A) and Ivanchenko et al. (US 2014/0187223 A1).

As to Claim 8, Yamada in view of Guleryuz and Cohen-Or teaches The method of claim 7, wherein applying the first harmonic color as the color effect further includes: computing image gradients to detect edges in the blended image; and performing histogram stretching on pixels corresponding to the edges in the blended image to generate the modified digital image, wherein the histogram stretching sharpens the modified digital image (Guleryuz discloses “The backgrounds may include color gradients and/or color palates suitable to color-match the foregrounds” in [0046]. Xiang further discloses “Histogram stretching or linear stretching can enhance edges and textures in underwater images… By applying an edge detection algorithm, an edge image of an underwater image is obtained. Then, the average gradient value in the edge image is calculated. The larger this value is, the higher the edge sharpness and the better the clarity of the underwater image” in [0044].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Yamada and Guleryuz with the teaching of Xiang so as to use histogram stretching to enhance edges and textures in the image (Xiang, [0044]).

The combination with Ivanchenko further teaches the following limitations: wherein the edges are identified as local intensity changes above a threshold value defined by sets of connected pixels forming boundaries between disjoint intensity regions (Ivanchenko discloses “A set of gradients of the image segment can be analyzed, where the gradient represents an amount of change in color value between pixels 606 of the segment. In this example, there would be a set of large gradient values near the transition between the regions… In at least some embodiments, a gradient threshold can be set to determine how much change is necessary to designate the segment as a text candidate” in [0028]; “The image intensity gradients of a plurality of regions in the image are analyzed to identify a set of edge locations 804. Taking an edge to be a change in intensity taking place over a number of pixels, the edge detection algorithm can determine the edge by calculating a derivative of this intensity change, for example, and selecting regions where the calculated value meets or exceeds an edge selection threshold. Pixel values adjacent, or within a determined distance from, each of the set of edge locations are analyzed 806” in [0034].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Yamada, Guleryuz and Xiang with the teaching of Ivanchenko so as to analyze a set of gradients of the image segment to represent an amount of color change at the edge region (Ivanchenko, [0028]).

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
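The hue-shifting technique for which Cohen-Or is cited throughout this rejection (claims 1, 5, 6 and 11) can be sketched in miniature. This is an illustrative toy under stated assumptions, not the applicant's claimed method or Cohen-Or's actual algorithm: the 18-degree sector width and the template rotation are made-up values, and hues outside a sector are snapped to the nearest sector edge rather than shifted toward it with the falloff Cohen-Or describes.

```python
def circular_dist(a, b):
    """Shortest angular distance between two hues on a 360-degree wheel."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def shift_hue(hue, sectors):
    """Move a hue into the nearest sector of a harmonic template.

    `sectors` lists (center, width) pairs in degrees. A hue already inside
    a sector is left alone; otherwise it snaps to the nearest sector edge
    (a simplification of Cohen-Or's gradual shift toward the sector).
    """
    best, best_d = hue, float("inf")
    for center, width in sectors:
        if circular_dist(hue, center) <= width / 2:
            return hue  # already harmonic under this template
        for edge in ((center - width / 2) % 360, (center + width / 2) % 360):
            d = circular_dist(hue, edge)
            if d < best_d:
                best, best_d = edge, d
    return best

# An "I-type" style template: two opposing narrow sectors. The width and
# rotation here are assumed values for illustration only.
i_type = [(30, 18), (210, 18)]
print(shift_hue(25, i_type))  # inside the 21-39 degree sector: unchanged
print(shift_hue(60, i_type))  # snapped to the nearest sector edge
```

Because any template may be rotated by an arbitrary angle, a full implementation would also search over rotations to minimize the total shift, which is the optimization the reference performs.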
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571) 270-1221. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Weiming He/
Primary Examiner, Art Unit 2611
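The Yamada main-color extraction quoted against claims 2 and 4 (an RGB histogram with 16 gradations per channel, with the largest local maxima taken as main colors) might look roughly like the following sketch. The binning, the bin-center readout, and the toy pixel data are all assumptions; the reference's actual local-maximum search over the 3-D histogram is not reproduced.

```python
from collections import Counter

def dominant_colors(pixels, levels=16, top=2):
    """Return the `top` most frequent quantized RGB colors.

    Each 8-bit channel is quantized to `levels` gradations (Yamada's
    histogram uses 16 per channel); the most populated bins, read back
    as bin-center colors, approximate the image's main colors.
    """
    step = 256 // levels
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    return [tuple(c * step + step // 2 for c in bin_)
            for bin_, _ in counts.most_common(top)]

# Toy data: a mostly red image with a smaller blue cluster and gray noise.
pixels = [(250, 10, 10)] * 60 + [(20, 20, 240)] * 30 + [(128, 128, 128)] * 10
print(dominant_colors(pixels))  # red-ish bin first, then the blue-ish bin
```

Taking the second-largest bin corresponds to Yamada's "second main color" in [0105], though the reference selects the second-largest local maximum rather than the second-largest count.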

Prosecution Timeline

Jun 21, 2024: Application Filed
Feb 03, 2026: Non-Final Rejection — §103
Feb 24, 2026: Interview Requested
Mar 04, 2026: Applicant Interview (Telephonic)
Mar 04, 2026: Examiner Interview Summary
Mar 19, 2026: Response Filed
Apr 05, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567135: MULTIMEDIA PLAYBACK MONITORING SYSTEM AND METHOD, AND ELECTRONIC APPARATUS (2y 5m to grant; granted Mar 03, 2026)
Patent 12561876: System and method for an audio-visual avatar creation (2y 5m to grant; granted Feb 24, 2026)
Patent 12514672: System, Method And Software Program For Aiding In Positioning Of Objects In A Surgical Environment (2y 5m to grant; granted Jan 06, 2026)
Patent 12494003: AUTOMATIC LAYER FLATTENING WITH REAL-TIME VISUAL DEPICTION (2y 5m to grant; granted Dec 09, 2025)
Patent 12468949: SYSTEMS AND METHODS FOR FEW-SHOT TRANSFER LEARNING (2y 5m to grant; granted Nov 11, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 46%
With Interview: 60% (+13.8%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
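The headline projections appear to compose by simple addition, assuming the tool adds the interview lift to the base rate in percentage points (its actual model is not disclosed):

```python
base_grant = 46.0       # career allow rate, percent
interview_lift = 13.8   # reported lift with an examiner interview, points

with_interview = base_grant + interview_lift
print(round(with_interview))  # matches the dashboard's "With Interview" 60%
```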
