Prosecution Insights
Last updated: April 19, 2026
Application No. 18/463,460

DEVICE AND COMPUTER IMPLEMENTED METHOD FOR DETERMINING A CLASS OF AN ELEMENT OF AN IMAGE IN PARTICULAR FOR OPERATING A TECHNICAL SYSTEM

Status: Final Rejection (§103)
Filed: Sep 08, 2023
Examiner: YANG, JIANXUN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (472 granted / 635 resolved), +12.3% vs TC avg (above average)
Interview Lift: +18.6% in resolved cases with interview (strong)
Typical Timeline: 2y 9m avg prosecution; 45 applications currently pending
Career History: 680 total applications across all art units

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 635 resolved cases
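As a quick consistency check (a minimal sketch; the dashboard does not state the baseline explicitly), subtracting each listed delta from its statute rate recovers the implied Tech Center average, and all four statutes point to the same 40.0% baseline:

```python
# Per-statute rates and deltas as shown above; subtracting the delta
# from the rate recovers the implied Tech Center average estimate.
rates = {"§101": 3.8, "§103": 56.1, "§102": 16.7, "§112": 17.1}
deltas = {"§101": -36.2, "§103": +16.1, "§102": -23.3, "§112": -22.9}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
# every statute implies the same 40.0% baseline
```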

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-3, 5-8 and 10-15 are pending. Claims 4 and 9 are canceled.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1-3, 5-8 and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Yan et al. (Learning Semantic Context, 2021) in view of Tatsumi et al. (US 2022/0383616) and further in view of Konrardy et al. (US 11,119,477).

Regarding claims 1 and 10, Yan teaches a computer-implemented method for a technical system, the method comprising the following steps: applying a plurality of masks to an image, the plurality of masks masking different regions of the image than one another, wherein the application of the plurality of masks generates a plurality of respective masked images (Yan, Figs. 3-4; “generate multi-scale striped masks to remove a part of regions from the normal samples”, [abstract]; “adopt the multi-scale striped masks to indicate the removed regions of the input images”, [Fig. 2, caption]; “various training examples are generated to obtain the rich semantic context”, [abstract]; “In testing, we adopt multiple masks with different locations and scales to remove different parts of regions from the input images, and merge the multiple outputs”, p3113:c1; applying multiple masks to an image to create incomplete images); and, for each of the plurality of masked images respectively: (a) reconstructing a respective masked-out region of the respective masked image based on a context provided by an unmasked region of the respective masked image, thereby generating a respective reconstructed image (Yan, Figs. 1(c) and 2; “train a generative adversarial network to reconstruct the unseen regions”, [abstract]; “learn surrounding semantic features to complete this region”, p3111:c1; “Our method in (c) is able to recover the anomaly region with normal patterns by exploring the semantic context in surroundings”, [Fig. 1, caption]; “use the trained generator to generate multiple complete images”, p3112:c1; reconstructing the masked regions using a GAN based on the unmasked context).

Yan does not expressly disclose, but Tatsumi teaches: (b) performing a classification on the respective reconstructed image that assigns to each pixel of the respective reconstructed image a respective one of a plurality of classifications, each of the classifications corresponding to a respective one of a plurality of object types, wherein, for each of the classified pixels, the classification of the respective pixel identifies the respective pixel as being part of a depiction of an object of the object type to which its respective classification corresponds (Tatsumi, Fig. 4; “the inference unit 103 performs a semantic segmentation task of classifying each pixel on each masked image into three classes of “fish class”, “background class”, and “dog class””, [0050]; “the inference unit 103 that performs inference using a learned model by machine learning for each of the plurality of masked images to acquire an inference result regarding classification of the image for each of the plurality of masked images”, [0063]; performing semantic classification on images to assign object types to pixels).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the pixel-wise classification of Tatsumi to the high-quality reconstructed images of Yan in order to identify the objects therein, rather than just detecting anomalies, or to improve Tatsumi's classification by first reconstructing the occluded parts as taught by Yan.

The combination of Yan and Tatsumi further teaches: generating a classification map comprising a plurality of pixel positions, wherein each of the reconstructed images includes a respective pixel for each of the pixel positions of the classification map, and wherein the classification map assigns each of the pixel positions of the classification map a respective one of the plurality of classifications based on an aggregate of all of the classifications of the pixels of the reconstructed images that correspond to the respective pixel position of the classification map (Yan, Fig. 4; “merge the multiple outputs ... to compute the final error map”, p3113:c1; “adopt the maximum value at each position”, or “mean value”, eqs. (7) and (10), p3113:c1-c2; aggregating pixel values (errors) from multiple reconstruction passes; Tatsumi, Fig. 5; “when all the synthesis target masks are superimposed, the ratio of the number of superimpositions of the unprocessed portions (non-mask portions) to the total number is obtained to calculate a basis rate for each region. Then, the basis map is generated by visualizing the obtained basis rate of each region”, [0057]; it would be obvious to aggregate the pixel-wise classifications (from Tatsumi applied to Yan's output) using the aggregation/merging strategies taught by Yan (mean/max) or Tatsumi (superimposition) to generate a robust final classification map).

The combination of Yan and Tatsumi does not expressly disclose, but Konrardy teaches: controlling the technical system to perform an operation that is selected based on a characterization of an environment surrounding the technical system as including one or more objects, the characterization being based on the classification map (Konrardy, “corrective actions to mitigate the impact of such anomalies may be taken. Corrective actions may include maneuvering the vehicle in the area of the anomaly or rerouting the vehicle around the area of the anomaly”, [abstract]; “controlling the vehicle to perform one or more of the following control actions to avoid the anomaly: reducing speed, stopping, turning, swerving”, c4:45-50; “The machine learning techniques may include training a model using data having known characteristics relating to a plurality of anomalous conditions within various operating environments”, c46:1-5; identifying an anomaly/object; controlling a vehicle (technical system) based on characterized objects/anomalies in the environment).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the autonomous vehicle control teachings of Konrardy (e.g., taking corrective actions like slowing down or swerving based on detected anomalies) into the image processing system formed by Yan and Tatsumi in order to utilize the robust, high-fidelity classification map generated by the combination, which accurately reconstructs occluded regions and classifies objects, to physically control a technical system (such as an autonomous vehicle) to safely navigate around identified obstacles or hazards in the environment.

Regarding claim 2, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method according to claim 1, wherein each of the pixel positions of the classification map is assigned the respective one of the plurality of classifications based on which classification among the classifications of the pixels of the reconstructed images that correspond to the respective pixel position occurs more frequently than at least one other of the classifications of the corresponding pixels (Tatsumi, Fig. 5; “the ratio of the number of superimpositions of the unprocessed portions (non-mask portions) to the total number is obtained to calculate a basis rate for each region. Then, the basis map is generated by visualizing the obtained basis rate of each region”, [0057]; “basis rate in this region 503b is calculated as 2/2=100%”, [0059]; calculating a frequency ("basis rate" or "ratio") of a classification occurring across the multiple masked images; assigning the classification based on this rate (e.g., selecting the class with 100% frequency over one with 0%) teaches assigning based on which occurs more frequently; Yan, “merge the multiple outputs ... to compute the final error map”, p3113:c1; “adopt the maximum value at each position”, or “mean value”, eqs. (7) and (10), p3113:c1-c2; aggregating multiple outputs to determine a final value/assignment).

Regarding claim 3, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim.
The combination further teaches the method according to claim 1, wherein each of the pixel positions of the classification map is assigned the respective one of the plurality of classifications based on which classification among the classifications of the pixels of the reconstructed images that correspond to the respective pixel position occurs most frequently among all of the classifications of the corresponding pixels (Tatsumi, Fig. 5; “the ratio of the number of superimpositions of the unprocessed portions (non-mask portions) to the total number is obtained to calculate a basis rate for each region. Then, the basis map is generated by visualizing the obtained basis rate of each region”, [0057]; “the basis coefficient in this region 603b is calculated as (1×0.9+1×0.8)/2=85%”, [0079]; calculating the frequency/rate of occurrence; assigning the classification associated with the highest calculated rate (e.g., 100% or 85%) constitutes assigning the one that occurs "most frequently").

Regarding claim 5, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method according to claim 1, further comprising determining an indication of whether the classifications of pixels of the plurality of reconstructed images that correspond to a same pixel position of the classification map assign the same object type or different object types to the respective pixel position (Tatsumi, “In the regions 503c and 503d, one processed portion and the other unprocessed portion of the masks 501 and 502 are superimposed, and the basis rate in the regions 503c and 503d is calculated as 1/2=50%”, [0059]; “For each masked image in which the class extracted by the inference result extraction unit 104 matches the target class”, [0064]; determining if the classification in each masked image matches the target (is the same) or does not (is different); the calculated ratio (e.g., 50% vs 100%) is an explicit indication of whether the object types assigned across the images were the same (100%) or mixed/different (50%)).

Regarding claim 6, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method according to claim 5, further comprising generating a consistency map that, for each of the pixel positions of the classification map, includes the indication of whether the classifications of the pixels of the reconstructed images that correspond to the respective pixel position assign the same object type or different object types (Tatsumi, Fig. 5; “a basis generation unit configured to generate a basis map visualizing a determination basis”, [0007]; “basis map is generated by visualizing the obtained basis rate of each region”, [0057]; the "basis map" visualizes the "basis rate" (the indication derived in claim 5), which indicates the consistency (same or different types) of the classification across the masks).

Regarding claim 7, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method according to claim 1, further comprising outputting an alarm and/or controlling the technical system in response to determining that at least one classification of one or more of the reconstructed images differs from a classification of another of the reconstructed images for a corresponding pixel position (Yan, “infer the abnormal samples based on the error maps”, p3116:c2; “obtain an error map by computing the difference between the reconstructed image and the input image”, [abstract]; detecting anomalies based on differences (inconsistency); Tatsumi, “the basis rate in the regions 503c and 503d is calculated as 1/2=50%”, [0059]; detecting inconsistency between classifications; Konrardy, “communicating an alert to other vehicles”, c48:10-15; “controlling the vehicle to perform one or more of the following control actions to avoid the anomaly: reducing speed, stopping, turning, swerving”, c4:45-50; “The alert may include an indication of the type of the anomaly”, c4:55-60; outputting alarms or controlling the system when an anomaly (which corresponds to the detected difference/inconsistency) is determined).

Regarding claim 8, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method according to claim 1, further comprising: determining each of the plurality of masks to indicate a group of masked elements representing a region of the image, wherein the region matches a predetermined patch in size and shape or is larger than a patch that has predetermined dimensions in at least one dimension (Yan, Fig. 3; “generate multi-scale striped masks to remove a part of regions”, [abstract]; “changing the width of the white strips, we can obtain the masks in different scales ... stripes in vertical and horizontal directions”, p3112:c2; the masks comprise stripes; a stripe is a "group of masked elements" representing a region; the stripe has a predetermined "width" (scale) and extends across the image, making it larger than a "patch" of predetermined dimensions in at least one dimension (length)).

Regarding claim 11, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim.
The combination further teaches the device according to claim 10, wherein the technical system includes an at least partially autonomous computer-controlled machine, the at least partially autonomous computer-controlled machine including a robot, or a vehicle, or a domestic appliance, or a power tool, or a manufacturing machine, or a personal assistant, or an access control system (Konrardy, “Methods and systems for autonomous and semi-autonomous vehicle control relating to anomalies are disclosed. Anomalous conditions with a vehicle operating environment, such as ice patches or flooded roads, may be identified and categorized using autonomous vehicle operating data, and corrective actions to mitigate the impact of such anomalies may be taken. Corrective actions may include maneuvering the vehicle in the area of the anomaly or rerouting the vehicle around the area of the anomaly”, [abstract]; Yan's anomaly detection method may be applied to an autonomous vehicle to identify road anomalies such as ice patches or flooded roads).

Regarding claim 12, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method according to claim 1, further comprising: determining the plurality of masks to indicate several groups of masked elements representing several different regions of the image, wherein the different regions individually match a predetermined patch in size and shape or are larger than a patch having predetermined dimensions in at least one dimension (Yan, Fig. 3; “generate multi-scale striped masks to remove a part of regions”, [abstract]; “changing the width of the white strips, we can obtain the masks in different scales”, p3112:c2; “shape of the removed regions should have multiple directions ... setting the stripes in vertical and horizontal directions”, p3112:c1-c2; generating masks consisting of stripes (groups of masked elements); these stripes have specific, predetermined widths ("scales") and directions; a stripe is inherently larger than a small patch in at least one dimension (length), satisfying the limitation of being larger than a patch with predetermined dimensions in at least one dimension).

Regarding claim 13, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method of claim 1, further comprising: generating a consistency map that, for each of the pixel positions of the classification map, provides a quantification of a level of consistency between the classifications of the pixels of the reconstructed images that correspond to the respective pixel position (Tatsumi, Fig. 5; “when all the synthesis target masks are superimposed, the ratio of the number of superimpositions of the unprocessed portions (non-mask portions) to the total number is obtained to calculate a basis rate for each region. Then, the basis map is generated by visualizing the obtained basis rate of each region”, [0057]; “the basis coefficient in this region 603b is calculated as (1×0.9+1×0.8)/2=85%”, [0079]; generating a "basis map" which quantifies how often (or with what reliability score) the classification was consistent across the multiple masked passes; the "basis rate" or "basis coefficient" serves as the quantification of the level of consistency); determining a level of image consistency between the reconstructed images based on the consistency map (Tatsumi, “the basis rate in the regions 503c and 503d is calculated as 1/2=50%”, [0059]; the values in the basis map (consistency map) directly indicate the level of consistency; a low basis rate (e.g., 0% or 50%) indicates a level of inconsistency between the results derived from the different masked images; Yan also teaches computing an "anomaly score" based on error maps, which is a measure of inconsistency: “the anomaly score is computed based on these error maps and we take the score to perform anomaly detection”, p3112:c1); and determining whether to enter a safe mode based on the determined level of image consistency (Konrardy, “determining a corrective action relating to the anomaly based upon the type and severity of the anomaly automatically controlling the vehicle to perform one or more of the following control actions to avoid the anomaly: reducing speed, stopping, turning, swerving, and/or rerouting the vehicle”, c4:40-50; “usage restrictions may be enacted to limit operation of the vehicle 108 (block 820), such as disabling autonomous operation features associated with the malfunctioning component”, c50:35-40; “If the vehicle operator is determined not to be capable of resuming operation, the controller 204 may cause the vehicle to stop or take other appropriate action”, c28:45-50; upon detecting an issue (which in the combined system is the inconsistency/anomaly), the system enters a restricted state, disables features, or stops, which constitutes entering a "safe mode").

Regarding claim 14, the combination of Yan, Tatsumi, and Konrardy teaches its respective base claim. The combination further teaches the method of claim 1, further comprising: generating a consistency map that, for each of the pixel positions of the classification map, provides a quantification of a level of consistency between the classifications of the pixels of the reconstructed images that correspond to the respective pixel position (Tatsumi, Fig. 5; “ratio of the number of superimpositions ... basis map is generated by visualizing the obtained basis rate”, [0057]; as established for claim 13, Tatsumi teaches generating a map that quantifies consistency); determining a level of image consistency between the reconstructed images based on the consistency map (Tatsumi, “the basis rate in the regions 503c and 503d is calculated as 1/2=50%”, [0059]; basis map values indicate the level of consistency/inconsistency); and determining whether to output an alarm based on the determined level of image consistency.
(Konrardy, “presenting a notification to a vehicle operator”, c4:60-65; “communicating an alert to other vehicles”, c48:10-15; “the alert includes an indication of the type of the anomaly”, [claim 1]; outputting alarms/notifications when an anomaly (inconsistency) is detected.)

Regarding claim 15, the combination of Yan and Tatsumi teaches a computer-implemented method for operating a technical system, the method comprising: applying a plurality of masks to an image, the plurality of masks masking different regions of the image than one another, wherein the application of the plurality of masks generates a plurality of respective masked images; for each of the plurality of masked images respectively: (a) reconstructing a respective masked-out region of the respective masked image based on a context provided by an unmasked region of the respective masked image, thereby generating a respective reconstructed image; and (b) performing a classification on the respective reconstructed image that assigns to each pixel of the respective reconstructed image a respective one of a plurality of classifications, each of the classifications corresponding to a respective one of a plurality of object types, wherein, for each of the classified pixels, the classification of the respective pixel identifies the respective pixel as being part of a depiction of an object of the object type to which its respective classification corresponds (Yan, Tatsumi, see comments on claim 1).

The combination of Yan and Tatsumi further teaches: generating a consistency map that: (a) includes a plurality of pixel positions for each of which each of the reconstructed images includes a respective pixel; and (b) for each of the pixel positions of the consistency map, provides a quantification of a level of consistency between the classifications of the pixels of the reconstructed images that correspond to the respective pixel position (Tatsumi, Fig. 5; “when all the synthesis target masks are superimposed, the ratio of the number of superimpositions of the unprocessed portions (non-mask portions) to the total number is obtained to calculate a basis rate for each region. Then, the basis map is generated by visualizing the obtained basis rate of each region”, [0057]; “the basis coefficient in this region 603b is calculated as (1×0.9+1×0.8)/2=85%”, [0079]; generating a map (basis map) that quantifies how often (or with what confidence) a specific classification was made across the plurality of masked images for each pixel region; this "basis rate" or "basis coefficient" is a direct quantification of the level of consistency of the classification across the different image versions); and determining a level of image inconsistency between the reconstructed images based on the consistency map (Yan, “the anomaly score is computed based on these error maps and we take the score to perform anomaly detection”, p3112:c1; “obtain an error map by computing the difference between the reconstructed image and the input image”, [abstract]; using the maps to calculate a score representing deviation/anomaly; Tatsumi, “ratio of the number of superimpositions... to the total number”, [0057]; it would be obvious to determine a level of inconsistency (e.g., a low basis rate/consistency score in Tatsumi or a high anomaly score in Yan) based on the generated map; a low "basis rate" in Tatsumi's map explicitly indicates that the classification was inconsistent across the multiple masked passes).

The combination of Yan and Tatsumi does not expressly disclose, but Konrardy teaches: based on the determined level of image inconsistency, entering the technical system into a safe mode (Konrardy, “one or more usage restrictions may be enacted to limit operation of the vehicle 108 (block 820), such as disabling autonomous operation features associated with the malfunctioning component”, c50:35-40; “automatically controlling the vehicle to perform one or more of the following control actions to avoid the anomaly: reducing speed, stopping, turning, swerving, or re-routing the vehicle”, c4:45-50; “If the vehicle operator is determined not to be capable of resuming operation, the controller 204 may cause the vehicle to stop or take other appropriate action”, c28:45-50; upon detecting anomalies, malfunctions, or unreliable conditions (high inconsistency/risk), the system enters a restricted state, disables features, or stops the vehicle, which constitutes entering a "safe mode").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the autonomous vehicle response and safety strategies of Konrardy (such as entering a restricted operation mode or stopping upon detecting anomalies) into the anomaly and inconsistency detection framework formed by Yan and Tatsumi in order to ensure the safety of the technical system by automatically triggering a fail-safe or "safe mode" when the consistency map indicates a high level of inconsistency or low confidence in the semantic classification of the environment.

Response to Arguments

Applicant's arguments filed on 12/11/2025 with respect to one or more of the pending claims have been fully considered but are moot in view of the new ground(s) of rejection.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG whose telephone number is (571)272-9874. The examiner can normally be reached on MON-FRI: 8AM-5PM Pacific Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIANXUN YANG/
Primary Examiner, Art Unit 2662
2/4/2026
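The method the rejection maps onto the claims (mask, reconstruct, classify per pixel, aggregate by majority, derive a consistency map, decide on a safe mode) can be sketched roughly as follows. This is a toy illustration, not the application's implementation: `reconstruct` stands in for the GAN inpainting and `classify` for the semantic segmentation model, and all names are hypothetical.

```python
import numpy as np

def make_striped_masks(h, w):
    """Two complementary vertical striped masks (True = masked out)."""
    cols = np.arange(w)
    m1 = np.broadcast_to(cols % 2 == 0, (h, w))
    return [m1, ~m1]

def reconstruct(image, mask):
    """Stand-in for GAN inpainting: fill masked pixels from unmasked context
    (here, simply the mean of the unmasked pixels)."""
    out = image.copy()
    out[mask] = image[~mask].mean()
    return out

def classify(image, threshold=0.5):
    """Toy per-pixel classifier: class 1 where intensity > threshold, else 0."""
    return (image > threshold).astype(int)

def aggregate(label_maps):
    """Majority-vote classification map plus a basis-rate-style consistency map."""
    stack = np.stack(label_maps)                 # (n_masks, h, w)
    n = stack.shape[0]
    n_classes = int(stack.max()) + 1
    votes = np.zeros((n_classes,) + stack.shape[1:], dtype=int)
    for c in range(n_classes):
        votes[c] = (stack == c).sum(axis=0)      # how often each class was assigned
    class_map = votes.argmax(axis=0)             # most frequent class per pixel
    consistency = votes.max(axis=0) / n          # 1.0 = unanimous across passes
    return class_map, consistency

image = np.array([[0.9, 0.9, 0.1],
                  [0.9, 0.9, 0.1],
                  [0.1, 0.1, 0.1]])
labels = [classify(reconstruct(image, m)) for m in make_striped_masks(*image.shape)]
class_map, consistency = aggregate(labels)

# Enter a safe mode if any pixel position is contested across the passes;
# with this toy image the two passes disagree on several pixels.
safe_mode = consistency.min() < 0.75
```

The majority vote corresponds to the "most frequently" aggregation of claims 2-3, and the `consistency` array plays the role of Tatsumi's basis-rate map in claims 13-15.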

Prosecution Timeline

Sep 08, 2023: Application Filed
Sep 07, 2025: Non-Final Rejection (§103)
Dec 11, 2025: Response Filed
Feb 04, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602917: OBJECT DETECTION DEVICE AND METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602853: METHODS AND APPARATUS FOR PET IMAGE RECONSTRUCTION USING MULTI-VIEW HISTO-IMAGES OF ATTENUATION CORRECTION FACTORS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12590906: X-RAY INSPECTION APPARATUS, X-RAY INSPECTION SYSTEM, AND X-RAY INSPECTION METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586223: METHOD FOR RECONSTRUCTING THREE-DIMENSIONAL OBJECT COMBINING STRUCTURED LIGHT AND PHOTOMETRY AND TERMINAL DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586152: METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR TRAINING IMAGE PROCESSING MODEL (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 93% (+18.6%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
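The headline figures above are mutually consistent, assuming (as the note suggests) that the grant probability is the career allow rate and that the "with interview" figure adds the interview lift in percentage points. A quick check:

```python
# Career allow rate from the examiner's resolved cases (472 granted / 635 resolved).
granted, resolved = 472, 635
career_allow_rate = 100 * granted / resolved   # about 74.3%, shown as 74%

# Assumed relation: "with interview" = career rate + interview lift (+18.6 points).
with_interview = career_allow_rate + 18.6      # about 92.9%, shown as 93%
```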
