Prosecution Insights
Last updated: April 19, 2026
Application No. 18/454,085

METHOD AND APPARATUS OF OBTAINING POSITION OF STATIONARY TARGET

Final Rejection §103

Filed: Aug 23, 2023
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: 42dot Inc.
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (16 granted / 22 resolved; +10.7% vs TC avg)
Interview Lift: +37.5% among resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 26 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 22 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The IDS filed 02/05/2026 has been received and considered. The Applicant’s Remarks filed 02/05/2026 have been received and considered. The §101 rejection in the non-final office action mailed 11/07/2025 is hereby withdrawn. Claims 1, 5-6, and 16-17 have been amended. Claims 4 and 7 have been cancelled. Claims 1-3, 5-6, and 8-17, the remaining claims pending in this application, have been rejected.

Response to Applicant’s Remarks

Applicant’s remarks were filed 02/05/2026 regarding amendments to independent claims 1 and 16. Referencing pages 10 and 11 of Applicant’s Remarks, Applicant argues that Zeng does not disclose “initialization” and that “no operation is performed to delete accumulated data, reset that data, or invalidate historical information”. However, Applicant has not provided any rationale or evidence to support this conclusory statement. The Examiner disagrees with the remarks made by the Applicant. The Examiner notes that Zeng’s patent primarily discloses the process of determining and updating stationary and dynamic fusion tracks, a process relying heavily on the radar system as disclosed in Column 10, lines 49-67 and Column 11, lines 1-4. This is further elaborated by Zeng’s Figures 9A-12C and further explained in Column 12, lines 41-67 and Column 13, lines 1-31. Applicant’s statement on pages 10-11 amounts to the claimed initialization: new values replace pre-existing values. From [0003] of the specification, this application is generally directed to self-driving vehicles, which are understood to move more or less continuously through the environment, such that different objects in view are “stationary” for only a limited period of time. The vehicle and the camera repeatedly update their internal model of the world.
This is initialization (or updating, or refreshing), rather than some abstract “initialization” that might occur only the very first time the vehicle is ever powered on. A typical definition of “initializing” is from https://www.merriam-webster.com/dictionary/initialize (accessed 2/27/2026): “initialize: transitive verb: to set (something, such as a computer program counter) to a starting position, value, or configuration”. The Examiner maintains that the combined prior art referenced in the non-final mailed 11/07/2025 does indeed teach the newly added features of the claims, as detailed below.

Claim Objections

Claim 5 recites “The method of claim 41, wherein a maximum number of radar points included in the accumulated points associated with the fusion track is predetermined”. It should read “The method of claim 1, wherein a maximum number of radar points included in the accumulated points associated with the fusion track is predetermined”. Claim 6 recites “The method of claim 41, wherein the updating of the accumulated points associated with the fusion track comprises compensating previous accumulated points for a distance traveled by an ego vehicle from a previous update time point.”. It should read “The method of claim 1, wherein the updating of the accumulated points associated with the fusion track comprises compensating previous accumulated points for a distance traveled by an ego vehicle from a previous update time point.”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, 8-13, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over the non-patent literature "Robust localization based on radar signal clustering" to Schuster et al. (hereinafter Schuster) in view of the non-patent literature "Traffic Incident Detection Based on mmWave Radar and Improvement Using Fusion with Camera" to Tao et al. (hereinafter Tao), in further view of US Patent No. 10,602,242 B2 to Zeng.

Claim 1

Regarding claim 1, Schuster teaches a method of obtaining a position of a stationary target, the method comprising: obtaining a center point of the stationary target based on the collected radar points ("The center point C is given by the center of mass of the radar targets according to their weights.", Section IV: Cluster-SLAM - A. Map Representation).
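The "center of mass of the radar targets according to their weights" quoted from Schuster reduces to a weighted mean of point coordinates. The sketch below is illustrative only: the weighting by amplitude is an assumption for the example, not Schuster's actual weighting scheme.

```python
# Weighted centroid of radar points, in the spirit of the quoted
# "center of mass of the radar targets according to their weights".
# The weights here are assumed (e.g. amplitudes); the cited paper's
# exact weighting is not reproduced.
def cluster_center(points, weights):
    """points: list of (x, y) tuples; weights: list of floats."""
    total = sum(weights)
    cx = sum(w * x for (x, _), w in zip(points, weights)) / total
    cy = sum(w * y for (_, y), w in zip(points, weights)) / total
    return cx, cy

# Two points; the second is weighted three times as strongly,
# so the centroid lands three quarters of the way toward it.
print(cluster_center([(0.0, 0.0), (4.0, 0.0)], [1.0, 3.0]))  # (3.0, 0.0)
```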
Schuster does not teach generating a fusion track based on data collected from a radar and data collected from a camera; determining whether the fusion track is in a stationary state; in response to the fusion track being in the stationary state, collecting radar points associated with the fusion track, wherein the collecting of the radar points associated with the fusion track comprises: determining radar points assigned to the fusion track from among the collected data, and updating accumulated points associated with the fusion track based on the radar points assigned to the fusion track; and, in response to the fusion track not being in the stationary state, initializing the accumulated points.

However, Tao teaches generating a fusion track based on data collected from a radar and data collected from a camera (Figure 6; "In this paper an incident detection approach is proposed via the fusion of traffic environment perception data introduced by mmWave radar and camera. The incident detection performance of radar sensors is analysed, and a method based on the fusion of radar and camera is introduced. The fusion of the sensors can improve the accuracy of incident detection in real traffic scenarios.", Introduction).

[Image: media_image1.png]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Schuster to incorporate fusing the camera data and the radar data to generate results, as disclosed by Tao. The suggestion/motivation for doing so would have been that, by fusing the data, the output would contain more accurate data for areas/objects detected by one sensor that were originally lacking from the correlating areas/objects detected by the other sensor, such as depth values (better with radar) or object recognition (better with camera), in order to improve the accuracy of incident detection in real traffic scenarios.
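The accumulate-or-initialize behavior recited in the claim limitations above (and disputed in the Remarks) can be read as a simple state update: append points while the track is stationary, reset when it is not. This sketch is a hypothetical illustration of that claim language, not code from any cited reference; the 256-point cap merely echoes claim 5's "predetermined maximum" and is an assumed value.

```python
# Hypothetical sketch of the claimed update rule: while the fusion
# track is stationary, accumulate the radar points assigned to it;
# otherwise initialize (reset) the accumulated points, i.e. new
# values replace pre-existing values.
def update_track(accumulated, new_points, is_stationary, max_points=256):
    if not is_stationary:
        return []  # "initializing the accumulated points"
    accumulated = accumulated + new_points
    # Cap the set per claim 5's predetermined maximum (value assumed).
    return accumulated[-max_points:]

acc = update_track([], [(1, 2)], is_stationary=True)
acc = update_track(acc, [(3, 4)], is_stationary=True)   # acc == [(1, 2), (3, 4)]
acc = update_track(acc, [(5, 6)], is_stationary=False)  # acc == []
```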
Tao further teaches determining whether the fusion track is in a stationary state ("Due to camera’s excellent detection performance for stationary targets, the information fusion can greatly improve the performance of inference-based detection; thus better accuracy for queued and parking incidents detection is obtained.", Section 4.2. Traffic Incident Detection by Radar-Camera Fusion), and, in response to the fusion track being in the stationary state, collecting radar points associated with the fusion track ("After calibration, the CC and the RC system coincide, and the detection results of the radar and the camera are spatial alignments. In a unified coordinate system, the fusion process of radar and camera is implemented.", Section 4.2. Traffic Incident Detection by Radar-Camera Fusion).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Schuster, in view of Tao, to incorporate determining and collecting radar points of a fusion track in a stationary state, as disclosed by Tao. The suggestion/motivation for doing so would have been to know where stationary tracks are and to determine potential maneuvers needed to avoid them if necessary.

Tao further teaches wherein the collecting of the radar points associated with the fusion track comprises: determining radar points assigned to the fusion track from among the collected data (Figure 2); and updating accumulated points associated with the fusion track based on the radar points assigned to the fusion track ("Lane queuing detection. During the tracking of vehicle targets in a lane, the radar updates the state of the vehicle in real time.", Section 3.4).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Schuster, in view of Tao, to incorporate updating the fusion track based on the radar points, as disclosed by Tao. The suggestion/motivation for doing so would have been to maintain a clear perspective of the trajectories of other objects so as to avoid potential collisions.

Neither Schuster nor Tao, nor the combination, teaches in response to the fusion track not being in the stationary state, initializing the accumulated points. However, Zeng teaches in response to the fusion track not being in the stationary state, initializing the accumulated points ("At the final stage of the radar sensor processing pipeline 560 data classifiers for classifying radar point into stationary or dynamic point sets is performed. The filtered data points are processed additionally by data association 565 with fusion data tracks and forwarded to the fusion module 575 and resulting in updates of measurement of the fusion data tracks by the fusion track measurement and update module 580 for the target of interest.", Column 9, lines 50-57).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Schuster, in view of Tao, to incorporate initializing the points of a track that is no longer in a stationary state, as disclosed by Zeng. The suggestion/motivation for doing so would have been to keep updating the trajectories of all moving objects in the vicinity to avoid collisions.

Claim 2

Regarding claim 2, dependent upon claim 1, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 1.
Schuster further teaches wherein the generating of the fusion track comprises: detecting one or more radar-based objects based on the data collected from the radar (Figure 1; "In contrast to these, Cluster-SLAM utilizes reduced data in form of point based radar targets common in the automotive industry [2]. These contain amplitude, velocity and spatial coordinates without covariances. Cluster-SLAM solves several problems simultaneously.", Introduction).

[Image: media_image2.png]

Schuster does not teach detecting one or more camera-based objects based on the data collected from the camera; and generating the fusion track by determining objects associated with each other from among the one or more radar-based objects and the one or more camera-based objects. However, Tao further teaches detecting one or more camera-based objects based on the data collected from the camera (Figure 2), and generating the fusion track by determining objects associated with each other from among the one or more radar-based objects and the one or more camera-based objects (Figure 2).

[Image: media_image3.png]
[Image: media_image4.png]

Claim 3

Regarding claim 3, dependent upon claim 2, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 2. Neither Schuster nor Tao, nor the combination, explicitly teaches wherein the determining of the objects associated with each other is based on distances from an ego vehicle to the one or more radar-based objects and distances from the ego vehicle to the one or more camera-based objects.
However, Zeng teaches wherein the determining of the objects associated with each other is based on distances from an ego vehicle to the one or more radar-based objects and distances from the ego vehicle to the one or more camera-based objects ("Object lists may include lists of pedestrian, vehicle physical objects and attributes may include object attributes of velocity, distance from the camera etc.", Column 6, lines 39-50; "The radar sensor 340 likewise detects binary data but of range, range rate, bearing angle 345 which is also sent to the association and alignment correction module 320.", Column 7, lines 54-57; "The association and alignment correction module 320 uses parametric association and data alignment correction solutions to aggregate and combine the sensor, feature, and object level data of a target of interest from each of the heterogeneous sensors. To perform such associations, data association techniques may be employed including correlation, distance measurements, and probabilistic similarities.", Column 7, lines 61-67).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Schuster, in view of Tao and Zeng, to incorporate determining the objects associated with each other based on distances from an ego vehicle to the one or more radar-based objects and distances from the ego vehicle to the one or more camera-based objects, as disclosed by Zeng. The suggestion/motivation for doing so would have been to allow a potential fusion system to know the distance values from each respective sensor that correlate to the same detected object, so that a more accurate determination and potential maneuvers can be made.

Claim 5

Regarding claim 5, dependent upon claim 1, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 1.
Schuster further teaches wherein a maximum number of radar points included in the accumulated points associated with the fusion track is predetermined (Figure 2; "The Electronic Control Unit (ECU), generating the radar targets from the raw spectra, transmits up to 64 targets per radar, resulting at a maximum of 256 radar points per scan at a rate of 20 Hz.", Section III: Sensor Setup).

[Image: media_image5.png]

Claim 6

Regarding claim 6, dependent upon claim 1, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 1. The combination of Schuster, Tao, and Zeng further teaches wherein the updating of the accumulated points associated with the fusion track comprises compensating previous accumulated points for a distance traveled by an ego vehicle from a previous update time point (as best understood, rejected as applied to claim 1).

Claim 8

Regarding claim 8, dependent upon claim 1, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 1. Neither Schuster nor Zeng, nor the combination, teaches wherein the obtaining of the center point of the stationary target comprises: generating a grid map based on the data collected from the radar and the data collected from the camera; and mapping the radar points associated with the fusion track to the grid map. However, Tao further teaches wherein the obtaining of the center point of the stationary target comprises: generating a grid map based on the data collected from the radar and the data collected from the camera ("After calibration, the CC and the RC system coincide, and the detection results of the radar and the camera are spatial alignments. In a unified coordinate system, the fusion process of radar and camera is implemented.", Section 4.2. Traffic Incident Detection by Radar-Camera Fusion); and mapping the radar points associated with the fusion track to the grid map (rejected as applied directly above).
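Claim 6's compensation limitation, quoted above, describes shifting previously accumulated points by the ego vehicle's displacement since the last update so that old points stay in the current ego frame. A minimal sketch, under the assumption of pure translation (a real system would also apply the ego yaw change); the function and values are illustrative, not drawn from the cited art:

```python
# Compensate previously accumulated points (ego-frame coordinates)
# for the ego displacement (dx, dy) traveled since the last update.
# Assumes straight-line motion with no rotation.
def compensate(points, dx, dy):
    return [(x - dx, y - dy) for (x, y) in points]

# Ego moved 2 m forward: a point that was 10 m ahead is now 8 m ahead.
print(compensate([(10.0, 0.0)], dx=2.0, dy=0.0))  # [(8.0, 0.0)]
```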
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Schuster, in view of Tao and Zeng, to incorporate generating a grid map that incorporates data from both the radar and camera sensors and mapping the radar points to the respective grid map, as disclosed by Tao. The suggestion/motivation for doing so would have been to provide a camera and radar point based unified system (as disclosed by Tao) that would allow the fusion system to know that the data at specific coordinates correlates to the same target from both systems, providing more accurate details on an object/target.

Claim 9

Regarding claim 9, dependent upon claim 8, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 8. Neither Schuster nor Tao, nor the combination, explicitly teaches wherein the obtaining of the center point of the stationary target further comprises: determining a target tile comprising one or more tiles comprising the mapped radar points; obtaining corner points of the target tile; and determining the center point of the stationary target based on the corner points. However, Zeng teaches wherein the obtaining of the center point of the stationary target further comprises: determining a target tile comprising one or more tiles comprising the mapped radar points ("With reference now to FIG. 2 and with continued reference to FIG. 1, FIG. 2 illustrates a multi-level fusion system 200 having a centralized fusion system 210 and a hierarchical hybrid system 220 for each of the sensors 120 of FIG. 1. The multi-level fusion system 200 transforms sensor data to convenient coordinate data for data fusion. The multi-level fusion system 200 uses algorithmic solutions for transforming sensor data and to create higher level data representations such as fused object lists, occupancy grids, and fused stixels used in scene understanding tools.
In addition, template matching and semantic labeling may be employed to provide semantic understanding of external driving surrounding representations from derived data from the sensor, feature and object processing levels. In various embodiments, the hierarchical hybrid system 220 includes a sensor level processing module 230, a feature-level processing module, and an object-level processing module 250.", Column 5, lines 40-56; "In addition, the track level sensor is configured to enable an objects kinetic states which is the position and velocity of the object of interest and not sensor level binary data or intermediate feature data. Hence, object level data processing is performed by the radar sensor system 800 for the fusion track measurement updates.", Column 10, lines 65-67; Column 11, lines 1-4); obtaining corner points of the target tile (Figure 13; "The vision target V is given the following attributes: x for longitudinal displacement, y for lateral displacement, vx and vy, W for width, and L for length. For each object of interest 1300 V, initially the four corners are commutated, the four corners are c.sub.i, i=1, . . . 4, which are commutated as follows: x−W/2, y−L/2, x−W/2, y+L/2, x+W/2, y−L/2, and x+W/2, y+L/2. A distance is commutated from the object of interest 1300 V to each contour E, which may be defined as the shortest distance from each of the four corners to the contour.", Column 13, lines 38-49); and determining the center point of the stationary target based on the corner points (Figure 13, #1310).

[Image: media_image6.png]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Schuster, in view of Tao and Zeng, to incorporate determining the corner points of a previously determined target tile and using those corner points to determine a center point of the target, as disclosed by Zeng.
The suggestion/motivation for doing so would have been to keep updating the trajectories of all moving objects in the vicinity to avoid collisions.

Claim 10

Regarding claim 10, dependent upon claim 1, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 1. Schuster further teaches further comprising controlling driving of an ego vehicle based on the center point of the stationary target ("The Electronic Control Unit (ECU), generating the radar targets from the raw spectra, transmits up to 64 targets per radar, resulting at a maximum of 256 radar points per scan at a rate of 20 Hz. The preprocessed data consists solely of distance r, angle φ and amplitude A… Such a setup is common in use for driver assistance system development and is therefore also used for this localization approach", Section III. SENSOR SETUP).

Claim 11

Regarding claim 11, dependent upon claim 1, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 1. The combination of Schuster, Tao, and Zeng further teaches further comprising generating a contour of the stationary target based on the collected radar points (rejected as applied to claim 2), where the contours of the vehicles are clearly visible in Figure 1 (Schuster).

Claim 12

Regarding claim 12, dependent upon claim 11, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 11.
The combination of Schuster, Tao, and Zeng further teaches wherein the generating of the contour of the stationary target comprises: generating a grid map based on the data collected from the radar and the data collected from the camera (rejected as applied to claim 1), where the unified coordinate system is utilized; mapping the radar points associated with the fusion track to the grid map (rejected as applied to claim 1), where a unified coordinate system is used; and detecting the contour based on the mapped radar points (rejected as applied to claim 2), where Figure 1 (Schuster) shows the contour of the stationary vehicles.

Claim 13

Regarding claim 13, dependent upon claim 12, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 12. Schuster further teaches wherein the generating of the contour of the stationary target further comprises removing radar points corresponding to noise from among the mapped radar points (Figure 7; "The reader may notice the disturbing factors (i.e. cars, pedestrians, reflections and noise) are usually non stationary, thus they can be suppressed by a decay on the individual clusters. Therefore only the stationary objects will remain in the map.", Section IV. CLUSTER-SLAM).

[Image: media_image7.png]

Claim 15

Regarding claim 15, dependent upon claim 11, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 11. Schuster further teaches determining a nearest point based on the generated contour of the stationary target ("The Electronic Control Unit (ECU), generating the radar targets from the raw spectra, transmits up to 64 targets per radar, resulting at a maximum of 256 radar points per scan at a rate of 20 Hz. The preprocessed data consists solely of distance r, angle φ and amplitude A.", Section III.
SENSOR SETUP), wherein the nearest radar point can be determined due to the distance data of each point being acquired; and controlling driving of an ego vehicle based on the nearest point ("Such a setup is common in use for driver assistance system development and is therefore also used for this localization approach", Section III. SENSOR SETUP).

Claim 16, an independent device claim, is rejected for the same reasons as applied to claim 1. Claim 17, dependent upon claim 1, is rejected for the same reasons as applied to claim 1.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over the non-patent literature "Robust localization based on radar signal clustering" to Schuster et al. (hereinafter Schuster) in view of the non-patent literature "Traffic Incident Detection Based on mmWave Radar and Improvement Using Fusion with Camera" to Tao et al. (hereinafter Tao), in further view of US Patent No. 10,602,242 B2 to Zeng, and in further view of the non-patent literature "Object detection for automotive radar point clouds – a comparison" to Scheiner et al. (hereinafter Scheiner).

Claim 14

Regarding claim 14, dependent upon claim 12, Schuster, in view of Tao and Zeng, teaches the invention as claimed in claim 12. Neither Schuster, Tao, nor Zeng, nor the combination, explicitly teaches wherein the detecting of the contour comprises applying a convex hull algorithm to the mapped radar points. However, Scheiner teaches wherein the detecting of the contour comprises applying a convex hull algorithm to the mapped radar points ("Features include simple statistical features, such as mean or maximum Doppler values, but also more complex ones like the area of a convex hull around the cluster.", Methods: Clustering and recurrent neural network classifier - Feature Extraction).
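The convex hull referenced in the Scheiner passage just quoted is a standard computational-geometry routine for extracting the outer contour of a point cluster. A self-contained sketch using Andrew's monotone-chain algorithm, offered as a generic illustration rather than Scheiner's actual implementation:

```python
# Cross product of vectors OA and OB; positive means a left turn.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

# The interior point (1, 1) is dropped; the four square corners remain.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
```

Applied to the radar points mapped to a grid, the returned vertex list would serve directly as the cluster's contour polygon.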
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Schuster, in view of Tao and Zeng, to incorporate applying a convex hull algorithm to detect a contour of the mapped radar points, as disclosed by Scheiner. The suggestion/motivation for doing so would have been to more accurately label detected radar clusters as targets/objects, as they will be easier to see with contours.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RONDE LEE MILLER/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Aug 23, 2023: Application Filed
Nov 01, 2025: Non-Final Rejection (§103)
Feb 05, 2026: Response Filed
Feb 20, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215: LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548114: METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR (granted Feb 10, 2026; 2y 5m to grant)
Patent 12524833: X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM (granted Jan 13, 2026; 2y 5m to grant)
Patent 12502905: SECURE DOCUMENT AUTHENTICATION (granted Dec 23, 2025; 2y 5m to grant)
Patent 12505581: ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN (granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
