Prosecution Insights
Last updated: April 19, 2026
Application No. 18/913,074

BEHAVIOR PREDICTION USING SCENE-CENTRIC REPRESENTATIONS

Status: Non-Final OA (§103)
Filed: Oct 11, 2024
Examiner: LAMBERT, GABRIEL JOSEPH RENE
Art Unit: 3669
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Waymo LLC
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 79%

Examiner Intelligence

Career Allow Rate: 67% (87 granted / 130 resolved), +14.9% vs Tech Center average (above average)
Interview Lift: +11.8% across resolved cases with interview (moderate)
Average Prosecution Length: 2y 11m
Career History: 153 total applications across all art units, 23 currently pending

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 27.5% (-12.5% vs TC avg)
Compared against the Tech Center average estimate; based on career data from 130 resolved cases.
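The headline percentages above follow directly from the career counts the dashboard quotes (87 granted of 130 resolved, plus the reported +11.8-point interview lift). A quick check, assuming the tool rounds to the nearest whole percent and applies the lift additively:

```python
# Reproduce the dashboard's headline figures from the quoted career counts.
granted, resolved = 87, 130

allow_rate = granted / resolved * 100        # career allow rate, in percent
interview_lift = 11.8                        # reported lift, percentage points
with_interview = allow_rate + interview_lift # assumed additive adjustment

print(round(allow_rate))      # 67
print(round(with_interview))  # 79
```

Both rounded values match the 67% / 79% figures shown, which suggests the "with interview" number is the raw allow rate plus the lift rather than an interview-only subset rate.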

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/29/2025 and 10/11/2024 have been fully considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-11, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al., US 2021/0150350 A1 (henceforth Gao), in view of Ngiam et al., "Scene Transformer: A unified architecture for predicting multiple agent trajectories", CoRR, March 4, 2022, arXiv:2106.08417v3, 25 pages (henceforth Ngiam).
Regarding claim 1, Gao discloses: A method performed by one or more computers, the method comprising: obtaining scene context data characterizing a scene in an environment at a current time point, wherein the scene includes a set of agents that comprises a plurality of target agents (see at least Fig. 2 and Para. 0065, "The system receives an input that includes (i) data characterizing observed trajectories for each of one or more agents in an environment and (ii) map features of a map of the environment". The method includes obtaining scene context data and a set of agents that comprises a plurality of target agents.), and wherein the scene context data includes features of the scene (see at least Para. 0065, "(iii) map features of a map of the environment".);

generating a scene-centric encoded representation of the scene in the environment by processing the scene context data using an encoder neural network (see at least Para. 0060, "In particular, the vectorized representation 250 shows a polyline 252 representing a crosswalk as a sequence of four vectors, lane boundary polylines 254, 256, and 258 representing boundaries defining two lanes as three sequences of three vectors, and a trajectory polyline 260 that represents the observed trajectory of an agent as a sequence of three vectors", and Para. 0061, "The vectors defining the polylines in the vectorized representation 250 can then be processed by an encoder neural network to generate respective polyline features of each of the polylines".);

for each target agent: obtaining agent-specific features for the target agent (see at least Para. 0065, "The system receives an input that includes (i) data characterizing observed trajectories for each of one or more agents in an environment".);

processing the agent-specific features for the target agent and the scene-centric encoded representation of the scene using a fusion neural network to generate a fused scene representation for the target agent (see at least Para. 0077, "In some other implementations, the system generates trajectory predictions for multiple target agents in parallel. In these implementations, the coordinates in the respective vectors are in a coordinate system that is shared between the multiple target agents, e.g., centered at the center of a region that includes the positions all of the multiple target agents at the current time step", and Para. 0078, "The system processes a network input that includes the (i) respective polylines of the observed trajectories and (ii) the respective polylines of each of the features of the map using an encoder neural network to generate polyline features for each of the one or more agents (step 308)." An encoder neural network is used to fuse the polylines of the observed trajectories and the polylines of each of the features of the map to generate polyline features for each of the one or more agents (i.e., a fused scene representation for the target agent).);

and processing the fused scene representation for the target agent using a decoder neural network to generate a trajectory prediction output for the target agent that predicts a future trajectory of the target agent after the current time point in an agent-centric coordinate system for the target agent (see at least Fig. 4 and Para. 0080, "For one or more of the agents, the system generates a predicted trajectory for the agent from the polyline features for the agent (step 310). This will also be described in more detail below with reference to FIG. 4." Further see Para. 0099, "To generate the trajectory prediction 450 for a given agent, the system then processes the polyline features for the given agent using the trajectory decoder neural network 430 to generate the trajectory prediction 450 for the given agent". A trajectory decoder is used to generate a trajectory prediction output for the target agent after the current time point.).

Gao does not specifically state the limitation "wherein the scene context data includes features of the scene in a scene-centric coordinate system." However, Ngiam teaches: wherein the scene context data includes features of the scene in a scene-centric coordinate system (see at least Page 6, Section 3.4, lines 1-3, "The output of our model is a tensor of shape [F, A, T, 7] representing the location and heading of each agent at the given time step. Because the model uses a scene-centric representation for the locations through positional embeddings, the model is able to predict all agents simultaneously in a single feed-forward pass." The scene context data includes features of the scene in a scene-centric coordinate system.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Ngiam to include "wherein the scene context data includes features of the scene in a scene-centric coordinate system" in order to "predict all agents simultaneously in a single feed-forward pass" (Ngiam, Section 3.4, lines 1-3). This would create a more robust system by including features of the scene in a scene-centric coordinate system, such that all agents can be predicted simultaneously. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Ngiam. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.
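The claim-1 data flow mapped above (scene-centric encoding, per-agent fusion, decoding into an agent-centric frame) can be sketched end to end. Everything below is illustrative only: the toy linear "networks", shapes, and mean-pooled fusion are assumptions for exposition, not Gao's or Ngiam's actual models; only the scene-to-agent coordinate transform in the decoder is standard.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(scene_feats):
    """Stand-in for the encoder neural network: maps scene-centric
    polyline features to a scene-centric encoded representation."""
    W = rng.standard_normal((scene_feats.shape[-1], 16))
    return np.tanh(scene_feats @ W)               # [num_polylines, 16]

def fuse(agent_feats, scene_enc):
    """Stand-in for the fusion network: combine one agent's features
    with a pooled scene encoding (a real model would cross-attend)."""
    return np.concatenate([agent_feats, scene_enc.mean(axis=0)])

def decoder(fused, agent_pose):
    """Stand-in for the decoder: emit 3 future (x, y) points, then
    express them in the agent-centric frame given by (x, y, heading)."""
    W = rng.standard_normal((fused.shape[-1], 2 * 3))
    traj_scene = (fused @ W).reshape(3, 2)        # scene-frame trajectory
    x, y, theta = agent_pose
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])               # rotate by -heading
    return (traj_scene - np.array([x, y])) @ R.T  # agent-centric coords

scene = rng.standard_normal((5, 8))               # 5 polylines, 8 features, scene frame
scene_enc = encoder(scene)                        # shared across all target agents
for pose, feats in [((1.0, 2.0, 0.3), rng.standard_normal(8)),
                    ((-4.0, 0.5, -1.2), rng.standard_normal(8))]:
    traj = decoder(fuse(feats, scene_enc), pose)  # per-agent prediction
    assert traj.shape == (3, 2)
```

The point of the sketch is the asymmetry the claim turns on: the scene is encoded once in a shared scene-centric frame, while the decode step re-expresses each output in that agent's own frame.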
Regarding claim 2, Gao does not specifically state the limitation "wherein the scene-centric encoded representation of the scene in the environment comprises a sequence of scene embeddings." However, Ngiam teaches: wherein the scene-centric encoded representation of the scene in the environment comprises a sequence of scene embeddings (see at least Page 4, Section 3.1, lines 1-2, "We use a scene-centric embedding where we use an agent of interest's position as the origin, and encode all road graph and agents with respect to it.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Ngiam to include "wherein the scene-centric encoded representation of the scene in the environment comprises a sequence of scene embeddings" in order to "predict all agents simultaneously in a single feed-forward pass" (Ngiam, Section 3.4, lines 1-3). This would create a more robust system by including features of the scene in a scene-centric coordinate system, such that all agents can be predicted simultaneously. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Ngiam. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.
Regarding claim 3, Gao does not specifically state the limitation "generating a sequence of agent embeddings from the agent-specific features using the fusion neural network." However, Ngiam teaches: generating a sequence of agent embeddings from the agent-specific features using the fusion neural network (see at least Appendix A, Page 16, in the "Embedding of agents and road graph" section: "To generate input features, we use sinusoidal positional embeddings (Vaswani et al., 2017) to embed the time (for agents and dynamic roadgraph) and xyz-coordinates separately into a D dimensional features per dimension. We encode the type of each object using a one-hot encoding (e.g. object type, lane type, etc), and concatenate any other features provided in the data set such as yaw, width, length, height, and velocity. Dynamic road graphs have a second one-hot encoding indicating state, like the traffic light state", and "For agents and the dynamic road graph, we use a 2 layer MLP with a hidden and output dimension of D to produce a final feature per agent or object and per time step". Therefore, a sequence of agent embeddings is generated using the neural network.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Ngiam to include "generating a sequence of agent embeddings from the agent-specific features using the fusion neural network" in order to "produce a final feature per agent or object and per time step" (Ngiam, Appendix A, Page 16, "Embedding of agents and road graph"). This would create a more robust behavior prediction of agents on the road. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Ngiam. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 4, Gao does not specifically state the limitation "wherein the fusion neural network comprises at least one cross-attention neural network block that performs cross attention between the sequence of scene embeddings and the sequence of agent embeddings." However, Ngiam teaches: wherein the fusion neural network comprises at least one cross-attention neural network block that performs cross attention between the sequence of scene embeddings and the sequence of agent embeddings (see at least Fig. 2, "cross-attention", and Page 5, Section 3.2: "Cross-attention. In order to exploit side information, which in our case is a road graph, we use cross-attention to enable the agent features to be updated by attending to the road graph. Concretely, we calculate the queries from the agents, but the keys and values come from the embeddings of the road graph." The fusion neural network comprises a cross-attention neural network block that performs cross attention between the scene embeddings and the sequence of agent embeddings.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Ngiam to include "wherein the fusion neural network comprises at least one cross-attention neural network block that performs cross attention between the sequence of scene embeddings and the sequence of agent embeddings", since the "road graph representation is also permutation-equivariant and shared across all agents in the scene" (Ngiam, Page 5, Section 3.2, "Cross-attention"). This would create a more robust behavior prediction of agents on the road. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Ngiam. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 6, Gao discloses: wherein the trajectory prediction output defines a probability distribution over possible future trajectories of the target agent after the current time point (see at least Para. 0036, "the trajectory prediction output 152 for a given agent defines a respective probability distribution over possible future trajectories for the given agent". A probability distribution over possible future trajectories for the target agent after the current time point is defined.)

Regarding claim 7, Gao discloses: the scene context data comprises data generated from data captured by one or more sensors of an autonomous vehicle, and the plurality of target agents are agents in a vicinity of the autonomous vehicle in the environment (see at least Para. 0019, "This specification describes how a vehicle, e.g., an autonomous or semi-autonomous vehicle, can use a trained machine learning model, referred to in this specification as a 'trajectory prediction system,' to generate a respective trajectory prediction for each of one or more surrounding agents in the vicinity of the vehicle in an environment".)

Regarding claim 8, Gao discloses: providing (i) the trajectory prediction output for the plurality of target agents, (ii) data derived from the trajectory prediction output, or (iii) both to an on-board system of the autonomous vehicle for use in controlling the autonomous vehicle (see at least Para. 0019 and Para. 0025, wherein the trajectory prediction output for the plurality of target agents is used in controlling the autonomous vehicle.)

Regarding claim 9, Gao discloses: wherein the trajectory prediction output is generated on-board the autonomous vehicle (see at least Para. 0022, "While this specification describes that trajectory prediction outputs are generated on-board an autonomous vehicle, more generally, the described techniques can be implemented on any system of one or more computers that receives data characterizing scenes in an environment." The trajectory prediction output is generated on-board the autonomous vehicle.)

Regarding claim 10, Gao discloses: the context data comprises data generated from data that simulates data that would be captured by one or more sensors of an autonomous vehicle in the real-world environment, and the plurality of target agents are agents in a vicinity of the simulated autonomous vehicle in the computer simulation (see at least Para. 0012, "modeling the high-order interactions among all components", and Para. 0019, "This specification describes how a vehicle, e.g., an autonomous or semi-autonomous vehicle, can use a trained machine learning model, referred to in this specification as a trajectory prediction system, to generate a respective trajectory prediction for each of one or more surrounding agents in the vicinity of the vehicle in an environment." The data is simulated (i.e., in a model) and includes data that would be captured by one or more sensors of an autonomous vehicle, and a plurality of target agents that are in the vicinity of the simulated autonomous vehicle.)

Regarding claim 11, Gao discloses: providing (i) the trajectory prediction output, (ii) data derived from the trajectory prediction output, or (iii) both for use in controlling the simulated autonomous vehicle in the computer simulation (see at least Para. 0019 and Para. 0044.)
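The cross-attention arrangement the claim-4 rejection quotes from Ngiam (queries from the agent embeddings; keys and values from the road-graph embeddings) can be sketched in a few lines. Dimensions and the random projections are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def cross_attention(agents, road_graph, d_k=8, seed=0):
    """Single-head cross-attention in the arrangement quoted from Ngiam:
    queries come from the agent embeddings, keys and values from the
    road-graph embeddings, so agent features are updated by attending
    to the map."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((agents.shape[-1], d_k))
    Wk = rng.standard_normal((road_graph.shape[-1], d_k))
    Wv = rng.standard_normal((road_graph.shape[-1], d_k))
    Q, K, V = agents @ Wq, road_graph @ Wk, road_graph @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                  # [num_agents, num_road_elems]
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over road elements
    return weights @ V                               # updated agent features

agents = np.random.default_rng(1).standard_normal((4, 16))      # 4 agent embeddings
road_graph = np.random.default_rng(2).standard_normal((10, 12)) # 10 road-graph embeddings
out = cross_attention(agents, road_graph)
assert out.shape == (4, 8)
```

Because the keys and values are computed once from the shared road graph and reused for every agent's queries, the map encoding is shared across agents, which is the permutation-equivariance point the rejection cites as the motivation to combine.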
Regarding claim 14, Gao discloses: wherein the scene context data comprises road graph context data characterizing road features in the scene (see at least Para. 0031, "Map features can include lane boundaries, crosswalks, stoplights, road signs, and so on.")

Regarding claim 15, Gao discloses: wherein the scene context data comprises traffic signal context data characterizing at least respective current states of one or more traffic signals in the scene (see at least Para. 0074, "The respective vector can also include attribute features of the map feature. For example, the vector can include an identifier of the road feature type, e.g., crosswalk, stop light, lane boundary, and so on. As another example, the vector can include, for lane boundaries, the speed limit at the corresponding section of the lane. As yet another example, the vector can include, for stoplights, the current state, e.g., green, yellow, or red, of the stoplight at the most recent time point." The scene context data comprises traffic signal context data, which characterizes the current state of the traffic signal in the scene.)

Regarding claim 16, Gao does not specifically state the limitation "wherein the agent-specific features for the target agent comprise a combination of features in the scene-centric coordinate system and features in the agent-centric coordinate system." However, Ngiam teaches: wherein the agent-specific features for the target agent comprise a combination of features in the scene-centric coordinate system and features in the agent-centric coordinate system (see at least Page 6, Section 3.4, "The output of our model is a tensor of shape [F, A, T, 7] representing the location and heading of each agent at the given time step. Because the model uses a scene-centric representation for the locations through positional embeddings, the model is able to predict all agents simultaneously in a single feed-forward pass. This design also makes it possible to have a straight-forward switch between joint future predictions and marginal future predictions". Further see the Abstract, "Through combining a scene-centric approach, agent permutation equivariant model, and a sequence masking strategy, we show that our model can unify a variety of motion prediction tasks from joint motion predictions to conditioned prediction." Therefore, the agent-specific features comprise a combination of features in the scene-centric coordinate system and features in the agent-centric coordinate system.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Ngiam to include "wherein the agent-specific features for the target agent comprise a combination of features in the scene-centric coordinate system and features in the agent-centric coordinate system" in order to "produce a final feature per agent or object and per time step" (Ngiam, Appendix A, Page 16, "Embedding of agents and road graph"). This would create a more robust behavior prediction of agents on the road. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Ngiam. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 17, Gao and Ngiam disclose the same limitations as recited in claim 1 above, and the claim is therefore rejected under the same rejection and obviousness rationale.

Regarding claim 18, Gao and Ngiam disclose the same limitations as recited in claim 1 above, and the claim is therefore rejected under the same rejection and obviousness rationale.

Claims 5 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Gao and Ngiam, further in view of Jammalamadaka et al., US 2020/0033855 A1 (henceforth Jammalamadaka).

Regarding claim 5, Gao and Ngiam disclose the limitations as recited in claim 1 above. Gao further discloses: wherein the agent-specific features for the target agent comprise agent history context data characterizing current states of the target agent (see at least Para. 0030, "The scene data 142 characterizes the current state of the environment surrounding the vehicle 102 as of the current time point".)

Gao does not specifically state the limitation "wherein the agent-specific features for the target agent comprises agent history context data characterizing previous states of the target agent". However, Jammalamadaka teaches this limitation (see at least Para. 0050, "In other words a history of states/observations for an agent can be accumulated, for example, the Viterbi algorithm or another algorithm, to make the final prediction." The agent-specific features comprise agent history characterizing previous states of the target agent.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Jammalamadaka to include "wherein the agent-specific features for the target agent comprises agent history context data characterizing previous states of the target agent" in order to enhance the prediction of the agent's trajectory, since "it is desirable to provide systems and methods that are capable of predicting the behavior of various entities or agents encountered by an autonomous vehicle" (Jammalamadaka, Para. 0004). This would create a more robust behavior prediction of agents on the road. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Jammalamadaka. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 12, Gao and Ngiam disclose the limitations as recited in claim 1 above. Gao further discloses: wherein the scene context data comprises target agent history context data characterizing current states of the plurality of target agents (see at least Para. 0031, "the scene data 142 includes at least (i) data characterizing observed trajectories for each of one or more agents in an environment, i.e., observed trajectories for one or more of the surrounding agents," wherein the target agent history context data characterizes current states of the plurality of target agents.)

Gao does not specifically state the limitation "wherein the scene context data comprises target agent history context data characterizing previous states of the plurality of target agents". However, Jammalamadaka teaches this limitation (see at least Para. 0050, "In other words a history of states/observations for an agent can be accumulated, for example, the Viterbi algorithm or another algorithm, to make the final prediction." The scene context data comprises target agent history characterizing previous states of the target agents.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Jammalamadaka to include "wherein the scene context data comprises target agent history context data characterizing previous states of the plurality of target agents" in order to enhance the prediction of the agent's trajectory, since "it is desirable to provide systems and methods that are capable of predicting the behavior of various entities or agents encountered by an autonomous vehicle" (Jammalamadaka, Para. 0004). This would create a more robust behavior prediction of agents on the road. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Jammalamadaka. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 13, Gao discloses: wherein the agent-specific features are a subset of the target agent history data characterizing current states of the plurality of target agents (see at least Para. 0031, "the scene data 142 includes at least (i) data characterizing observed trajectories for each of one or more agents in an environment, i.e., observed trajectories for one or more of the surrounding agents," wherein the target agent history context data characterizes current states of the plurality of target agents.)

Gao does not specifically state the limitation "wherein the agent-specific features are a subset of the target agent history data characterizing previous states of the plurality of target agents." However, Jammalamadaka teaches this limitation (see at least Para. 0050, "In other words a history of states/observations for an agent can be accumulated, for example, the Viterbi algorithm or another algorithm, to make the final prediction." The agent-specific features are a subset of target agent history data characterizing previous states of the target agents.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Jammalamadaka to include "wherein the agent-specific features are a subset of the target agent history data characterizing previous states of the plurality of target agents" in order to enhance the prediction of the agent's trajectory, since "it is desirable to provide systems and methods that are capable of predicting the behavior of various entities or agents encountered by an autonomous vehicle" (Jammalamadaka, Para. 0004). This would create a more robust behavior prediction of agents on the road. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Gao and Jammalamadaka.
The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chen, US 12420844 B2, discloses techniques to generate trajectory predictions; in at least one embodiment, trajectory predictions are generated based on, for example, one or more neural networks (see Abstract). Varadarajan et al., US 2022/0297728 A1, discloses agent trajectory prediction using context-sensitive fusion (see Abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL J LAMBERT, whose telephone number is (571) 272-4334. The examiner can normally be reached M-F, 10:00 am - 6:00 pm MDT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin Piateski, can be reached at (571) 270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Erin M Piateski/
Supervisory Patent Examiner, Art Unit 3669

/G.J.L./
Examiner, Art Unit 3669
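Stepping back to the claim 5 and 12-13 rejections above: the "agent history context data" they turn on is, mechanically, a rolling buffer of each agent's past states kept alongside its current state. A minimal sketch under that reading (the class and field names are hypothetical, not from either reference):

```python
from collections import defaultdict, deque

class AgentHistory:
    """Rolling per-agent state buffer: the newest entry is the current
    state; earlier entries are the previous states that claims 5 and
    12-13 add over a current-state-only baseline."""
    def __init__(self, max_steps=10):
        self._buf = defaultdict(lambda: deque(maxlen=max_steps))

    def observe(self, agent_id, state):
        self._buf[agent_id].append(state)   # oldest entries fall off

    def current(self, agent_id):
        return self._buf[agent_id][-1]

    def previous(self, agent_id):
        return list(self._buf[agent_id])[:-1]

hist = AgentHistory(max_steps=3)
for t, pos in enumerate([(0, 0), (1, 0), (2, 1), (3, 1)]):
    hist.observe("agent_7", {"t": t, "xy": pos})

assert hist.current("agent_7")["xy"] == (3, 1)                       # current state
assert [s["xy"] for s in hist.previous("agent_7")] == [(1, 0), (2, 1)]  # previous states
```

The bounded `deque(maxlen=...)` mirrors the practical point of the distinction: the claimed features carry a finite window of previous states, not just the latest observation.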

Prosecution Timeline

Oct 11, 2024: Application Filed
Dec 23, 2025: Non-Final Rejection (§103)
Mar 16, 2026: Applicant Interview (Telephonic)
Mar 16, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583464: STREAMING OBJECT DETECTION AND SEGMENTATION WITH POLAR PILLARS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12584761: METHODS AND SYSTEMS FOR PROVIDING DYNAMIC IN-VEHICLE CONTENT BASED ON DRIVING AND NAVIGATION DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12534880: WORKING MACHINE (granted Jan 27, 2026; 2y 5m to grant)
Patent 12512901: D-ATIS COLLECTION AND DISSEMINATION SYSTEMS AND METHODS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12497070: VEHICLE BEHAVIOR GENERATION DEVICE, VEHICLE BEHAVIOR GENERATION METHOD, AND VEHICLE BEHAVIOR GENERATION PROGRAM PRODUCT (granted Dec 16, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 79% (+11.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
