Prosecution Insights
Last updated: April 19, 2026
Application No. 18/146,427

Method for Extracting Features from Data of Traffic Scenario Based on Graph Neural Network

Final Rejection §103
Filed: Dec 26, 2022
Examiner: TAN, DAVID H
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)
Grant Probability: 31% (At Risk)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 31% (30 granted / 98 resolved; -24.4% vs TC avg)
Interview Lift: +15.8%, across resolved cases with interview
Avg Prosecution (typical timeline): 4y 1m; 41 currently pending
Total Applications (career history): 139, across all art units
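The headline metrics above follow from simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the counts are taken from this page; the function name and the additive interview-lift model are assumptions of this illustration, not a real analytics API):

```python
# Illustrative recomputation of the examiner metrics shown above.
# Inputs are the raw counts reported on this page; everything else
# (function name, additive lift model) is an assumption for illustration.

granted = 30            # cases granted by this examiner
resolved = 98           # total resolved cases
interview_lift = 0.158  # reported allow-rate lift when an interview is held

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved

base = allow_rate(granted, resolved)    # ~0.306, shown rounded as 31%
with_interview = base + interview_lift  # ~0.464, shown rounded as 46%

print(f"Career allow rate: {base:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

The dashboard's 31% and 46% figures are consistent with 30/98 plus the reported +15.8 point lift, rounded to whole percentages.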

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 63.5% (+23.5% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 98 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Final Rejection is filed in response to Applicant Arguments/Remarks Made in an Amendment filed 12/05/2025. Claims 1, 3, 7, 8, 10, and 13 are amended. Claims 2 and 11 are cancelled. In light of the amendments, the 35 U.S.C. 101 rejections of claims 1-13 are respectfully withdrawn. In light of the amendments, the 35 U.S.C. 112 rejections of claims 10-11 are respectfully withdrawn. Claims 1, 3-10, and 12-13 remain pending.

Response to Arguments

Argument 1: Applicant argues in Applicant Arguments/Remarks Made in an Amendment filed 12/05/2025, pg. 13-15, that Garimella fails to teach the primary claim limitations, “combining the graph neural network and a further neural network…”, and “jointly optimize both the graph neural network the further neural network”.

Response to Argument 1: Applicant’s arguments have been considered; however, in light of the amendments a newly found combination of prior art () is applied to updated rejections. The examiner notes that Garimella teaches at least a first and second neural network that represent graph neural networks that extract current traffic data and subsequently predict traffic feature positions, which feed their outputs and inputs to each other in an effort to find a most accurate observation and prediction. The following paragraphs of Garimella support this interpretation.

Para. [0024]: Since the first predicted position of the object is reflected by the updated graph node, the second distribution data may include second predicted positions from the first predicted position. The second distribution data may then be sampled to determine a second predicted position of the object. This process may be repeated any number of times.

Para. [0049-0051]: objects within the environment that were previously perceived by the autonomous vehicle 106 but may have moved to a predicted position at a future time, may be retained within the GNN and/or may be updated based on the prediction data determined from the previous GNN… the inference operations may use machine learning techniques (e.g., trained based on driving logs and/or other training data) to determine a predicted future state of the GNN based on the current state of the GNN.

Claim Objections

Claim 10 is objected to for the following reasons. Claim 10 recites the limitation “at processor configured to:” in line 5. The examiner recommends changing the limitation to “a processor configured to:”. Claim 10 recites the limitation “extract features from the traffic data scenario” after the limitation “extract the features from the data of the traffic scenario” in lines 13-15. The examiner recommends removing the repeated limitation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-10, and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20230159059 “Garimella”, and further in light of U.S. Patent Application Publication No. 20210232918 “Cheng”.

Claim 1: Garimella teaches a method for extracting features from data of a traffic scenario based on a graph neural network (i.e. para. [0034], “encoding features into a node and an edge of a graph neural network (GNN)”, wherein it is noted that the BRI for a traffic scenario encompasses sensor and map data of a vehicle in an environment), comprising:

(a) establishing uniformly defined data representations for the data of the traffic scenario (i.e. para. [0036], “operation 110 may include determining first data representing a state of the object in the environment. In some examples, the state may include, but is not limited to, a pose of the object, a position of the object, an acceleration of the object, a speed of the object, a size of the object, a type of the object, a lighting state of the object, and the like”, wherein the BRI for uniformly defined data encompasses how sensor and map data are processed and have features configured in a uniform format of feature vectors including vectorized object elements encoded as a feature vector);

(b) constructing a graph based on the data of the traffic scenario that has the uniformly defined data representations, wherein the constructed graph describes a temporal and/or spatial relationship between entities in the traffic scenario (i.e. para. [0021], “Once two or more graph nodes of the GNN have been determined and/or updated, an edge connecting the first node and the second node may be determined. In some examples, an edge connecting two graph nodes may be encoded with features associated with objects represented by the nodes relative to one another... The first edge may be encoded with features associated with the first object (e.g., the first and second feature) relative to the vehicle, and/or features associated with the vehicle relative to the first object”, wherein it is noted in para. [0195] that the graph node map may include spatial information and associated timestamps of objects);

(c) extracting the features from the data of the traffic scenario by providing the constructed graph as an input to the graph neural network (i.e. para. [0047], “At operation 138, the process 100 may include determining an output representing distribution data including first predicted positions for the object 130 in the future. In some examples, an inference operation may be performed to update the node states and/or edge features of the GNN”, wherein features for predicted positions are extracted from current traffic data); and

(d) combining the graph neural network and a further neural network configured to perform another task to form a combined neural network, the further neural network being configured to receive the features extracted by the graph neural network as inputs (i.e. para. [0103-0104], “The first ML model 602 may be configured to process the static scene data 608 to determine scene context features 610 associated with the environment. The scene context features 610 may include a number of channels corresponding to the features of the environment at the current timestep, where each channel may represent a feature (or a feature vector) at a position of the environment… The second ML model 604 may be configured to process an entity history 612 associated with the environment. In some examples, the entity history 612 may be based on previous iterations of the GNN and may include the features associated with each entity in the environment at each of the previous timesteps. The second ML model 604 may be configured to process the entity history 612 to determine entity features 614 for each entity at the current timestep”, wherein the BRI for a further neural network encompasses how a first GNN may be combined with the current version of the GNN; wherein it is noted that the BRI for the another task encompasses updating the predictive state of the perceived environment, which is different compared to finding a first predictive state of objects in an environment of the first version of the GNN; and wherein it is noted that the first GNN output may be input into the second GNN algorithm in order to train the combined GNNs to determine a predicted future state of the GNN based on the current state of the GNN).

While Garimella teaches combining two graph neural networks performing different tasks by inputting the output of one to the other, Garimella may not explicitly teach (e) training the combined neural network to jointly optimize both the graph neural network and the further neural network.

However, Cheng teaches (d) combining the graph neural network (i.e. para. [0026], “The denoising network 310 may be a multi-layer network that samples a sub-graph from a learned distribution of edges and outputs the sub-graph as a denoised graph”, wherein it is noted that a denoising network may be an initial graph neural network for denoising an input graph) and a further neural network configured to perform another task to form a combined neural network (i.e. para. [0026], Fig. 3, “With relaxation, the denoising networks 310 may be differentiable and may be jointly optimized with the GNN layers 320, guided by supervised downstream signals. At each stage, the GNN layer 320 outputs an embedding vector for a node. The next layer uses the embeddings in the previous layer to learn the embedding vectors of nodes in the current layer. The final node embeddings are provided by the last layer's embedding, and these final node embeddings can be used to perform the task”, wherein an input graph neural network is denoised and used as input to another GNN in order to produce a combined neural network that drops task-irrelevant edges). Cheng further teaches (e) training the combined neural network to jointly optimize both the graph neural network and the further neural network (i.e. para. [0017], “The denoising networks and the GNN may be jointly optimized in an end-to-end fashion. Thus, rather than removing edges randomly, or according to pre-defined heuristics, denoising may be performed in accordance with the supervision of the downstream objective in the training phase”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add training the combined neural network to jointly optimize both the graph neural network and the further neural network to Garimella’s iterative GNN predictive model for traffic scenarios, with the joint optimization of GNNs and denoising models, as taught by Cheng. One would have been motivated to combine Garimella with Cheng, and would have had a reasonable expectation of success in doing so, because robustness and generalization performance of GNNs may thereby be improved by learning to drop task-irrelevant edges (Cheng, para. [0014]).

Claim 3: Garimella teaches the method as claimed in claim 1, wherein the method further comprises: (f) adjusting tags of the data of the traffic scenario by using an output of the combined neural network (i.e. para.
[0051], “for other updates to the GNN that change the states of nodes, the modeling component also may perform any corresponding updates to the edge features connected to those nodes, so that the updated edge features store the accurate relative information based on the nodes associated with those edge features”, wherein the BRI for a tag encompasses a predicted position tag for the objects, which may be adjusted or updated).

Claim 4: Garimella teaches the method as claimed in claim 1, wherein: the data representations comprise geometric information (i.e. para. [0193], “a map may be any number of data structures modeled in two dimensions, three dimensions, or N-dimensions that are capable of providing information about an environment, such as, but not limited to… spatial information (e.g., image data projected onto a mesh, individual “surfels” (e.g., polygons associated with individual color and/or intensity)…”, wherein the BRI for geometric information encompasses polygon and image data related to spatial information) and annotation information (i.e. para. [0105], “the sampling technique 618 for sampling a predicted position distribution may be determined based on a classification type of a graph node (e.g., is the graph node associated with an autonomous vehicle, an object, a specific type of object, etc.)”, wherein the BRI for annotation information encompasses how a data node may be classified as a type of object), and the geometric information and the annotation information are configured to be stored together (i.e. para. [0043, 0048], “the nodes in the GNN may store sets of attributes representing an object, and the edge features may include data indicating the relative information (e.g., positions, poses, etc.) of pairs of nodes… the graph structure of the GNN includes nodes representing features associated with a state of an object 130 and/or features associated with map elements associated with the object 130”, wherein it is noted that respective node structures store both the associated spatial and classification information about its respective object).

Claim 5: Garimella teaches the method as claimed in claim 1, wherein: nodes of the graph represent the entities in the traffic scenario (i.e. para. [0021], the GNN may be updated by associating the first feature and the second feature with a graph node representing the first object), and edges of the graph represent a temporal and/or spatial relationship between the nodes (i.e. para. [0018], The GNN also may include an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) between pairs of objects in the GNN).

Claim 6: Garimella teaches the method as claimed in claim 1, wherein the entities in the traffic scenario include driving lane boundaries, traffic lights or traffic signs, traffic participants (i.e. para. [0028], “classification type of a graph node (e.g., is the graph node associated with the vehicle, an object, a specific type of object, etc.)”, wherein the BRI for traffic participants encompasses another detected vehicle on the road), obstacles, and/or instances.

Claim 7: Garimella teaches the method as claimed in claim 1, wherein the further neural network is a deep learning algorithm for different tasks (i.e. para. [0221], “Although discussed in the context of neural networks, any type of machine learning may be used consistent with this disclosure. For example, machine learning or machine-learned algorithms may include, but are not limited to… deep learning algorithms”, wherein it is noted that the deep learning algorithm may be used for determining adverse behavior maneuvers or different predicted trajectories of detected objects).

Claim 8: Garimella teaches the method as claimed in claim 1, wherein the further neural network is a convolutional neural network algorithm (i.e. para. [0172], Additionally, or alternatively, the second ML model may be configured as an RNN or a convolution neural network (CNN)), a recurrent neural network algorithm (i.e. para. [0171], the first ML model may be configured as a recurrent neural network (RNN)), and/or a graph neural network algorithm (i.e. para. [0021], With the first feature associated with the object and the second feature associated with the environment determined, the GNN may be generated and/or updated).

Claim 9: Garimella teaches the method as claimed in claim 1, wherein in step (c), the extracted features are highly abstract features (i.e. para. [0114], computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types) used to construct an environment model of the traffic scenario (i.e. para. [0211], the environment feature component 1242 may determine the feature associated with the environment by processing the data representing the view of the environment with an ML model).

Claim 10: Garimella teaches a computing device for extracting features from data of a traffic scenario based on a graph neural network, the computing device comprising: a memory; at processor (i.e. para. [0209], The computing device(s) 1234 may include processor(s) 1238 and a memory) configured to: collect data of a traffic scenario (i.e. para. [0035], At operation 102, the process 100 may include capturing and/or receiving sensor data of a physical or simulated environment) from different data sources (i.e. para. [0035], the sensor data 108 may comprise lidar data, radar data, sonar data, time-of-flight data, or other depth data) and establish uniformly defined data representations for the collected data of the traffic scenario (i.e. para. [0036], “operation 110 may include determining first data representing a state of the object in the environment. In some examples, the state may include, but is not limited to, a pose of the object, a position of the object, an acceleration of the object, a speed of the object, a size of the object, a type of the object, a lighting state of the object, and the like”, wherein the BRI for uniformly defined data encompasses how sensor and map data are processed and have features configured in a uniform format of feature vectors including vectorized object elements encoded as a feature vector); construct a graph based on the data of the traffic scenario that has the uniformly defined data representations (i.e. para. [0043], a modeling component associated with the vehicle 106 may receive vectorized representations of objects (e.g., map elements and/or objects) from the object feature component and/or the environment feature component, and may create new nodes within the GNN, remove nodes from the GNN, and/or modify existing nodes of the GNN based on the received map data and/or entity data), wherein the constructed graph describes a temporal and/or spatial relationship between entities in the traffic scenario (i.e. para. [0021], “Once two or more graph nodes of the GNN have been determined and/or updated, an edge connecting the first node and the second node may be determined. In some examples, an edge connecting two graph nodes may be encoded with features associated with objects represented by the nodes relative to one another... The first edge may be encoded with features associated with the first object (e.g., the first and second feature) relative to the vehicle, and/or features associated with the vehicle relative to the first object”, wherein it is noted in para. [0195] that the graph node map may include spatial information and associated timestamps of objects); extract the features from the data of the traffic scenario by providing the constructed graph as input to the graph neural network (i.e. para. [0047], “At operation 138, the process 100 may include determining an output representing distribution data including first predicted positions for the object 130 in the future. In some examples, an inference operation may be performed to update the node states and/or edge features of the GNN”, wherein features for predicted positions are extracted from current traffic data), extract features from the data of the traffic scenario, and use a deep learning algorithm for another task (i.e. para. [0221], “machine learning or machine-learned algorithms may include, but are not limited to… deep learning algorithms”, wherein the deep learning algorithm may be used for different prediction tasks of updating an object’s predicted position and trajectory) to optimize a graph neural network algorithm for extracting features (i.e. para. [0074], “Additionally, the modeling component may create and maintain edge features associated with node-pairs in the graph structure. As noted above, the nodes in the graph structure may store sets of attributes representing an object, and the edge features may include data indicating the relative information (e.g., positions, poses, etc.) of pairs of nodes”, wherein the BRI for optimizing encompasses how the GNN is recursively updated in order to have the most current and accurate prediction).
combine the graph neural network and a further neural network configured to perform another task to form a combined neural network, the further neural network being configured to receive the features extracted by the graph neural network as inputs (i.e. para. [0103-0104], “The first ML model 602 may be configured to process the static scene data 608 to determine scene context features 610 associated with the environment. The scene context features 610 may include a number of channels corresponding to the features of the environment at the current timestep, where each channel may represent a feature (or a feature vector) at a position of the environment… The second ML model 604 may be configured to process an entity history 612 associated with the environment. In some examples, the entity history 612 may be based on previous iterations of the GNN and may include the features associated with each entity in the environment at each of the previous timesteps. The second ML model 604 may be configured to process the entity history 612 to determine entity features 614 for each entity at the current timestep”, wherein the BRI for a further neural network encompasses how a first GNN may be combined with the current version of the GNN; wherein it is noted that the BRI for the another task encompasses updating the predictive state of the perceived environment, which is different compared to finding a first predictive state of objects in an environment of the first version of the GNN; and wherein it is noted that the first GNN output may be input into the second GNN algorithm in order to train the combined GNNs to determine a predicted future state of the GNN based on the current state of the GNN).

While Garimella teaches combining two graph neural networks performing different tasks by inputting the output of one to the other, Garimella may not explicitly teach to train the combined neural network to jointly optimize both the graph neural network and the further neural network. However, Cheng also teaches to combine the graph neural network (i.e. para. [0026], “The denoising network 310 may be a multi-layer network that samples a sub-graph from a learned distribution of edges and outputs the sub-graph as a denoised graph”, wherein it is noted that a denoising network may be an initial graph neural network for denoising an input graph) and a further neural network configured to perform another task to form a combined neural network (i.e. para. [0026], Fig. 3, “With relaxation, the denoising networks 310 may be differentiable and may be jointly optimized with the GNN layers 320, guided by supervised downstream signals. At each stage, the GNN layer 320 outputs an embedding vector for a node. The next layer uses the embeddings in the previous layer to learn the embedding vectors of nodes in the current layer. The final node embeddings are provided by the last layer's embedding, and these final node embeddings can be used to perform the task”, wherein an input graph neural network is denoised and used as input to another GNN in order to produce a combined neural network that drops task-irrelevant edges). Cheng further teaches to train the combined neural network to jointly optimize both the graph neural network and the further neural network (i.e. para. [0017], “The denoising networks and the GNN may be jointly optimized in an end-to-end fashion. Thus, rather than removing edges randomly, or according to pre-defined heuristics, denoising may be performed in accordance with the supervision of the downstream objective in the training phase”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add training the combined neural network to jointly optimize both the graph neural network and the further neural network to Garimella’s iterative GNN predictive model for traffic scenarios, with the joint optimization of GNNs and denoising models, as taught by Cheng. One would have been motivated to combine Garimella with Cheng, and would have had a reasonable expectation of success in doing so, because robustness and generalization performance of GNNs may thereby be improved by learning to drop task-irrelevant edges (Cheng, para. [0014]).

Claim 12: Garimella teaches a computer program product, comprising a computer program, wherein when the computer program is executed by a computer, the method as claimed in claim 1 is implemented (i.e. para. [0216], The memory 1218 and 1232 are examples of non-transitory computer-readable media. The memory 1218 and 1232 may store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20230159059 “Garimella”, as applied to claim 7 above, and further in light of U.S. Patent Application Publication No. 20220188667 “Burisch”.

Claim 13: Garimella teaches the method as claimed in claim 7, wherein the further task performed by the further neural network includes behavior planning, trajectory planning, vulnerable road user prediction, agent prediction, and planning (i.e. para. [0193], “the planner component 1224 may be communicatively coupled to the prediction component 1252 to generate predicted trajectories of objects in an environment. For example, the prediction component 1252 may generate one or more predicted trajectories for objects within a threshold distance from the vehicle 1202. In some examples, the prediction component 1252 may measure a trace of an object and generate a trajectory for the object based on observed and predicted behavior”, wherein the ML model may be configured as a recurrent neural network (RNN), and wherein the BRI for “behavior planning, trajectory planning, VRU prediction, agent prediction, and planning” encompasses the object prediction, trajectory prediction, trajectory collision prediction, agent-operated vehicle object prediction, and avoidance planning for predicted object positions).

While Garimella teaches prediction and planning using deep learning and a recurrent neural network, Garimella may not explicitly teach that the model planning is based on deep reinforcement learning. However, Burisch teaches a prediction planning model that is based on deep reinforcement learning (i.e. para. [0043], “the prediction training framework 200A may include a known prediction module 212. In certain embodiments, the known prediction module 212 may include one or more machine learning (ML) algorithms, which may include, for example, one or more deep learning algorithms… the one or more ANNs may include… a recurrent neural network (RNN)… deep reinforcement learning, and so forth”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add a prediction model that is based on deep reinforcement learning to Garimella’s prediction model, with a prediction framework that includes a recurrent neural network that uses deep reinforcement learning, as taught by Burisch. One would have been motivated to combine Garimella with Burisch, and would have had a reasonable expectation of success in doing so, in order to encourage fast learning in unfamiliar scenarios.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 20190108639 “Tchapmi” recites in para. [0017]: performing a first training stage including optimizing a 3D Neural Network (3D NN) using a training data set including sets of 3D points with semantic annotations to obtain an optimized 3D NN; and performing a second training stage using the optimized 3D NN including optimizing over a joint framework including the optimized 3D NN and a graph neural network that outputs 3D point semantic labels using the training data set.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN whose telephone number is (571) 272-7433. The examiner can normally be reached M-F 7:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.T./ Examiner, Art Unit 2145
/CESAR B PAULA/ Supervisory Patent Examiner, Art Unit 2145
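The limitation at the center of this rejection, training a combined network so that a feature-extracting GNN and a further task network are jointly optimized, corresponds to what machine-learning practice calls end-to-end training: one loss, with gradients flowing through the downstream network back into the extractor. A toy sketch of that concept in plain Python (scalar weights stand in for the two networks; this is illustrative only, not code from Garimella, Cheng, or the application):

```python
# Didactic sketch of joint (end-to-end) optimization of a composed model.
# "Extractor" and "head" are single scalar weights standing in for a GNN
# feature extractor and a further task network, respectively.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2 * x

w_extractor = 0.5  # stands in for the GNN's parameters
w_head = 0.5       # stands in for the further network's parameters
lr = 0.01

def forward(x, w1, w2):
    feature = w1 * x     # "feature extraction"
    return w2 * feature  # "further task" consumes the extracted feature

for epoch in range(200):
    g_extractor = g_head = 0.0
    for x, y in data:
        err = forward(x, w_extractor, w_head) - y
        # One squared-error loss, two gradients: the error signal from the
        # downstream task propagates back into the extractor -- that back-flow
        # is what makes the optimization "joint" rather than stage-wise.
        g_extractor += 2 * err * w_head * x / len(data)
        g_head += 2 * err * w_extractor * x / len(data)
    w_extractor -= lr * g_extractor
    w_head -= lr * g_head

print(w_extractor * w_head)  # converges near 2.0: both parts adapt together
```

Training the two weights separately (freezing one while fitting the other) can stall at a worse composite fit; updating both from the shared objective, as Cheng's para. [0017] describes for denoising networks and GNN layers, is the distinction the amended claims rely on.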

Prosecution Timeline

Dec 26, 2022: Application Filed
Oct 02, 2025: Non-Final Rejection (§103)
Dec 05, 2025: Response Filed
Mar 06, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443336: INTERACTIVE USER INTERFACE FOR DYNAMICALLY UPDATING DATA AND DATA ANALYSIS AND QUERY PROCESSING (granted Oct 14, 2025; 2y 5m to grant)
Patent 12282863: METHOD AND SYSTEM OF USER IDENTIFICATION BY A SEQUENCE OF OPENED USER INTERFACE WINDOWS (granted Apr 22, 2025; 2y 5m to grant)
Patent 12182378: METHODS AND SYSTEMS FOR OBJECT SELECTION (granted Dec 31, 2024; 2y 5m to grant)
Patent 12111956: Methods and Systems for Access Controlled Spaces for Data Analytics and Visualization (granted Oct 08, 2024; 2y 5m to grant)
Patent 12032809: Computer System and Method for Creating, Assigning, and Interacting with Action Items Related to a Collaborative Task (granted Jul 09, 2024; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 31%
With Interview: 46% (+15.8%)
Median Time to Grant: 4y 1m
PTA Risk: Moderate
Based on 98 resolved cases by this examiner. Grant probability derived from career allow rate.
