Prosecution Insights
Last updated: April 19, 2026
Application No. 18/925,624

Learned Validation Metric for Evaluating Autonomous Vehicle Motion Planning Performance

Non-Final OA — §103, §DP
Filed: Oct 24, 2024
Examiner: TANG, BRYANT
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Aurora Operations, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 90% (55 granted / 61 resolved) — above average, +38.2% vs TC avg
Interview Lift: -3.4% (minimal), based on resolved cases with interview
Typical Timeline: 2y 6m avg prosecution; 25 currently pending
Career History: 86 total applications across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 29.6% (-10.4% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
TC averages are estimates • Based on career data from 61 resolved cases

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Joint Inventors

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on October 24th, 2024 and March 19th, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Double Patenting

Claims 21-40 of this application are patentably indistinct from claims 1-20 of Application No. 18/633,191, corresponding to U.S. Patent No. 12,151,707 B1.
Pursuant to 37 CFR 1.78(f), when two or more applications filed by the same applicant or assignee contain patentably indistinct claims, elimination of such claims from all but one application may be required in the absence of good and sufficient reason for their retention during pendency in more than one application. Applicant is required to either cancel the patentably indistinct claims from all but one application or maintain a clear line of demarcation between the applications. See MPEP § 822.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1-20 of U.S. Patent No. 12,151,707 B1.

Regarding Claims 21-26, 31-36 and 40, these claims contain the same subject matter and scope as claims 1-6, 8, 17 and 20 of U.S. Patent No. 12,151,707 B1.
Although the claims at issue are not identical, they are not patentably distinct from each other because the only distinguishing limitations in the instant application include the replacement of “AV trajectory” with “test trajectory”, which is simply a broader term for describing the same feature. Additionally, the limitation of the computer-implemented method being used “for validating a trajectory generated by an autonomous vehicle control system” has been removed, thus further broadening the scope for the claimed invention in the instant application over that of U.S. Patent No. 12,151,707 B1. Claims 31 and 40 simply fall under separate statutory categories, but since claim 20 of U.S. Patent No. 12,151,707 B1 already claims a “computing system”, the same analysis applies. Furthermore, instead of describing later steps in the process as “determining […] divergence values” and “providing […] divergence values to a machine-learned model”, claims 21, 31 and 40 simply replace these terms with “generating […] divergence values”, which does not change the meaning or scope of either limitation. Lastly, the final step in the process for “validating the AV trajectory based on the score” has also been removed, continuing to broaden the method and system in the instant application over that of U.S. Patent No. 12,151,707 B1.

Claims 22-24 and 32-34 further claim generating a “state value” to describe a “match between the test trajectory and the reference trajectory”, while also claiming an indication of “whether the sets of divergence values received as inputs correspond to matched pairs of trajectories”. These limitations are patentably indistinct from the limitations involving validation in U.S. Patent No. 12,151,707 B1.
Rather than describing this process as “matching”, claim 3 states the process indicates “whether there is a material divergence between the training trajectory and the reference trajectory”, and that the “learned parameters respectively correspond to the plurality of divergence metrics”. In other words, saying there is a matching process for two trajectories is the same as saying the process includes determining if there is a divergence or correspondence between the two trajectories, which again, are the same trajectories being described in the instant application. Claims 24 and 34 include “deploying the autonomous vehicle motion planning system”, which is the same as the limitation of “a planned trajectory output by a motion planner of the autonomous vehicle control system” in claim 17 of U.S. Patent No. 12,151,707 B1. Claims 25 and 35 are just combinations of the limitations in claims 5 and 8 in U.S. Patent No. 12,151,707 B1, with the only distinguishing limitation being the replacement of “validation time” with “first time” in the instant application. Claims 26 and 36 are the same as claim 6. Other than what has been stated, these claims are the same.

Regarding Claims 27 and 37, these claims contain the same subject matter and scope as claims 7 and 9-10 of U.S. Patent No. 12,151,707 B1. Although the claims at issue are not identical, they are not patentably distinct from each other because the only distinguishing limitations in the instant application include describing the “plurality of context domains” and “a weighted combination of the plurality of component divergence values” as a “linear combination” with the term “monotonic”. Examiner notes this simply means a sequence or function that consistently moves in a single direction, thus being the same as the “linear combination” of “weighting parameters” in claim 10.

Regarding Claims 28 and 38, these claims contain the same subject matter and scope as claims 4 and 9 of U.S. Patent No. 12,151,707 B1.
Although the claims at issue are not identical, they are not patentably distinct from each other because claims 28 and 38 are just combinations of the limitations in both claims 4 and 9.

Regarding Claims 29 and 39, these claims contain the same subject matter and scope as claim 14 of U.S. Patent No. 12,151,707 B1. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 29 and 39 of the instant application remove the limitation of “the curated set of trajectory matches” and replace it with “the match labels”, which are much broader terms for describing the same feature. Other than that, these claims are exactly the same.

Regarding Claim 30, this claim contains the same subject matter and scope as claims 1-2 and 11 of U.S. Patent No. 12,151,707 B1, respectively. Although the claims at issue are not identical, they are not patentably distinct from each other because the only distinguishing limitations in the instant application include replacing the concept of a “divergence encoder […] based on the AV trajectory and using one or more machine-learned parameters, the respective component divergence” with “extracting features” as input for “generating […] the plurality of divergence values”. These limitations are not written the same, but they fall under the same scope and describe the same invention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 21-22, 31, 33 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US Patent Pub. No. 2023/0406345 A1), herein “Wu”, in view of Beijbom et al. (US Patent No. 11,203,362 B1), herein “Beijbom”.

Regarding Claims 21, 31 and 40, Wu discloses a computer-implemented method, autonomous vehicle control system for controlling an autonomous vehicle and a computing system, comprising: (per Claim 31 only) a motion planning system that is configured to process data descriptive of an environment and output motion plans that define a motion of the autonomous vehicle in the environment, the motion planning system validated by validation operations (See 0025, “[…] a computer-implemented method may include generating, by a processing device, a controlled trajectory of an autonomous driving vehicle (ADV) in a scenario. The controlled trajectory may be executable by the ADV to drive autonomously in the scenario.
The method further includes receiving a set of data acquired in a number of driving demonstrations in the scenario and identifying a distribution pattern of the set of data acquired […] compute a similarity score based on comparisons between the controlled trajectory and the distribution pattern of the number of driving trajectories.”); obtaining a test trajectory generated by an autonomous vehicle motion planning system and a reference trajectory, wherein the reference trajectory describes a reference motion of a vehicle in a reference scenario (See 0025 as referenced above. See also 0029, “[…] a first similarity score between a first controlled trajectory generated by the processing device and the set of data of the number of driving demonstrations in the scenario […] computing a second mean square error and a second similarity score between a second controlled trajectory […]” Examiner notes the first controlled trajectory is the same as a reference trajectory, and describes information of a driving scenario, while the second controlled trajectory, which is used for comparison, is the same as a test trajectory); and generating, respectively based on a plurality of divergence metrics, a plurality of divergence values that respectively characterize a plurality of differences between the test trajectory and the reference trajectory (See 0025 as referenced above. See also 0051, “[…] similarity score may be computed by calculating a Kullback-Leibler divergence (K-L divergence, or I-divergence); an f-divergence; or an H-divergence […] the similarity score may be computed by determining one or more integral probability metrics, as well as by evaluating a probability density of a predicted value of a probability distribution of the distribution pattern […]”). 
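The divergence computation Wu's ¶0051 is cited for can be sketched briefly. This is an illustrative reconstruction only, not code from Wu: the lateral-offset samples, bin counts, and range are invented for the example, which discretizes two trajectories into histograms and computes the Kullback-Leibler divergence between them.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    probability distributions given as equal-length lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def to_distribution(samples, bins, lo, hi):
    """Histogram samples into equal-width buckets and normalize to sum to 1."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for s in samples:
        idx = min(int((s - lo) / width), bins - 1)
        counts[idx] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Hypothetical lateral offsets (meters) sampled along two trajectories.
reference = [0.1, 0.2, 0.2, 0.3, 0.4, 0.4, 0.5]
test      = [0.1, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7]

p = to_distribution(reference, bins=4, lo=0.0, hi=1.0)
q = to_distribution(test, bins=4, lo=0.0, hi=1.0)
score = kl_divergence(p, q)  # 0.0 only when the distributions coincide
```

The same skeleton accommodates the other measures ¶0051 mentions (f-divergence, H-divergence) by swapping out `kl_divergence`.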
But does not explicitly disclose generating, by a machine-learned model and based on the plurality of divergence values, a score characterizing an aggregate divergence between the test trajectory and the reference trajectory, the machine-learned model trained to output, based on sets of divergence values received as inputs, scores that correspond to match labels respectively associated with the sets of divergence values received as inputs. Beijbom, in a similar field of endeavor, teaches generating, by a machine-learned model and based on the plurality of divergence values, a score characterizing an aggregate divergence between the test trajectory and the reference trajectory, the machine-learned model trained to output, based on sets of divergence values received as inputs, scores that correspond to match labels respectively associated with the sets of divergence values received as inputs (See Col. 1 Lines 63-67, “[…] predicting, using a machine learning model with the list of scores as input, reasonableness scores for the set of realizations; obtaining, using the one or more processors, annotations from a plurality of human annotators, the annotations indicating a reasonableness of each realization […]” See also Col. 5 Line 66 – Col. 6 Line 2, “[…] a “scene description” is a data structure (e.g., list) or data stream that includes one or more classified or labeled objects detected by one or more sensors on the AV vehicle or provided by a source external to the AV.”). 
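The kind of learned aggregation Beijbom is cited for — a model trained on sets of divergence values and match labels that outputs a match score — can be sketched as a tiny logistic regression. Everything here is hypothetical: the three divergence features, the training pairs, and the labels are invented, and neither reference discloses this implementation.

```python
import math

def train_match_scorer(examples, labels, lr=0.5, epochs=2000):
    """Logistic regression via SGD. Each example is a vector of
    divergence values; labels are 1 (matched pair) or 0 (non-match).
    Returns learned (weights, bias)."""
    n = len(examples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def match_score(w, b, divergences):
    """Aggregate divergence values into a single match score in (0, 1)."""
    z = sum(wi * xi for wi, xi in zip(w, divergences)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented training data: [KL, lateral RMSE, heading RMSE] per trajectory pair.
X = [[0.02, 0.1, 0.05], [0.05, 0.2, 0.1], [1.5, 2.0, 0.9], [2.2, 1.8, 1.1]]
y = [1, 1, 0, 0]  # match labels
w, b = train_match_scorer(X, y)
```

After training, small divergence vectors score near 1 (match) and large ones near 0, which is the mapping from divergence-value inputs to label-consistent scores that the claim language describes.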
In view of Beijbom’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method and system for generating multiple trajectories for comparison to determine a similarity score representing divergence between two trajectories as disclosed by Wu, a machine-learned model trained by inputting the resulting divergence values and outputting similarity scores for matching with the respective inputs, with a reasonable expectation of success, since both inventions operate in the same field of machine learning for autonomous trajectory evaluation, and the components necessary to process the input information are already present. Furthermore, the combination would yield predictable results since both refer to mapping input feature sets to learned numeric scores.

Regarding Claims 22 and 33, Wu further discloses the computer-implemented method of claim 21 and autonomous vehicle control system of claim 31, comprising: generating, based on the score, a state value describing a match between the test trajectory and the reference trajectory (See 0031, “[…] further computes a similarity score based on comparisons between the controlled trajectory and the distribution pattern of the plurality of driving demonstrations.” See also 0043, “[…] the similarity score may be used to select or identify how well an algorithm performs and/or what parameters may be used to achieve certain performance of a given algorithm. In some cases, the similarity score may be used to improve subsequent iterations of the controlled trajectories.”).

Claims 23-24, 32 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US Patent Pub. No. 2023/0406345 A1) in view of Beijbom et al. (US Patent No. 11,203,362 B1) as applied to claim 21 above, and further in view of Li et al. (US Patent Pub. No. 2018/0374359 A1), herein “Li”.
Regarding Claims 23 and 32, Wu in view of Beijbom teaches the computer-implemented method of claim 21 and autonomous vehicle control system of claim 31, but does not explicitly teach wherein the match labels respectively indicate whether the sets of divergence values received as inputs correspond to matched pairs of trajectories. Li, in a similar field of endeavor, teaches the match labels respectively indicate whether the sets of divergence values received as inputs correspond to matched pairs of trajectories (See 0025, “The similarity score represents the difference or similarity between a trajectory represented by the predicted features and an actual trajectory […] similarity score closer to a first predetermined value (e.g., 1) indicates that the predicted trajectory is similar to the corresponding actual trajectory. A similarity score closer to a second predetermined value (e.g., −1) indicates that the predicted trajectory is dissimilar to the corresponding actual trajectory. Using a large amount of predicted trajectories and corresponding actual trajectories, the DNN model or models can be trained more accurately.” Examiner notes the model is trained on trajectory pairs with known matches (predicted vs. actual) to output similarity). In view of Li’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method and system for generating multiple trajectories for comparison to determine a similarity score representing divergence between two trajectories and training a machine-learned model using relevant matching information as taught by Wu in view of Beijbom, the labels indicating a correspondence between pairs of trajectories, with a reasonable expectation of success, since both inventions operate in the same field of machine learning for autonomous trajectory evaluation, and the components necessary to process the input information are already present. 
Furthermore, both inventions address scoring functions that output numerical evaluations from trajectory comparisons using machine learning, so integrating the correspondence of the trajectory pair(s) that are already being compared is a natural integration into a learned trajectory similarity model.

Regarding Claims 24 and 34, Wu further discloses the computer-implemented method of claim 21 and autonomous vehicle control system of claim 31, comprising: based on the score indicating a valid match between the test trajectory and the reference trajectory, deploying the autonomous vehicle motion planning system to an autonomous vehicle control system for an autonomous vehicle (See 0040, “[…] controlled trajectory is generated by a planning module of an ADV to drive autonomously in the scenario. For example, the controlled trajectory may include a planned behavior of the ADV and control input (e.g., speeds and steering controls) needed to realize such a planned behavior.” See also 0043, “[…] similarity score may be used to select or identify how well an algorithm performs and/or what parameters may be used to achieve certain performance of a given algorithm. In some cases, the similarity score may be used to improve subsequent iterations of the controlled trajectories.”).

Claims 25-26, 28, 35-36 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US Patent Pub. No. 2023/0406345 A1) in view of Beijbom et al. (US Patent No. 11,203,362 B1) as applied to claim 21 above, and further in view of Patel et al. (US Patent Pub. No. 2023/0174103 A1), herein “Patel”.
Regarding Claims 25 and 35, Wu in view of Beijbom teaches the computer-implemented method of claim 21 and autonomous vehicle control system of claim 31, but does not explicitly teach the method comprising: weighting a contribution of at least one divergence value of the plurality of divergence values using a context value obtained using a context metric, wherein the context metric measures an interval between: a first time; and a second time associated with the at least one divergence value. Patel, in a similar field of endeavor, teaches the method comprising: weighting a contribution of at least one divergence value of the plurality of divergence values using a context value obtained using a context metric (See 0079, “[…] score associated with an environmental policy preferably corresponds to a probability of that (candidate) policy being implemented by the environmental agent, which is effectively integrated into a weight […]” See also 0088, “[…] feasibility score can be determined for a prospective (future) time period, over which the agent motion is predicted/simulated, which can be a predetermined prospective prediction period (e.g., 3 seconds, 5 seconds, etc.), a period associated with a predetermined traversal distance (e.g., loom), a dynamically sized window (e.g., as a function of speed, context, etc.), and/or any other suitable time period or prediction window.” See also 0105, “[…] the evaluation is weighted based on the aggregate score of each environmental policy of the set.”), wherein the context metric measures an interval between: a first time (See 0088 as referenced above); and a second time associated with the at least one divergence value (See 0088 as referenced above). 
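The interval-based context weighting Patel's ¶0088 is cited for can be illustrated with a minimal sketch. The linear decay function, the 5-second horizon, and the sample values below are assumptions for illustration, not Patel's disclosure: each divergence value is scaled by a context value derived from the interval between a first (reference) time and the time at which that divergence was measured.

```python
def context_value(first_time, second_time, horizon=5.0):
    """Context metric over the interval between two timestamps:
    divergences measured closer to `first_time` weigh more, decaying
    linearly to zero at `horizon` seconds (assumed decay shape)."""
    interval = abs(second_time - first_time)
    return max(0.0, 1.0 - interval / horizon)

def weighted_divergence(divergences, times, first_time=0.0):
    """Aggregate divergence values, each scaled by its context value."""
    return sum(d * context_value(first_time, t)
               for d, t in zip(divergences, times))

# Invented divergence samples taken 1 s apart along a trajectory pair.
divs = [0.2, 0.4, 0.8, 1.6]
times = [0.0, 1.0, 2.0, 3.0]
total = weighted_divergence(divs, times)  # early samples dominate
```

With this decay, the growing late-trajectory divergences are progressively discounted, which is one way to read "emphasizing divergence values occurring during more critical temporal intervals."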
In view of Patel’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method and system for generating multiple trajectories for comparison to determine a similarity score representing divergence between two trajectories as taught by Wu in view of Beijbom, weighting divergence values and applying context metrics and domains for aggregation via a machine-learned model with respect to intervals of time, with a reasonable expectation of success, since both inventions are directed to validation and evaluation of autonomous vehicle trajectories and address the solution to the problem of determining how closely generated trajectories align with a reference under varying contextual conditions. Furthermore, incorporating context-aware weights into the existing divergence computation techniques would predictably improve validation accuracy by emphasizing divergence values occurring during more critical temporal intervals.

Regarding Claims 26 and 36, Wu in view of Beijbom teaches the computer-implemented method of claim 21 and autonomous vehicle control system of claim 31, but does not explicitly teach the method comprising: determining, using a context value obtained using a context metric based on an attribute of the test trajectory or the reference trajectory, a context domain for at least one divergence value of the plurality of divergence values; and weighting the at least one divergence value based on a weighting parameter associated with the context domain.
Patel, in a similar field of endeavor, teaches the method comprising: determining, using a context value obtained using a context metric based on an attribute of the test trajectory or the reference trajectory, a context domain for at least one divergence value of the plurality of divergence values (See 0079, 0088 and 0105 as referenced above); and weighting the at least one divergence value based on a weighting parameter associated with the context domain (See 0079, 0088 and 0105 as referenced above).

In view of Patel’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method and system for generating multiple trajectories for comparison to determine a similarity score representing divergence between two trajectories as taught by Wu in view of Beijbom, weighting divergence values and applying context metrics and domains for aggregation via a machine-learned model with respect to a parameter of the domain, with a reasonable expectation of success, since both inventions are directed to validation and evaluation of autonomous vehicle trajectories and address the solution to the problem of determining how closely generated trajectories align with a reference under varying contextual conditions. Furthermore, incorporating context-aware weights, specifically with respect to time, into the existing divergence computation techniques would predictably improve validation accuracy by emphasizing divergence values occurring during more critical temporal intervals.

Regarding Claims 28 and 38, Wu in view of Beijbom teaches the computer-implemented method of claim 21 and autonomous vehicle control system of claim 31, but does not explicitly teach wherein the score comprises a weighted combination of the plurality of divergence values, the weighted combination weighted by learnable parameters of the machine-learned model.
Patel, in a similar field of endeavor, teaches the score comprises a weighted combination of the plurality of divergence values, the weighted combination weighted by learnable parameters of the machine-learned model (See 0079, 0088 and 0105 as referenced above. See also 0098, “[…] additionally or alternatively include calculating any other scores or combination of scores for use in evaluating the set of policies. S300 can optionally include aggregating (e.g., multiplying, averaging, calculating a median, etc.) any or all of the scores calculated in S300, such as aggregating any or all of: a future feasibility score, a historical score, and a prior(s) scores […]”).

In view of Patel’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method and system for generating multiple trajectories for comparison to determine a similarity score representing divergence between two trajectories as taught by Wu in view of Beijbom, weighting divergence values and applying context metrics and domains for aggregation via a machine-learned model with respect to a parameter of the domain in a combination, with a reasonable expectation of success, since both inventions are directed to validation and evaluation of autonomous vehicle trajectories and address the solution to the problem of determining how closely generated trajectories align with a reference under varying contextual conditions. Furthermore, incorporating context-aware weights, specifically with respect to multiple intervals of time, into the existing divergence computation techniques would predictably improve validation accuracy by emphasizing divergence values occurring during more critical temporal intervals.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bryant Tang whose telephone number is (571)270-0145. The examiner can normally be reached M-F 8-5 CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden, can be reached at (571)272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRYANT TANG/
Examiner, Art Unit 3658

/JASON HOLLOWAY/
Primary Examiner, Art Unit 3658

Prosecution Timeline

Oct 24, 2024
Application Filed
Nov 21, 2024
Response after Non-Final Action
Feb 03, 2026
Non-Final Rejection — §103, §DP
Apr 10, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594942
Method and Apparatus for Detecting Complexity of Traveling Scenario of Vehicle
2y 5m to grant • Granted Apr 07, 2026
Patent 12594967
METHOD AND SYSTEM FOR ADDRESSING FAILURE IN AN AUTONOMOUS AGENT
2y 5m to grant • Granted Apr 07, 2026
Patent 12583115
ENHANCED VISUAL FEEDBACK SYSTEMS, ENHANCED SKILL LIBRARIES, AND ENHANCED FUNGIBLE TOKENS FOR THE OPERATION OF ROBOTIC SYSTEMS
2y 5m to grant • Granted Mar 24, 2026
Patent 12558964
VEHICLE PROVIDING NOTIFICATION INFORMATION FOR SAFETY OF A USER
2y 5m to grant • Granted Feb 24, 2026
Patent 12548450
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND NON-TRANSITORY STORAGE MEDIUM STORING VEHICLE CONTROL PROGRAM
2y 5m to grant • Granted Feb 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 87% (-3.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 61 resolved cases by this examiner. Grant probability derived from career allow rate.
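The projection figures reconcile arithmetically if the interview lift is applied as a simple additive adjustment to the career allow rate (a simplifying assumption; the page does not state its exact formula):

```python
# Figures from the examiner's career stats above.
granted, resolved = 55, 61
allow_rate = granted / resolved * 100          # ~90.2%, reported as 90%
interview_lift = -3.4                          # percentage points
with_interview = allow_rate + interview_lift   # ~86.8%, reported as 87%
```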
