Prosecution Insights
Last updated: April 19, 2026
Application No. 18/120,325

GENERATING WORST-CASE CONSTRAINTS FOR AUTONOMOUS VEHICLE MOTION PLANNING

Office Action status: Non-Final (§103 and nonstatutory double patenting)
Filed: Mar 10, 2023
Examiner: ALHARBI, ADAM MOHAMED
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Motional Ad LLC
OA Round: 3 (Non-Final)

Grant probability: 88% (Favorable)
Expected OA rounds: 3-4
Estimated time to grant: 2y 8m
Grant probability with interview: 91%
Examiner Intelligence

Career allow rate: 88% (554 granted / 630 resolved), +35.9% vs Tech Center average (above average).
Interview lift: +2.8% (minimal), measured across resolved cases with an interview.
Typical timeline: 2y 8m average prosecution; 33 applications currently pending.
Career history: 663 total applications across all art units.

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 630 resolved cases.
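The per-statute deltas above are internally consistent with a single implied Tech Center baseline. A minimal sketch, using only the rates and deltas listed above (baseline = rate − delta):

```python
# Rates and deltas vs. Tech Center average, as listed above (percent).
stats = {
    "101": (5.3, -34.7),
    "103": (58.6, +18.6),
    "102": (22.0, -18.0),
    "112": (5.5, -34.5),
}

# Implied Tech Center baseline for each statute: rate minus delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)
```

Every statute backs out to the same 40.0% figure, which suggests the dashboard compares against one normalized Tech Center estimate rather than per-statute averages (an inference, not stated in the source).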

Office Action

Rejection bases: §103 (obviousness) and nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 10, 2025 has been entered.

Status of Claims

This Office Action is in response to the application filed on December 10, 2025. Claims 6, 10, 18, and 22 have been canceled. Claims 1-5, 7-9, 11, 13-17, 19-21, and 23 have been amended. Claims 1-5, 7-9, 11-17, 19-21, 23, and 24 are presently pending and are presented for examination.

Response to Amendments

In response to the amendments dated December 10, 2025, the examiner withdraws the previous art rejections.

Response to Arguments

Applicant's arguments filed December 10, 2025 have been fully considered, but they are moot in view of the new ground(s) of rejection.

Double Patenting

Claims 1-4, 7, 13-16, and 24 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 6, 12-13, and 17-20 of application No.
18/164,652 as follows:

Current application's claims | Reference application's claims
1  | 1 and 2
2  | 2
3  | 3
4  | 4
7  | 6
13 | 17 and 18
14 | 18
15 | 19
16 | 20
24 | 12 and 13

Although the claims at issue are not identical, they are not patentably distinct from each other because the current claims additionally recite generating a most constraining worst-case homotopy corresponding to a nominal homotopy based on worst-case trajectories of agents and generating control signals for operating the vehicle based on the nominal homotopy and the corresponding most constraining worst-case homotopy. The reference(s) relied upon in the §103 rejection below teach the additionally recited limitations, and therefore it would have been obvious to modify the reference application to include such limitations. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-l.jsp.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-24 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 20210108936 (hereinafter "Seegmiller"; newly of record) in view of U.S. Pub. No. 20210031760 (hereinafter "Ostafew"; previously of record).
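For orientation, the claimed method that the rejection addresses (pairing each nominal homotopy with a most constraining worst-case homotopy, selecting one, and deriving a control signal) can be read as a small planning loop. The sketch below is purely illustrative: every function name, data shape, scoring rule, and numeric value is a hypothetical stand-in, not taken from the application, Seegmiller, or Ostafew.

```python
# Illustrative-only sketch of the claimed flow. All names and values are
# hypothetical stand-ins; "free_space" is an invented proxy for how
# constraining a worst-case homotopy is.

def most_constraining(worst_case_homotopies):
    # Stand-in rule: the worst-case homotopy leaving the least free space.
    return min(worst_case_homotopies, key=lambda h: h["free_space"])

def plan(nominal_homotopies, worst_case_by_nominal):
    # Pair each nominal homotopy with its most constraining worst case.
    paired = []
    for nominal in nominal_homotopies:
        worst = most_constraining(worst_case_by_nominal[nominal["id"]])
        paired.append((nominal, worst))
    # Stand-in selection: prefer the pair whose worst case leaves the most room.
    nominal, worst = max(paired, key=lambda p: p[1]["free_space"])
    # Stand-in "control signal": cap speed by the worst case's free space.
    return {"homotopy": nominal["id"],
            "speed": min(nominal["speed"], worst["free_space"])}

nominals = [{"id": "pass_left", "speed": 12.0}, {"id": "follow", "speed": 8.0}]
worst_cases = {
    "pass_left": [{"free_space": 1.5}, {"free_space": 0.4}],
    "follow": [{"free_space": 3.0}, {"free_space": 2.2}],
}
print(plan(nominals, worst_cases))  # the "follow" pair wins under this rule
```

The point of the sketch is only the structure the claims recite: worst-case information is folded into the selection between nominal homotopies and into the control output, rather than being handled after selection.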
Regarding claim 1, Seegmiller discloses a method comprising:

generating, with at least one processor, nominal homotopies for a nominal vehicle scenario, the homotopies generated based on nominal trajectories of agents proximate to the vehicle (“performing topological planning to identify one or more topologically distinct classes of trajectories, wherein each of the one or more topologically distinct classes is associated with a plurality of trajectories that take the same combination of discrete actions with respect to objects in the local region” (claim 1), “The trajectories 431a-n in FIG. 4C are homotopic, meaning they can be continuously deformed into one another without encroaching on any obstacles” (para 0051), Fig. 4C, #431a-n, and Fig. 4D, #432a-n);

generating, with the at least one processor, … scenarios for the nominal vehicle scenario based on map information and perception information (“The prediction subsystem 123 may predict the future locations, trajectories, and/or actions of the objects based at least in part on perception information (e.g., the state data for each object) received from the perception subsystem 122, the location information received from the location subsystem 121, the sensor data, and/or any other data that describes the past and/or current state of the objects, the autonomous vehicle 101, the surrounding environment, and/or their relationship(s)” (para 0033));

generating, with the at least one processor, a most constraining worst-case homotopy corresponding to each nominal homotopy based on the worst-case trajectories for the agents; selecting, with the at least one processor, a nominal homotopy from the nominal homotopies (“The motion planning subsystem 124 may generate the trajectory by performing topological planning to generate a set of constraints for each of a plurality of topologically distinct classes of trajectories, optimizing a single candidate trajectory for each class, and scoring the candidate trajectories to select an optimal trajectory. Topological classes are distinguished by the discrete actions taken with respect to obstacles or restricted map areas” (para 0037); “Topological planning ensures the system can generate a trajectory for every topologically distinct class…ensures just one trajectory is optimized and scored for each topologically distinct class” (para 0073));

generating, with the at least one processor, at least one control signal to operate the vehicle based on the selected nominal homotopy and corresponding most constraining worst-case homotopy (“the motion planning subsystem 124 also plans a trajectory (“trajectory generation”) for the autonomous vehicle 101 to travel on a given route (e.g., a nominal route generated by the routing module 112(b)). The trajectory specifies the spatial path for the autonomous vehicle as well as a velocity profile. The controller converts the trajectory into control instructions for the vehicle control system, including but not limited to throttle/brake and steering wheel angle commands” (para 0036)).

However, Seegmiller does not explicitly teach … a set of worst-case scenarios; for each worst-case scenario, generating, with the at least one processor, worst-case trajectories for the agents.

Ostafew, in the same field of endeavor, teaches … a set of worst-case scenarios (“Observation or prediction uncertainty can arise due to the sensor data themselves, classification uncertainty, hypotheses (intention) uncertainty, actual indecision, occlusions, other reasons for the uncertainty, or a combination thereof. For example, with respect to the sensor data themselves, the sensor data can be affected by weather conditions, accuracy of the sensors, and/or faults in the sensors; with respect to classification uncertainty, a world object may be classified as a car, a bike, a pedestrian, etc., when in fact it is some other class of object; with respect to intentions estimation, it may not be known whether a road user is turning left or going straight; with respect to actual indecision, a road user can actually change its mind unexpectedly; with respect to occlusions, the sensors of the AV may not be able to detect objects that are behind other objects” (para 0041)); for each worst-case scenario, generating, with the at least one processor, worst-case trajectories for the agents (“The AV 302 can include a world modeling module, which can track at least some detected external objects. The world modeling module can predict one or more potential hypotheses (i.e., trajectories, paths, or the like) for each tracked object of at least some of the tracked objects” (para 0085)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to predict one or more potential hypotheses and generate (considering an initial state, desired actions, and at least some tracked objects with predicted trajectories) a collision-avoiding, law-abiding, comfortable response (e.g., trajectory, path, etc.); see Ostafew at least at [0085].

Regarding claim 2, Seegmiller discloses the method of claim 1. However, Seegmiller does not explicitly teach further comprising: adding to the agents at least one hallucinated agent in an occluded part of a map based on the map information and the perception information.
Ostafew, in the same field of endeavor, teaches further comprising: adding to the agents at least one hallucinated agent in an occluded part of a map based on the map information and the perception information (“a scene 2400 that includes occluded objects. The scene 2400 illustrates that an AV 2402 is traveling on a road 2404 that curves. The road 2404 includes a hazard 2406 (or equivalently, a location that might include a hazard). The hazard 2406 can be identified using HD map data, such as the HD map data 510 of FIG. 10. The hazard 2406 can be, for example, a crosswalk, a stop sign, a traffic signal, or some type of hazard that would necessitate that the AV 2402 perform a maneuver to avoid colliding with, or with an object at, the hazard 2406. In the scene 2400, the hazard 2406 is shown as being a crosswalk” (para 0309)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order for a hazard to be identified using HD map data and the AV to perform a maneuver to avoid colliding with, or with an object; see Ostafew at least at [0309].

Regarding claim 3, Seegmiller discloses the method of claim 1. Additionally, Seegmiller discloses wherein the worst-case scenarios are associated with a location of the vehicle relative to the map (“Topological classes are distinguished by the discrete actions taken with respect to obstacles or restricted map areas. Specifically, all possible trajectories in a topologically distinct class takes the same action with respect to obstacles or restricted map areas. Obstacles may include, for example, static objects such as traffic cones and bollards, or other road users such as pedestrians, cyclists, and cars. Restricted map areas may include, for example, crosswalks and intersections” (para 0037)).

Regarding claim 4, Seegmiller discloses the method of claim 1.
However, Seegmiller does not explicitly teach wherein the worst-case scenarios are semantics related worst-case scenarios. Ostafew, in the same field of endeavor, teaches wherein the worst-case scenarios are semantics related worst-case scenarios (“with respect to classification uncertainty, a world object may be classified as a car, a bike, a pedestrian, etc., when in fact it is some other class of object; with respect to intentions estimation, it may not be known whether a road user is turning left or going straight; with respect to actual indecision, a road user can actually change its mind unexpectedly; with respect to occlusions, the sensors of the AV may not be able to detect objects that are behind other objects” (para 0041)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to provide trajectory planning that is based on false positives, noise in the sensor data, and other uncertainties; see Ostafew at least at [0041].

Regarding claim 5, Seegmiller discloses the method of claim 1. However, Seegmiller does not explicitly teach wherein the worst-case scenarios are assumed worst-case scenarios for specified agents. Ostafew, in the same field of endeavor, teaches wherein the worst-case scenarios are assumed worst-case scenarios for specified agents (“maintains lists of hypotheses for at least some of the dynamic objects (e.g., an object A might be going straight, turning right, or turning left), creates and maintains predicted trajectories for each hypothesis, and maintains likelihood estimates of each hypothesis (e.g., object A is going straight with probability 90% considering the object pose/velocity and the trajectory poses/velocities)” (para 0096) and “A hazard associated with a perceived object can be detected based on one or more hypotheses that are maintained for the object. A hypothesis can be based on HD map data, social or driving behaviors, or any other criteria” (para 0294)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to create and maintain predicted trajectories for each hypothesis and maintain likelihood estimates of each hypothesis; see Ostafew at least at [0096].

Regarding claim 7, Seegmiller discloses the method of claim 1. However, Seegmiller does not explicitly teach wherein the worst-case trajectory of each agent is based on a prediction of how far and where the agent will travel in a specified amount of time. Ostafew, in the same field of endeavor, teaches wherein the worst-case trajectory of each agent is based on a prediction of how far and where the agent will travel in a specified amount of time (“The size of the bounding box of uncertainty may be a function of one or more of range uncertainty, angle (i.e., pose, orientation, etc.) uncertainty, or velocity uncertainly results. For example, with respect to range uncertainly (i.e., uncertainty with respect to how far the object 1905 is from the AV 1902)...there may be uncertainly with respect to the intention of the object 1905 and/or speed of the object 1905. For example, it may not be clear whether the object 1905 will remain parked or suddenly move into the lane 1904” (para 0258)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to provide trajectory planning that is based on false positives, noise in the sensor data, and other uncertainties; see Ostafew at least at [0041].

Regarding claim 8, Seegmiller discloses the method of claim 1. However, Seegmiller does not explicitly teach wherein the worst-case trajectory of each agent is based on a velocity or acceleration profile of the agent.
Ostafew, in the same field of endeavor, teaches wherein the worst-case trajectory of each agent is based on a velocity or acceleration profile of the agent (“The world model module 402 fuses sensor information, tracks objects, maintains lists of hypotheses for at least some of the dynamic objects (e.g., an object A might be going straight, turning right, or turning left), creates and maintains predicted trajectories for each hypothesis, and maintains likelihood estimates of each hypothesis (e.g., object A is going straight with probability 90% considering the object pose/velocity and the trajectory poses/velocities)” (para 0096)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to create and maintain predicted trajectories for each hypothesis; see Ostafew at least at [0096].

Regarding claim 9, Seegmiller discloses the method of claim 1. However, Seegmiller does not explicitly teach wherein the worst-case trajectory of each agent is based on lane graph parameters. Ostafew, in the same field of endeavor, teaches wherein the worst-case trajectory of each agent is based on lane graph parameters (“As such, the driveline concatenation module 522 determines the geometry of the lanes in order to determine the driveline given the lane geometry (e.g., the lane width)” (para 0128)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to determine the driveline given the lane geometry; see Ostafew at least at [0128].

Regarding claim 11, Seegmiller discloses the method of claim 1.
Additionally, Seegmiller discloses wherein generating at least one control signal comprises: inputting the selected nominal homotopy and corresponding most constraining worst-case homotopy into a model-based predictive control (MPC); and generating the at least one control signal based on a solution output by the MPC (“the system may optimize a trajectory for each constraint set to determine a candidate trajectory for each topologically distinct class. This optimization may be performed using model-predictive control or another algorithm, to generate a dynamically feasible and comfortable trajectory that satisfies the constraint set” (para 0069)).

Regarding claim 12, Seegmiller discloses the method of claim 11. Additionally, Seegmiller discloses further comprising generating the at least one control signal based on an optimization of at least one cost function and the constraint sets (“Heuristic cost may be used to prioritize constraint sets inputs to trajectory optimization when computation time and resources are limited. For example, in FIG. 4H the constraint sets that pass object 403 on the left might be pruned or assigned lower priority because doing so would cause an undesirable violation of the left lane boundary 406” (para 0067)).

Regarding claim 13, Seegmiller discloses a system comprising: at least one processor; memory storing instructions that when executed by the at least one processor, cause the at least one processor to perform operations (Fig. 7, #705 and #725; “An ‘electronic device’ or a ‘computing device’ refers to a device that includes a processor and memory…The memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions” (para 0106)) comprising:

generating nominal homotopies for a nominal vehicle scenario, the nominal homotopies generated based on nominal trajectories of agents proximate to the vehicle (“performing topological planning to identify one or more topologically distinct classes of trajectories, wherein each of the one or more topologically distinct classes is associated with a plurality of trajectories that take the same combination of discrete actions with respect to objects in the local region” (claim 1), “The trajectories 431a-n in FIG. 4C are homotopic, meaning they can be continuously deformed into one another without encroaching on any obstacles” (para 0051), Fig. 4C, #431a-n, and Fig. 4D, #432a-n);

generating … scenarios for the nominal vehicle scenario based on map information and perception information (“The prediction subsystem 123 may predict the future locations, trajectories, and/or actions of the objects based at least in part on perception information (e.g., the state data for each object) received from the perception subsystem 122, the location information received from the location subsystem 121, the sensor data, and/or any other data that describes the past and/or current state of the objects, the autonomous vehicle 101, the surrounding environment, and/or their relationship(s)” (para 0033));

generating a most constraining worst-case homotopy corresponding to each nominal homotopy based on the worst-case trajectories for the agents; selecting a nominal homotopy from the nominal homotopies (“The motion planning subsystem 124 may generate the trajectory by performing topological planning to generate a set of constraints for each of a plurality of topologically distinct classes of trajectories, optimizing a single candidate trajectory for each class, and scoring the candidate trajectories to select an optimal trajectory. Topological classes are distinguished by the discrete actions taken with respect to obstacles or restricted map areas” (para 0037); “Topological planning ensures the system can generate a trajectory for every topologically distinct class…ensures just one trajectory is optimized and scored for each topologically distinct class” (para 0073));

generating at least one control signal to operate the vehicle based on the selected nominal homotopy and corresponding most constraining worst-case homotopy (“the motion planning subsystem 124 also plans a trajectory (“trajectory generation”) for the autonomous vehicle 101 to travel on a given route (e.g., a nominal route generated by the routing module 112(b)). The trajectory specifies the spatial path for the autonomous vehicle as well as a velocity profile. The controller converts the trajectory into control instructions for the vehicle control system, including but not limited to throttle/brake and steering wheel angle commands” (para 0036)).

However, Seegmiller does not explicitly teach … a set of worst-case scenarios; for each worst-case scenario, generating worst-case trajectories for the agents.

Ostafew, in the same field of endeavor, teaches … a set of worst-case scenarios (“Observation or prediction uncertainty can arise due to the sensor data themselves, classification uncertainty, hypotheses (intention) uncertainty, actual indecision, occlusions, other reasons for the uncertainty, or a combination thereof. For example, with respect to the sensor data themselves, the sensor data can be affected by weather conditions, accuracy of the sensors, and/or faults in the sensors; with respect to classification uncertainty, a world object may be classified as a car, a bike, a pedestrian, etc., when in fact it is some other class of object; with respect to intentions estimation, it may not be known whether a road user is turning left or going straight; with respect to actual indecision, a road user can actually change its mind unexpectedly; with respect to occlusions, the sensors of the AV may not be able to detect objects that are behind other objects” (para 0041)); for each worst-case scenario, generating worst-case trajectories for the agents (“The AV 302 can include a world modeling module, which can track at least some detected external objects. The world modeling module can predict one or more potential hypotheses (i.e., trajectories, paths, or the like) for each tracked object of at least some of the tracked objects” (para 0085)).
One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to predict one or more potential hypotheses and generate (considering an initial state, desired actions, and at least some tracked objects with predicted trajectories) a collision-avoiding, law-abiding, comfortable response (e.g., trajectory, path, etc.); see Ostafew at least at [0085].

Regarding claim 14, Seegmiller discloses the system of claim 13. However, Seegmiller does not explicitly teach further comprising: add to the agents at least one hallucinated agent in an occluded part of a map based on the map information and the perception information. Ostafew, in the same field of endeavor, teaches further comprising: add to the agents at least one hallucinated agent in an occluded part of a map based on the map information and the perception information (“a scene 2400 that includes occluded objects. The scene 2400 illustrates that an AV 2402 is traveling on a road 2404 that curves. The road 2404 includes a hazard 2406 (or equivalently, a location that might include a hazard). The hazard 2406 can be identified using HD map data, such as the HD map data 510 of FIG. 10. The hazard 2406 can be, for example, a crosswalk, a stop sign, a traffic signal, or some type of hazard that would necessitate that the AV 2402 perform a maneuver to avoid colliding with, or with an object at, the hazard 2406. In the scene 2400, the hazard 2406 is shown as being a crosswalk” (para 0309)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order for a hazard to be identified using HD map data and the AV to perform a maneuver to avoid colliding with, or with an object; see Ostafew at least at [0309].

Regarding claim 15, Seegmiller discloses the system of claim 13.
Additionally, Seegmiller discloses wherein the worst-case scenarios are associated with a location of the vehicle relative to the map (“Topological classes are distinguished by the discrete actions taken with respect to obstacles or restricted map areas. Specifically, all possible trajectories in a topologically distinct class takes the same action with respect to obstacles or restricted map areas. Obstacles may include, for example, static objects such as traffic cones and bollards, or other road users such as pedestrians, cyclists, and cars. Restricted map areas may include, for example, crosswalks and intersections” (para 0037)).

Regarding claim 16, Seegmiller discloses the system of claim 13. However, Seegmiller does not explicitly teach wherein the worst-case scenarios are semantics related worst-case scenarios. Ostafew, in the same field of endeavor, teaches wherein the worst-case scenarios are semantics related worst-case scenarios (“with respect to classification uncertainty, a world object may be classified as a car, a bike, a pedestrian, etc., when in fact it is some other class of object; with respect to intentions estimation, it may not be known whether a road user is turning left or going straight; with respect to actual indecision, a road user can actually change its mind unexpectedly; with respect to occlusions, the sensors of the AV may not be able to detect objects that are behind other objects” (para 0041)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to provide trajectory planning that is based on false positives, noise in the sensor data, and other uncertainties; see Ostafew at least at [0041].

Regarding claim 17, Seegmiller discloses the system of claim 13. However, Seegmiller does not explicitly teach wherein the worst-case scenarios are assumed worst-case scenarios for specified agents.
Ostafew, in the same field of endeavor, teaches wherein the worst-case scenarios are assumed worst-case scenarios for specified agents (“maintains lists of hypotheses for at least some of the dynamic objects (e.g., an object A might be going straight, turning right, or turning left), creates and maintains predicted trajectories for each hypothesis, and maintains likelihood estimates of each hypothesis (e.g., object A is going straight with probability 90% considering the object pose/velocity and the trajectory poses/velocities)” (para 0096) and “A hazard associated with a perceived object can be detected based on one or more hypotheses that are maintained for the object. A hypothesis can be based on HD map data, social or driving behaviors, or any other criteria” (para 0294)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to create and maintain predicted trajectories for each hypothesis and maintain likelihood estimates of each hypothesis; see Ostafew at least at [0096].

Regarding claim 19, Seegmiller discloses the system of claim 13. However, Seegmiller does not explicitly teach wherein the worst-case trajectory of each agent is based on a prediction of how far and where the agent will travel in a specified amount of time. Ostafew, in the same field of endeavor, teaches wherein the worst-case trajectory of each agent is based on a prediction of how far and where the agent will travel in a specified amount of time (“The size of the bounding box of uncertainty may be a function of one or more of range uncertainty, angle (i.e., pose, orientation, etc.) uncertainty, or velocity uncertainly results. For example, with respect to range uncertainly (i.e., uncertainty with respect to how far the object 1905 is from the AV 1902)” (para 0258)).
One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to provide trajectory planning that is based on false positives, noise in the sensor data, and other uncertainties; see Ostafew at least at [0041].

Regarding claim 20, Seegmiller discloses the system of claim 13. However, Seegmiller does not explicitly teach wherein the worst-case trajectory of each agent is based on a velocity or acceleration profile of the agent. Ostafew, in the same field of endeavor, teaches wherein the worst-case trajectory of each agent is based on a velocity or acceleration profile of the agent (“The world model module 402 fuses sensor information, tracks objects, maintains lists of hypotheses for at least some of the dynamic objects (e.g., an object A might be going straight, turning right, or turning left), creates and maintains predicted trajectories for each hypothesis, and maintains likelihood estimates of each hypothesis (e.g., object A is going straight with probability 90% considering the object pose/velocity and the trajectory poses/velocities)” (para 0096)). One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to create and maintain predicted trajectories for each hypothesis; see Ostafew at least at [0096].

Regarding claim 21, Seegmiller discloses the system of claim 13. However, Seegmiller does not explicitly teach wherein the worst-case trajectory of each agent is based on lane graph parameters. Ostafew, in the same field of endeavor, teaches wherein the worst-case trajectory of each agent is based on lane graph parameters (“As such, the driveline concatenation module 522 determines the geometry of the lanes in order to determine the driveline given the lane geometry (e.g., the lane width)” (para 0128)).
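The hypothesis-and-likelihood bookkeeping Ostafew's para 0096 describes (cited above for claims 17 and 20) can be sketched as a small data structure: each tracked object carries candidate intents with likelihoods, and a worst-case planner can pick the most hazardous hypothesis that is still plausible. The class, threshold, and cost values below are illustrative assumptions, not anything disclosed by the references.

```python
# Hypothetical sketch of per-object intent hypotheses with likelihood
# estimates (in the spirit of Ostafew para 0096). All names, thresholds,
# and costs are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    intent: str          # e.g. "straight", "turn_left", "turn_right"
    likelihood: float    # 0..1, updated as pose/velocity evidence arrives
    threat_cost: float   # planner's cost if this hypothesis is true

def worst_case_hypothesis(hypotheses, min_likelihood=0.05):
    """Pick the highest-cost hypothesis that is still plausible."""
    plausible = [h for h in hypotheses if h.likelihood >= min_likelihood]
    return max(plausible, key=lambda h: h.threat_cost)

hyps = [Hypothesis("straight", 0.90, 1.0),
        Hypothesis("turn_left", 0.08, 5.0),
        Hypothesis("turn_right", 0.02, 9.0)]  # below threshold, ignored
worst = worst_case_hypothesis(hyps)  # the plausible "turn_left" case
```

Planning against `worst` rather than the most likely hypothesis is one way a system could turn maintained hypotheses into an assumed worst-case trajectory for a specified agent.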
One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Seegmiller with the teachings of Ostafew in order to determine the driveline given the lane geometry; see Ostafew at least at [0128].

Regarding claim 23, Seegmiller discloses the system of claim 13. Additionally, Seegmiller discloses wherein generating at least one control signal comprises: inputting the selected nominal homotopy and corresponding most constraining worst-case homotopy into a model-based predictive control (MPC); and generating the at least one control signal based on a solution output by the MPC (“the system may optimize a trajectory for each constraint set to determine a candidate trajectory for each topologically distinct class. This optimization may be performed using model-predictive control or another algorithm, to generate a dynamically feasible and comfortable trajectory that satisfies the constraint set” (para 0069)).

Regarding claim 24, Seegmiller discloses the system of claim 23. Additionally, Seegmiller discloses further comprising generating the at least one control signal based on an optimization of at least one cost function and the constraint sets (“Heuristic cost may be used to prioritize constraint sets inputs to trajectory optimization when computation time and resources are limited. For example, in FIG. 4H the constraint sets that pass object 403 on the left might be pruned or assigned lower priority because doing so would cause an undesirable violation of the left lane boundary 406” (para 0067)).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM ALHARBI whose telephone number is (313) 446-6621. The examiner can normally be reached on M-F 11:00 AM – 7:30 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Flynn, can be reached on (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM M ALHARBI/
Primary Examiner, Art Unit 3663
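The constraint-set handling Seegmiller describes for claims 23 and 24 (paras 0067, 0069) amounts to ranking topologically distinct constraint sets by heuristic cost, handing a limited number to trajectory optimization, and keeping the cheapest feasible result. A minimal sketch follows; `solve_mpc` is a stand-in for the actual MPC solve, and every name, cost, and the budget value are illustrative assumptions rather than anything disclosed in the references.

```python
# Hypothetical sketch of heuristic-cost prioritization of constraint
# sets feeding trajectory optimization (in the spirit of Seegmiller
# paras 0067, 0069). Illustrative only.

def plan(constraint_sets, solve_mpc, budget=2):
    """constraint_sets: list of (name, heuristic_cost) pairs.
    solve_mpc(name) -> (feasible, trajectory_cost)."""
    best_name, best_cost = None, float("inf")
    # Heuristic cost prioritizes which sets get optimized when
    # computation time and resources are limited.
    ranked = sorted(constraint_sets, key=lambda s: s[1])[:budget]
    for name, _ in ranked:
        feasible, cost = solve_mpc(name)
        if feasible and cost < best_cost:
            best_name, best_cost = name, cost
    return best_name

# Stub solver: passing on the left violates the lane boundary, so that
# set is infeasible; passing on the right succeeds.
def stub_solver(name):
    return (name != "pass_left",
            {"pass_right": 3.0, "pass_left": 1.0, "stop": 7.0}[name])

chosen = plan([("pass_right", 1.0), ("pass_left", 2.0), ("stop", 5.0)],
              stub_solver)  # -> "pass_right"
```

Pruning or deprioritizing the lane-boundary-violating set before optimization mirrors the FIG. 4H example quoted for claim 24.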

Prosecution Timeline

Mar 10, 2023
Application Filed
Dec 28, 2024
Non-Final Rejection — §103, §DP
Apr 11, 2025
Response Filed
Aug 18, 2025
Final Rejection — §103, §DP
Dec 10, 2025
Request for Continued Examination
Dec 21, 2025
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583435
TECHNIQUES FOR MANAGING POWER DISTRIBUTION BETWEEN ELECTRIFIED VEHICLE LOADS AND HIGH VOLTAGE BATTERY SYSTEM DURING LOW STATE OF CHARGE CONDITIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12553731
ARRIVAL PREDICTIONS BASED ON DESTINATION SPECIFIC MODEL
2y 5m to grant Granted Feb 17, 2026
Patent 12548446
COLLISION WARNING SYSTEM AND METHOD FOR A VEHICLE
2y 5m to grant Granted Feb 10, 2026
Patent 12509218
FLIGHT CONTROL FOR AN UNMANNED AERIAL VEHICLE
2y 5m to grant Granted Dec 30, 2025
Patent 12504286
SIMULTANEOUS LOCATION AND MAPPING (SLAM) USING DUAL EVENT CAMERAS
2y 5m to grant Granted Dec 23, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
88%
Grant Probability
91%
With Interview (+2.8%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 630 resolved cases by this examiner. Grant probability derived from career allow rate.
