Prosecution Insights
Last updated: April 19, 2026
Application No. 16/586,604

BLOCKING OBJECT AVOIDANCE

Non-Final OA (§103, §112)
Filed: Sep 27, 2019
Examiner: SANTOS, AARRON EDUARDO
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Zoox Inc.
OA Round: 5 (Non-Final)
Grant Probability: 45% (Moderate)
OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 58%

Examiner Intelligence

Career Allow Rate: 45% (grants 45% of resolved cases; 59 granted / 131 resolved; -7.0% vs TC avg)
Interview Lift: +12.8% (moderate, ~+13% lift, among resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline); 63 applications currently pending
Total Applications: 194 (career history, across all art units)
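The headline figures above are internally consistent; here is a quick arithmetic cross-check (a minimal sketch; the dashboard's exact rounding conventions are an assumption):

```python
# Cross-check the examiner statistics quoted above.
granted = 59
resolved = 131

# Career allow rate: granted / resolved cases
allow_rate = 100 * granted / resolved
print(round(allow_rate, 1))  # 45.0 -> matches the 45% Career Allow Rate

# The reported interview lift (+12.8 percentage points) applied to the base
# rate recovers the 58% "With Interview" figure shown in the summary.
print(round(allow_rate + 12.8))  # 58
```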

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)
Deltas measured against Tech Center average estimates • Based on career data from 131 resolved cases
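Each delta above implies the Tech Center baseline it was measured against. A short derivation (variable names are mine) shows the implied baseline works out to the same 40.0% for all four statutes, consistent with a single TC-wide estimate being used as the comparison:

```python
# Per-statute rate and reported delta vs. the Tech Center average;
# the implied TC baseline is rate - delta.
stats = {
    "§101": (12.0, -28.0),
    "§103": (58.6, +18.6),
    "§102": (5.3, -34.7),
    "§112": (21.5, -18.5),
}
for statute, (rate, delta) in stats.items():
    print(statute, round(rate - delta, 1))  # each prints 40.0
```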

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114 was filed in this application after a decision by the Patent Trial and Appeal Board, but before the filing of a Notice of Appeal to the Court of Appeals for the Federal Circuit or the commencement of a civil action. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant’s submission filed on 09-29-2025 has been entered.

Response to Amendment

Claims 1, 3, 6, 8, 15-17, and 19 have been amended. There are no new claims. No claims have been canceled. The amendments submitted 11-07-2022 are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1, 6, and 15 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The added material which is not supported by the original disclosure is as follows: “non-zero threshold”. To the examiner’s best understanding, the applicant’s specification does not support the amended claim language in the above claims. Applicant is required to cancel the new matter in the reply to this Office Action. Claims 2-5, 7-14, and 16-20 are rejected based upon their dependency to a rejected claim.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 6, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 1, 6, and 15 are rejected as failing to point out or define the claimed invention. The indefinite language or relationship is: “determine a likelihood that the object, over a portion of the predicted object trajectory associated with the object moving through the environment in a same direction of travel as the vehicle in a same lane as the vehicle, limits the first vehicle trajectory for a non-zero threshold period of time; determine that the likelihood meets or exceeds a threshold likelihood; determine a confidence associated with an accuracy of the predicted object trajectory”. The language as stated does not distinctly define what is meant by this limitation or its essential quality, and does not clearly state the limitation of the claimed invention.
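For illustration only, when the disputed limitation is read as requiring any blocking duration greater than zero, it reduces to decision logic along the following lines. This is an editorial sketch, not part of the Office Action, and every identifier and threshold value in it is hypothetical rather than drawn from the claims:

```python
# Illustrative sketch of the disputed limitation under a reading where
# "non-zero threshold period of time" means any duration greater than zero.
# All identifiers and threshold values are hypothetical.

def should_navigate_around(
    blocking_duration_s: float,    # how long the object limits the first vehicle trajectory
    likelihood: float,             # likelihood of blocking over the predicted object trajectory
    trajectory_confidence: float,  # confidence in the accuracy of that prediction
    likelihood_threshold: float = 0.9,
    confidence_floor: float = 0.5,
) -> bool:
    """True if the vehicle should plan a second trajectory around the object."""
    return (
        blocking_duration_s > 0.0  # any duration greater than zero
        and likelihood >= likelihood_threshold
        and trajectory_confidence >= confidence_floor
    )

# Object predicted to block for 2 s with 95% likelihood, 80% prediction confidence:
print(should_navigate_around(2.0, 0.95, 0.8))  # True
```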
Hereinafter, the limitation quoted above will be interpreted as “any duration of time greater than zero” or “waiting before performing an avoidance maneuver”. Claims 2-5, 7-14, and 16-20 are rejected based upon their dependency to a rejected claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-11, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ostafew (US 20210031760 A1) in view of Maurer (US 20150210311 A1).

REGARDING CLAIM 1, as best understood, Ostafew discloses: a sensor (Ostafew: [0041] As trajectory planning depends on (e.g., is based on) sensor data. However, observation and/or prediction uncertainty may be associated with the sensor data and/or the processing of the sensor data. Observation or prediction uncertainty can arise due to the sensor data themselves, classification uncertainty, hypotheses (intention) uncertainty, actual indecision, occlusions, other reasons for the uncertainty, or a combination thereof. For example, with respect to the sensor data themselves, the sensor data can be affected by weather conditions, accuracy of the sensors, and/or faults in the sensors; with respect to classification uncertainty, a world object may be classified as a car, a bike, a pedestrian, etc., when in fact it is some other class of object; with respect to intentions estimation, it may not be known whether a road user is turning left or going straight; with respect to actual indecision, a road user can actually change its mind unexpectedly; with respect to occlusions, the sensors of the AV may not be able to detect objects that are behind other objects. As such, the planned trajectory may be based on false positives, noise in the sensor data, and other uncertainties); one or more processors (Ostafew: [0006] An aspect of the disclosed implementations is a system for contingency planning for an autonomous vehicle (AV). The system includes a memory and a processor.
The memory includes instructions executable by the processor to detect a hazard object that does not intrude into a path of the AV at a time of the detecting the hazard object; determine a hazard zone for the hazard object; determine a time of arrival of the AV at the hazard zone; determine a contingency trajectory for the AV; control the AV according to the contingency trajectory; and in response to the hazard object intruding into the path of the AV, control the AV to perform a maneuver to avoid the hazard object. The contingency trajectory includes at least one of a lateral contingency or a longitudinal contingency); and memory (Ostafew: [0006] An aspect of the disclosed implementations is a system for contingency planning for an autonomous vehicle (AV). The system includes a memory and a processor. The memory includes instructions executable by the processor to detect a hazard object that does not intrude into a path of the AV at a time of the detecting the hazard object; determine a hazard zone for the hazard object; determine a time of arrival of the AV at the hazard zone; determine a contingency trajectory for the AV; control the AV according to the contingency trajectory; and in response to the hazard object intruding into the path of the AV, control the AV to perform a maneuver to avoid the hazard object. The contingency trajectory includes at least one of a lateral contingency or a longitudinal contingency) storing processor-executable instructions (Ostafew: [0006] An aspect of the disclosed implementations is a system for contingency planning for an autonomous vehicle (AV). The system includes a memory and a processor. 
The memory includes instructions executable by the processor to detect a hazard object that does not intrude into a path of the AV at a time of the detecting the hazard object; determine a hazard zone for the hazard object; determine a time of arrival of the AV at the hazard zone; determine a contingency trajectory for the AV; control the AV according to the contingency trajectory; and in response to the hazard object intruding into the path of the AV, control the AV to perform a maneuver to avoid the hazard object. The contingency trajectory includes at least one of a lateral contingency or a longitudinal contingency) that, when executed by the one or more processors, configure the vehicle to: receive sensor data of an environment from the sensor (Ostafew: [0032] The vehicle may include one or more sensors. Traversing the vehicle transportation network may include the sensors generating or capturing sensor data, such as data corresponding to an operational environment of the vehicle, or a portion thereof); identify an object at an object location in the environment based on the sensor data (Ostafew: [0044] To illustrate the contingency planning, a non-limiting example is now provided. Assume that based on sensor inputs, a parked vehicle is identified ahead of the AV and along the trajectory of the AV. 
The AV (i.e., a module therein) identifies that the door of the parked vehicle could open), wherein the object is currently blocking a first vehicle trajectory associated with the vehicle (Ostafew: [0154] when the adjusted drivable area, accounting for static objects, contains a static blockage, the process 800 adjusts the discrete-time speed plan such that the AV comes to a stop a prescribed distance before the static blockage); determine a predicted object trajectory associated with the object (Ostafew: [0155] At operation 850, the process 800 identifies (e.g., predicts, calculates, generates, receives, or otherwise identifies) a respective path for each of the nearby dynamic objects. In an example, the predictions of the respective paths (i.e., trajectories) of at least some of the dynamic objects can be maintained in a world model, such as the world model module 402 of FIG. 4. As such, the process 800 can receive (e.g., request, read, or otherwise receive) the respective paths from the world model. [0156] For example, the process 800 predicts (e.g., receives a prediction, or otherwise predicts) that the dynamic oncoming vehicle 918 is to follow a path 922 to get around the static vehicle 920, and that the dynamic vehicle 916 is to follow a path 924 after passing the static vehicle 914. In an implementation, the operation 820 uses an instance (i.e., an execution) of the process 800 to identify the path of a dynamic object. In an example, when predicting a path for a dynamic object, the process 800 excludes the AV from the list of nearby objects of the dynamic object. [0157] In an example, predicting a path for a dynamic object can be based on respective speeds of other dynamic objects and an estimation of the right of way amongst the dynamic objects. 
In an example of the estimation of the right of way, if a second vehicle is following (i.e., is behind) a first vehicle in a lane, then the first vehicle is simulated (i.e., a path is predicted for the first vehicle) in the presence of the second vehicle; but the second vehicle is simulated without the presence of the first vehicle), wherein the predicted object trajectory is based at least in part on a classification associated with the object (Ostafew: [0216] As described above, a second instance of the trajectory planner may be tracking (e.g., predicting the trajectory of) the vehicle 1404. As such, a module 532 of the second trajectory planner can determine (e.g., predict) the locations of the vehicle 1404. For example, at time t (e.g., in one second), the vehicle 1404 is determined to be at a location 1412; at time t+1 (e.g., in two seconds), the vehicle 1404 is determined to be at a location 1414; and at time t+2 (e.g., in three seconds), the vehicle 1404 is determined to be at a location 1416. In an example, the same time window and frequency of predictions can be the same for all instantiated trajectory planners of the AV. However, that need not be the case. The time window and frequency can depend on the type (e.g., bicycle, pedestrian, sports car, sedan, large truck, etc.) 
of the dynamic object; [0242] The trajectory planner (e.g., the coarse-driveline concatenation layer of the trajectory planner) determines an adjusted drivable area for the AV based on at least one or more of hard boundaries (which are set based on static and/or dynamic objects), soft boundaries (e.g., lane markings), filtered lateral limits, multi-hypothesis tracking, extendable drivable area checking, and dynamic object classification (e.g., classifying an object as an oncoming vehicle, a lead vehicle, or a lateral constraint)), the classification being based at least in part on object data (Ostafew: [0095] the state for an object can include zero or more of a velocity, a pose, a geometry (such as width, height, and depth), a classification (e.g., bicycle, large truck, pedestrian, road sign, etc.), and a location. As such, the state of an object includes discrete state information (e.g., classification) and continuous state information (e.g., pose and velocity)) comprising at least one of an object size, the object location, or an object action (Ostafew: [0095] the state for an object can include zero or more of a velocity, a pose, a geometry (such as width, height, and depth), a classification (e.g., bicycle, large truck, pedestrian, road sign, etc.), and a location. As such, the state of an object includes discrete state information (e.g., classification) and continuous state information (e.g., pose and velocity)); determine a likelihood (Ostafew: [0043] To mitigate issues associated with perceived hazards, perceived world objects, and/or anticipate intentions of the perceived world objects, implementations accordingly to this disclosure can use contingency planning to robustly handle uncertainty. Contingency planning can guarantee the safety of an AV while simultaneously maintaining the comfort of an occupant of the AV. 
Contingency planning allows a planning system (such as a trajectory planning system) of an AV to partially defer action with respect to uncertain observations or predictions. In an example, how much of an action can be deferred can be based on the likelihood of said observations or predictions and/or the emergency maneuver capabilities of the AV) that the object, over a portion of the predicted object trajectory associated with the object moving through the environment in a same direction of travel (Ostafew: [0060] obtain information that represents, for example, a current heading of the vehicle 100; [0090] The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory) as the vehicle in a same lane as the vehicle (Ostafew: [0035] An external object can be a dynamic (i.e., moving) object, such as a pedestrian, a remote vehicle, a motorcycle, a bicycle, etc. The dynamic object can be oncoming (toward the vehicle) or can be moving in the same direction as the vehicle. The dynamic object can be moving longitudinally or laterally with respect to the vehicle. A static object can become a dynamic object, and vice versa; [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle 342. As such, the AV 302 determines (i.e., plans, calculates, selects, generates, or otherwise determines) a trajectory 348 such that the AV 302 stops after a certain distance to allow the bicycle 344 to pass the parked vehicle 342. In another example, the AV 302 can determine more than one possible trajectory. 
For example, the AV 302 can determine a first trajectory as described above, a second trajectory whereby the AV 302 accelerates to pass the bicycle 344 before the bicycle 344 passes the parked car, and a third trajectory whereby the AV 302 passes around the bicycle 344 as the bicycle 344 is passing the parked vehicle 342. The trajectory planner then selects one of the determined possible trajectories; [FIG. 3(344)]), limits the first vehicle trajectory for a non-zero threshold period of time (Ostafew: [ABS] determining a time of arrival of the AV at the hazard zone; determining a contingency trajectory for the AV; controlling the AV according to the contingency trajectory; and, in response to the hazard object intruding into the path of the AV, controlling the AV to perform a maneuver to avoid the hazard object. The contingency trajectory includes at least one of a lateral contingency or a longitudinal contingency. The contingency trajectory is determined using the time of arrival of the AV at the hazard zone); determine that the likelihood meets or exceeds a threshold likelihood (Ostafew: [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle; [0096] The world model module 402 fuses sensor information, tracks objects, maintains lists of hypotheses for at least some of the dynamic objects (e.g., an object A might be going straight, turning right, or turning left), creates and maintains predicted trajectories for each hypothesis, and maintains likelihood estimates of each hypothesis (e.g., object A is going straight with probability 90% considering the object pose/velocity and the trajectory poses/velocities). 
In an example, the world model module 402 uses an instance of the trajectory planner to generate the predicted trajectories for each object hypothesis for at least some of the dynamic objects. For example, an instance of the trajectory planner can be used to generate predicted trajectories for vehicles, bicycles, and pedestrians. In another example, an instance of the trajectory planner can be used to generate predicted trajectories for vehicles and bicycles, and a different method can be used to generate predicted trajectories for pedestrians; [0100], [0318], [0323]); determine a confidence associated with an accuracy of the predicted object trajectory (Ostafew: [0096] The world model module 402 fuses sensor information, tracks objects, maintains lists of hypotheses for at least some of the dynamic objects (e.g., an object A might be going straight, turning right, or turning left), creates and maintains predicted trajectories for each hypothesis, and maintains likelihood estimates of each hypothesis (e.g., object A is going straight with probability 90% considering the object pose/velocity and the trajectory poses/velocities). In an example, the world model module 402 uses an instance of the trajectory planner to generate the predicted trajectories for each object hypothesis for at least some of the dynamic objects. For example, an instance of the trajectory planner can be used to generate predicted trajectories for vehicles, bicycles, and pedestrians. 
In another example, an instance of the trajectory planner can be used to generate predicted trajectories for vehicles and bicycles, and a different method can be used to generate predicted trajectories for pedestrians; [0100], [0318], [0323]); determine, based at least in part on the confidence, a modified drivable area associated with navigating the environment (Ostafew: [0047] The advantages of contingency planning can include 1) guaranteeing the safety of the AV regardless of observation and prediction uncertainty, 2) maintaining comfort of an occupant and producing socially acceptable behavior of the AV, 3) providing measured responses based on observation and/or prediction likelihood, and 4) reducing sensor requirements. For example, with respect to producing socially acceptable behavior, the AV need not slow down drastically for every possible interaction. For example, with respect to reducing sensor requirements, because an AV can adjust its responses based on measurement uncertainty, the cost of sensors in the AV can be reduced. For example, a typical AV may include more than 15 cameras, seven LiDAR units, six radar units, 2 GPS receivers, more than four graphics processing units (GPUs), and more than nine processing units. Contrastingly, an AV, which employs contingency planning as described herein so that the planning modules of the AV are more tolerant to uncertainty; [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle 342. 
As such, the AV 302 determines (i.e., plans, calculates, selects, generates, or otherwise determines) a trajectory 348 such that the AV 302 stops after a certain distance to allow the bicycle 344 to pass the parked vehicle 342. In another example, the AV 302 can determine more than one possible trajectory. For example, the AV 302 can determine a first trajectory as described above, a second trajectory whereby the AV 302 accelerates to pass the bicycle 344 before the bicycle 344 passes the parked car, and a third trajectory whereby the AV 302 passes around the bicycle 344 as the bicycle 344 is passing the parked vehicle 342. The trajectory planner then selects one of the determined possible trajectories; [FIG. 3(348, 366)]; [0262-0264]); determine a second vehicle trajectory based on the predicted object trajectory (Ostafew: [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle 342. As such, the AV 302 determines (i.e., plans, calculates, selects, generates, or otherwise determines) a trajectory 348 such that the AV 302 stops after a certain distance to allow the bicycle 344 to pass the parked vehicle 342. In another example, the AV 302 can determine more than one possible trajectory. For example, the AV 302 can determine a first trajectory as described above, a second trajectory whereby the AV 302 accelerates to pass the bicycle 344 before the bicycle 344 passes the parked car, and a third trajectory whereby the AV 302 passes around the bicycle 344 as the bicycle 344 is passing the parked vehicle 342. The trajectory planner then selects one of the determined possible trajectories; [FIG. 
3(348, 366)]; [0262-0264]), the modified drivable area (Ostafew: [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle 342. As such, the AV 302 determines (i.e., plans, calculates, selects, generates, or otherwise determines) a trajectory 348 such that the AV 302 stops after a certain distance to allow the bicycle 344 to pass the parked vehicle 342. In another example, the AV 302 can determine more than one possible trajectory. For example, the AV 302 can determine a first trajectory as described above, a second trajectory whereby the AV 302 accelerates to pass the bicycle 344 before the bicycle 344 passes the parked car, and a third trajectory whereby the AV 302 passes around the bicycle 344 as the bicycle 344 is passing the parked vehicle 342. The trajectory planner then selects one of the determined possible trajectories; [FIG. 3(348, 366)]; [0262-0264]), and the confidence (Ostafew: [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle 342. As such, the AV 302 determines (i.e., plans, calculates, selects, generates, or otherwise determines) a trajectory 348 such that the AV 302 stops after a certain distance to allow the bicycle 344 to pass the parked vehicle 342. In another example, the AV 302 can determine more than one possible trajectory. 
For example, the AV 302 can determine a first trajectory as described above, a second trajectory whereby the AV 302 accelerates to pass the bicycle 344 before the bicycle 344 passes the parked car, and a third trajectory whereby the AV 302 passes around the bicycle 344 as the bicycle 344 is passing the parked vehicle 342. The trajectory planner then selects one of the determined possible trajectories; [FIG. 3(348, 366)]; [0262-0264]), wherein the second vehicle trajectory is associated with the vehicle navigating around the object (Ostafew: [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle 342. As such, the AV 302 determines (i.e., plans, calculates, selects, generates, or otherwise determines) a trajectory 348 such that the AV 302 stops after a certain distance to allow the bicycle 344 to pass the parked vehicle 342. In another example, the AV 302 can determine more than one possible trajectory. For example, the AV 302 can determine a first trajectory as described above, a second trajectory whereby the AV 302 accelerates to pass the bicycle 344 before the bicycle 344 passes the parked car, and a third trajectory whereby the AV 302 passes around the bicycle 344 as the bicycle 344 is passing the parked vehicle 342. The trajectory planner then selects one of the determined possible trajectories; [FIG. 3(348, 366)]; [0262-0264]); and control the vehicle according to the second vehicle trajectory (Ostafew: [0090] In the situation 340, the tracking component of the AV 302 can detect a parked vehicle 342 (i.e., a static object) and a bicycle 344 that is moving (i.e., a dynamic object that is a longitudinal constraint). 
The prediction component may determine, with a certain degree of certainty, that the bicycle 344 will follow a trajectory 346 to get around the parked vehicle 342. As such, the AV 302 determines (i.e., plans, calculates, selects, generates, or otherwise determines) a trajectory 348 such that the AV 302 stops after a certain distance to allow the bicycle 344 to pass the parked vehicle 342. In another example, the AV 302 can determine more than one possible trajectory. For example, the AV 302 can determine a first trajectory as described above, a second trajectory whereby the AV 302 accelerates to pass the bicycle 344 before the bicycle 344 passes the parked car, and a third trajectory whereby the AV 302 passes around the bicycle 344 as the bicycle 344 is passing the parked vehicle 342. The trajectory planner then selects one of the determined possible trajectories; [FIG. 3(348, 366)]; [0262-0264]). To the examiner’s best understanding, Ostafew discloses “limits the first vehicle trajectory for a non-zero threshold period of time” [ABS]. However, should it be found Ostafew fails to disclose, limits the first vehicle trajectory for a non-zero threshold period of time, in the same field of endeavor, Maurer discloses, limits the first vehicle trajectory for a non-zero threshold period of time (Maurer: [0045] In addition, FIG. 2 schematically shows sojourn probability distribution P.sub.FG(t.sub.1) of pedestrian FG at time t.sub.1, as well as sojourn probability distribution P.sub.FG(t.sub.2) at time t.sub.2, as calculated in method step S04. Since, given a present speed of vehicle F that is part of the second current state of motion of vehicle F, as the forward travel of vehicle S continues the unmodified trajectory T' will intersect at time t.sub.2, in a large region, with the calculated sojourn probability distribution P.sub.FG(t.sub.2) of pedestrian FG at time t.sub.2, trajectory T is calculated as shown in FIG. 2. 
Here, an intersection between trajectory T and calculated sojourn probability distribution P.sub.FG(t.sub.2) of pedestrian FG at time t.sub.2 becomes smaller. In the calculation of trajectory T, a maximum permitted collision probability can be specified, such that a collision probability for vehicle F on trajectory T must be less than or equal to this maximum collision probability), for the benefit of minimizing collision probability. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ostafew to include a plurality of threshold times before performing an avoidance maneuver taught by Maurer. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to minimize collision probability. REGARDING CLAIM 2, as best understood, Ostafew, as modified, remain as applied above to claim 1, and further, Maurer also discloses, determine a first distance between a lane marker depicting an edge of a lane and the location associated with the object (Maurer: [FIG. 2] determine a first distance between a lane marker depicting an edge of a lane (T') and the location associated with the object (X_2) can be observed.); determine a second distance comprising the first distance minus a width of the vehicle (Maurer: [FIG. 2] determine a second distance comprising the first distance minus a width of the vehicle can be observed, in that, T-T' is the vehicle width. Interpreted as capable of the intended use of "determine a second distance comprising the first distance minus a width of the vehicle".); and determine a vehicle speed associated with the second distance (Maurer: [0034]; [0045]); wherein the second vehicle trajectory is associated with navigating around the object at the second distance and the vehicle speed (Maurer: [0034]; [0045]). 
Maurer does not explicitly recite the terminology "and determine a vehicle speed associated with the distance". However, Maurer discloses a second state of motion, orientation, speed, and acceleration of a vehicle, and time to collision with pedestrian. It is the examiner's assertion that time-to-collision teaches "speed associated with distance". Thus, teaching "determine a vehicle speed associated with the second distance". Additionally, duplication (or repeating) of essential parts/steps is within the ordinary skill in the art and does not set apart the claimed subject matter from the prior art. REGARDING CLAIM 3, as best understood, Ostafew, as modified, remain as applied above to claim 1, and further, Maurer also discloses, determine a classification associated with the object (Maurer: [0030]; see FIG. 5; [0037]); and determine a threshold distance between the vehicle (Maurer: [FIG. 4] determine a threshold distance between the vehicle (P_GC,ij) can be observed.) and the location associated with the object based on the classification (Maurer: [FIG. 4] In figure 4, "FG" and "P_GV,ij" are interpreted as location associated with the object based at least in part on the classification.), wherein the second vehicle trajectory is based on the threshold distance (Maurer: [FIG. 4] the second vehicle trajectory is based at least in part on the threshold distance can be observed.). REGARDING CLAIM 5, as best understood, Ostafew, as modified, remain as applied above to claim 1, and further, Maurer also discloses, the predicted object trajectory is based on at least one of: a machine learned algorithm; a top-down representation of the environment; a discretized probability distribution; a temporal logic formula; or a tree search method (Maurer: [ABS] A method is provided for operating a driver assistance system, and a driver assistance system.
The method includes the steps: determination of a current position of a pedestrian in an environment surrounding the vehicle; determination of a first current state of motion of the pedestrian; determination of a second current state of motion of the vehicle; calculation of a sojourn probability distribution of the pedestrian, the sojourn probability distribution being a function of time and of space and being based on a pedestrian motion model in connection with the determined current position of the pedestrian and the determined current state of motion of the pedestrian; calculation of a trajectory, based on the calculated sojourn probability distribution of the pedestrian and on the second current state of motion of the vehicle, having a minimum collision probability for the vehicle and the pedestrian; and operation of the driver assistance system of the vehicle based on the calculated trajectory.). The above description is interpreted as top-down or stepwise, which are parallel teachings. 
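The discretized sojourn probability distribution that the examiner maps to this limitation can be illustrated with a short sketch. Everything below (the Gaussian motion model, the parameter values, and the function names) is an assumption for illustration, not Maurer's actual computation:

```python
import math

def sojourn_mass(x_lo, x_hi, x0, speed, t, sigma0=0.5, growth=0.8, cell=0.05):
    """Probability mass of the pedestrian's sojourn distribution inside
    [x_lo, x_hi] at time t. The distribution is a Gaussian centered on the
    dead-reckoned position whose spread grows with the prediction horizon
    (all parameters here are illustrative assumptions)."""
    mean = x0 + speed * t
    sigma = sigma0 + growth * t
    mass, x = 0.0, x_lo
    while x < x_hi:  # left-Riemann numerical integration over the footprint
        mass += math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi)) * cell
        x += cell
    return min(mass, 1.0)

def collision_probability(trajectory, x0, speed, half_width=1.0):
    """Worst-case (over the horizon) overlap between the vehicle footprint
    and the pedestrian's sojourn distribution along a candidate trajectory,
    given as (time, lateral position) samples."""
    return max(sojourn_mass(x - half_width, x + half_width, x0, speed, t)
               for t, x in trajectory)

# A candidate is admissible only if its collision probability is at or
# below a specified maximum (cf. Maurer [0045]).
P_MAX = 0.3
unmodified = [(1.0, 2.0), (2.0, 2.0)]   # T': holds the current lateral position
evasive    = [(1.0, 3.5), (2.0, 4.5)]   # T: swerves away from the pedestrian
p_unmod = collision_probability(unmodified, x0=0.0, speed=1.0)
p_evade = collision_probability(evasive,   x0=0.0, speed=1.0)
```

Under these assumed numbers the evasive trajectory T retains a smaller intersection with the sojourn distribution than the unmodified T', matching the FIG. 2 discussion.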
REGARDING CLAIM 6, Ostafew discloses, identifying an object at a location in an environment based on sensor data (Ostafew: [0044]); determining that the location is associated with a vehicle trajectory based on a determination that a vehicle traveling on the vehicle trajectory would pass through the location (Ostafew: [0154]); determining, based on the location being associated with the vehicle trajectory, that the object is currently blocking the vehicle trajectory (Ostafew: [0154]); determine a likelihood (Ostafew: [0043]) that the object, over a portion of an object trajectory associated with the object moving through the environment in a same direction of travel as the vehicle in a same lane as the vehicle (Ostafew: [0060]; [0090]), limits the vehicle trajectory for a non-zero threshold period of time (Ostafew: [ABS]); determine that the likelihood meets or exceeds a threshold likelihood (Ostafew: [0090]; [0096]; [0100], [0318], [0323]); and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time (Ostafew: [ABS]). To the examiner’s best understanding, Ostafew discloses “limits the first vehicle trajectory for a non-zero threshold period of time, and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time” [ABS].
However, should it be found Ostafew fails to disclose, limits the first vehicle trajectory for a non-zero threshold period of time, and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time, in the same field of endeavor, Maurer discloses, limits the first vehicle trajectory for a non-zero threshold period of time, and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time (Maurer: [0045]), for the benefit of minimizing collision probability. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ostafew to include a plurality of threshold times before performing an avoidance maneuver taught by Maurer. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to minimize collision probability. REGARDING CLAIM 7, as best understood, Ostafew, as modified, remain as applied above to claim 6, and further, Maurer teaches, the vehicle trajectory is a first vehicle trajectory and wherein determining that the object is currently blocking the vehicle trajectory is based on determining that the object impedes a progress of the vehicle along the first vehicle trajectory (Maurer: [0045] In addition, FIG. 2 schematically shows sojourn probability … T must be less than or equal to this maximum collision probability; [FIG.
2, 4] the vehicle trajectory is a first vehicle trajectory and wherein determining that the object is at least partially blocking the vehicle trajectory is based at least in part on determining that the object impedes the progress of the vehicle along the first vehicle trajectory can be observed.), the method further comprising: determining a threshold distance to navigate around the object (Maurer: [FIG. 4] determining a threshold distance to navigate around the object can be observed (P_GV, ij).); and determining a second vehicle trajectory based on the threshold distance (Maurer: [0045] In addition, FIG. 2 schematically shows sojourn probability distribution P.sub.FG(t.sub.1) of … T must be less than or equal to this maximum collision probability.), wherein controlling the vehicle comprises causing the vehicle to traverse the environment based on the second vehicle trajectory (Maurer: [ABS]). REGARDING CLAIM 8, as best understood, Ostafew, as modified, remain as applied above to claim 6, and further, Maurer also teaches, determining the likelihood that the object will continue to block the vehicle trajectory comprises: determining an object trajectory (Maurer: [0025] FIG. 3 shows a graph, shown as an example, having a plurality of sojourn probability distributions at a particular time t', as a function of the x coordinate.); and determining, based on the object trajectory, that the object will substantially remain at the location that is associated with the vehicle trajectory and will continue to impede the progress of the vehicle traveling on the vehicle trajectory (Maurer: [ABS]; [0045] In addition, FIG. 2 schematically shows sojourn probability distribution … a maximum permitted collision probability can be specified, such that a collision probability for vehicle F on trajectory T must be less than or equal to this maximum collision probability.).
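The claim 8 mapping, in which an object trajectory is used to conclude that the object will remain in, and continue to block, the vehicle's path for a non-zero threshold period of time, can be sketched as follows. The lateral-corridor representation, sample spacing, and helper names are hypothetical:

```python
def blocking_duration(predicted_lateral, corridor, dt):
    """Longest contiguous time the predicted object positions stay inside
    the lateral corridor swept by the vehicle trajectory (hypothetical
    helper; positions are evenly spaced dt seconds apart)."""
    lo, hi = corridor
    longest = run = 0.0
    for x in predicted_lateral:
        run = run + dt if lo <= x <= hi else 0.0
        longest = max(longest, run)
    return longest

def continues_to_block(predicted_lateral, corridor, dt, threshold_s):
    """True when the object is predicted to limit the trajectory for at
    least the non-zero threshold period of time."""
    return blocking_duration(predicted_lateral, corridor, dt) >= threshold_s

# A stalled object stays in the corridor; a crossing pedestrian clears it.
stalled  = [1.0] * 10               # remains at x = 1.0 for 5 s (dt = 0.5 s)
crossing = [1.0, 1.0, 2.5, 4.0]    # leaves the corridor after 1 s
```

With a 2-second threshold, the stalled object trips the "continues to block" determination and the crossing one does not, which is the distinction the examiner draws between claims 8 and 10.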
REGARDING CLAIM 9, as best understood, Ostafew, as modified, remain as applied above to claim 6, and further, controlling the vehicle comprises causing the vehicle to circumnavigate the object according to a second vehicle trajectory (Maurer: [FIG. 2, 4] determine a second vehicle trajectory based at least in part on the predicted object trajectory, the second vehicle trajectory is associated with the vehicle navigating around the object, control the vehicle according to the second vehicle trajectory can be observed; [0035-0036]; see FIG. 5.), the method further comprising: identifying a second object based on the sensor data (Maurer: [0037] In a method step S06, a further current position … can be fashioned in driver assistance system 10; see FIG. 5.); determining a second object trajectory associated with the second object; determining an intersection between the second object trajectory and the second vehicle trajectory; and controlling the vehicle based on the intersection (Maurer: [0037-0041]). REGARDING CLAIM 10, as best understood, Ostafew, as modified, remain as applied above to claim 6, and further, the location of the object is a first location of the object, the method further comprising: determining a second location of the object (Maurer: [FIG. 2] the location of the object is a first location of the object, the method further comprising: determining a second location of the object can be observed ((X_1, T_1); (X_2, T_2)).); determining that the second location of the object does not impede a progress of the vehicle traveling on the vehicle trajectory (Maurer: [FIG. 2] T, T'); determining, based on the second location of the object not impeding the progress of the vehicle, that the object is not blocking the vehicle trajectory (Maurer: [FIG. 2] T, T'); and controlling the vehicle according to the vehicle trajectory (Maurer: [FIG.
2, 4] determine a second vehicle trajectory based at least in part on the predicted object trajectory, the second vehicle trajectory is associated with the vehicle navigating around the object, control the vehicle according to the second vehicle trajectory can be observed; [0035-0036]; see FIG. 5.). REGARDING CLAIM 11, as best understood, Ostafew, as modified, remain as applied above to claim 6, and further, Maurer also teaches, the likelihood that the object will continue to block the vehicle trajectory is based on at least one of: a classification of the object (Maurer: [FIG. 2] determine that the predicted object trajectory indicates a continued blocking of the first vehicle trajectory can be observed at P(t_2) at X_2; [0045] In addition, FIG. 2 schematically shows sojourn probability distribution … T must be less than or equal to this maximum collision probability.); a position of the object (Maurer: [0032] The first current state of motion of pedestrian FG can include information concerning orientation, speed, acceleration, standing, walking, and/or running, relating to pedestrian FG.); the location of the object in the environment (Maurer: [0032] The first current state of motion of pedestrian FG can include information concerning orientation, speed, acceleration, standing, walking, and/or running, relating to pedestrian FG.); a size of the object (Maurer: [0040] For example, a second feature … driver assistance system 10 can have a second feature determination device.); a level of stability associated with the object (Maurer: [0032] The first current state of motion of pedestrian FG can include information concerning orientation, speed, acceleration, standing, walking, and/or running, relating to pedestrian FG.); a velocity of the object (Maurer: [0032] The first current state of motion of pedestrian FG can include information concerning orientation, speed, acceleration, standing, walking, and/or running, relating to pedestrian FG.); or a change in the 
velocity of the object (Maurer: [0032] The first current state of motion of pedestrian FG can include information concerning orientation, speed, acceleration, standing, walking, and/or running, relating to pedestrian FG.). REGARDING CLAIM 15, Ostafew discloses, identifying an object at a location in an environment based on sensor data (Ostafew: [0044]); determining that the location is associated with a vehicle trajectory based on a determination that a vehicle traveling on the vehicle trajectory would pass through the location (Ostafew: [0154]); determining, based on the location being associated with the vehicle trajectory, that the object is currently blocking the vehicle trajectory (Ostafew: [0154]); determine a likelihood (Ostafew: [0043]) that the object, over a portion of an object trajectory associated with the object moving through the environment in a same direction of travel as the vehicle in a same lane as the vehicle (Ostafew: [0060]; [0090]), limits the vehicle trajectory for a non-zero threshold period of time (Ostafew: [ABS]); determine that the likelihood meets or exceeds a threshold likelihood (Ostafew: [0090]; [0096]; [0100], [0318], [0323]); and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time (Ostafew: [ABS]). To the examiner’s best understanding, Ostafew discloses “limits the first vehicle trajectory for a non-zero threshold period of time, and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time” [ABS].
However, should it be found Ostafew fails to disclose, limits the first vehicle trajectory for a non-zero threshold period of time, and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time, in the same field of endeavor, Maurer discloses, limits the first vehicle trajectory for a non-zero threshold period of time, and controlling the vehicle based on the likelihood that the object will limit the vehicle trajectory for at least the non-zero threshold period of time (Maurer: [0045]), for the benefit of minimizing collision probability. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ostafew to include a plurality of threshold times before performing an avoidance maneuver taught by Maurer. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to minimize collision probability. REGARDING CLAIM 16, as best understood, Ostafew, as modified, remain as applied above to claim 15, and further, Maurer teaches, determining a first distance between a lane marker depicting an edge of a lane and the location of the object (Maurer: [FIG. 2]); determining a second distance comprising the first distance minus a width of the vehicle (Maurer: [FIG. 2]); and determining a second vehicle trajectory based at least in part on the second distance (Maurer: [0034]; [0045]), wherein controlling the vehicle comprises causing the vehicle to travel according to the second vehicle trajectory (Maurer: [0034]; [0045]). 
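The claim 16 arithmetic is simple: the first distance is the lateral gap between the lane-edge marker and the object, and the second distance subtracts the vehicle width from that gap. A minimal sketch, with a hypothetical clearance-to-speed mapping of the kind the examiner reads onto the related claim 17 limitation (the function names and constants are assumptions):

```python
def passing_distances(lane_edge_x, object_x, vehicle_width):
    """First distance: lateral gap between the lane edge marker and the
    object; second distance: that gap minus the vehicle width, i.e. the
    clearance left when the vehicle threads the gap."""
    first = abs(lane_edge_x - object_x)
    return first, first - vehicle_width

def speed_for_clearance(second_distance, v_max=15.0, gain=5.0):
    """Hypothetical clearance-to-speed mapping: tighter clearance means a
    slower pass-by; no positive clearance means stop."""
    return 0.0 if second_distance <= 0 else min(v_max, gain * second_distance)
```

For example, a 3.5 m gap and a 2.0 m wide vehicle leave 1.5 m of clearance, and a non-positive clearance forces a stop rather than a pass.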
REGARDING CLAIM 17, as best understood, Ostafew, as modified, remain as applied above to claim 16, and further, the instructions further cause the processors to perform operations comprising: determining a vehicle speed associated with the second distance (Maurer: [0034]; [0045]), wherein the second vehicle trajectory comprises the vehicle speed (Maurer: [0034]). Maurer does not explicitly recite the terminology "and determine a vehicle speed associated with the distance". However, Maurer does teach orientation, speed, and acceleration of a vehicle, and time to collision with pedestrian. It is the examiner's assertion that time-to-collision teaches "speed associated with distance". Thus, teaching "determine a vehicle speed associated with the distance". REGARDING CLAIM 20, as best understood, Ostafew, as modified, remain as applied above to claim 15, and further, Maurer also teaches, the likelihood that the object will continue to block the vehicle trajectory is based on at least one of: a classification of the object (Maurer: [FIG. 2]; [0045]); a position of the object (Maurer: [0032]); the location of the object in the environment (Maurer: [0032]); a size of the object (Maurer: [0040]); a level of stability associated with the object (Maurer: [0032]); a velocity of the object (Maurer: [0032]); or a change in the velocity of the object (Maurer: [0032]). Claims 4, 12-13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ostafew (US 20210031760 A1) in view of Maurer (US 20150210311 A1) as applied to claim 15 above, and further in view of Paris (US 20180326982 A1).
REGARDING CLAIM 4, as best understood, Ostafew, as modified, remain as applied above to claim 1, and further, Maurer discloses, determine that a motion of the object is in accordance with the predicted object trajectory (Maurer: [ABS]); wherein determining the second vehicle trajectory is based on determining that the motion of the object is in accordance with the predicted object trajectory (Maurer: [ABS]). Ostafew, as modified, does not explicitly disclose, emit at least one of an audio signal or a visual signal in a direction associated with the object. However, in the same field of endeavor, Paris discloses, “In one variation, the autonomous vehicle can also selectively execute Block S150 based on whether the autonomous vehicle has determined that the pedestrian has visually observed the autonomous vehicle. For example, the autonomous vehicle can implement methods and techniques described above to estimate the gaze of the pedestrian in Block S120. If the autonomous vehicle determines that the pedestrian's gaze has not yet met the autonomous vehicle and the pedestrian is within a threshold distance of the autonomous vehicle's planned route, the pedestrian's estimated path is within a threshold distance of the autonomous vehicle's planned route, and/or the confidence score for the pedestrian's path is less than the threshold confidence, the autonomous vehicle can: broadcast an audio track (e.g., an audible alarm signal); track the pedestrian following replay of the audio track in Block S122, as described below; and then cease broadcast of the audio track once the autonomous vehicle determines that the gaze of the pedestrian has intersected the autonomous vehicle” (Paris: [0047]); [FIG. 1] Element S150, based on object location; [FIG. 
2] emitting at least one of an audio signal or a visual signal in a direction based at least in part on the location of the object can be observed, for the benefit of calculating a revised intent of the pedestrian based on actions of the pedestrian following replay of the audio track. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by a modified Ostafew to include broadcasting an alarm taught by Paris. One of ordinary skill in the art would have been motivated to make this modification in order to calculate a revised intent of the pedestrian based on actions of the pedestrian following replay of the audio track. REGARDING CLAIM 12, as best understood, Ostafew, as modified, remain as applied above to claim 6, and further, Ostafew, as modified, does not explicitly disclose, emitting at least one of an audio signal or a visual signal in a direction based on the location of the object; and performing at least one of: based on determining that the location of the object remains the same, controlling the vehicle around the location; or based on determining a change in the location of the object to a second location that does not cause the object to block the vehicle trajectory, controlling the vehicle according to the vehicle trajectory. However, in the same field of endeavor, Paris teaches, “In one variation, the autonomous vehicle can also selectively execute Block S150 based on whether the autonomous vehicle has determined that the pedestrian has visually observed the autonomous vehicle. For example, the autonomous vehicle can implement methods and techniques described above to estimate the gaze of the pedestrian in Block S120.
If the autonomous vehicle determines that the pedestrian's gaze has not yet met the autonomous vehicle and the pedestrian is within a threshold distance of the autonomous vehicle's planned route, the pedestrian's estimated path is within a threshold distance of the autonomous vehicle's planned route, and/or the confidence score for the pedestrian's path is less than the threshold confidence, the autonomous vehicle can: broadcast an audio track (e.g., an audible alarm signal); track the pedestrian following replay of the audio track in Block S122, as described below; and then cease broadcast of the audio track once the autonomous vehicle determines that the gaze of the pedestrian has intersected the autonomous vehicle” (Paris: [0047]); (Paris: [FIG. 1]) Element S150, based on object location; (Paris: [FIG. 2]) emitting at least one of an audio signal or a visual signal in a direction based at least in part on the location of the object can be observed; (Paris: [FIG. 2]) Flow chart, based at least in part on determining that the location of the object remains substantially the same, controlling the vehicle around the location; or based at least in part on determining a change in the location of the object to a second location that does not at least partially block the vehicle trajectory, controlling the vehicle according to the vehicle trajectory, can be observed, for the benefit of collision mitigation. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify a vehicle disclosed by a modified Ostafew to include pedestrian tracking taught by Paris. One of ordinary skill in the art would have been motivated to make this modification in order to calculate a revised intent of the pedestrian based on actions of the pedestrian following replay of the audio track. 
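The signal-then-branch behavior of claim 12 (emit a signal toward the object, then either go around it or resume the original trajectory depending on how it responds) reduces to a small decision function. The maneuver labels and the movement tolerance below are illustrative assumptions, not language from the claim or from Paris:

```python
def respond_to_blocking_object(loc_before, loc_after, corridor, tol=0.2):
    """After emitting an audio/visual signal toward the object, pick a
    maneuver from the object's observed response (illustrative decision
    logic; the maneuver labels are hypothetical)."""
    lo, hi = corridor
    moved = abs(loc_after - loc_before) > tol
    still_blocking = lo <= loc_after <= hi
    if not still_blocking:
        return "follow_original_trajectory"   # object cleared the path
    if not moved:
        return "navigate_around_location"     # object stayed put: go around
    return "wait_and_reassess"                # moving, but still in corridor
```

The first two return values correspond to the two "performing at least one of" branches recited in the claim; the third is an added fallback for an object that is moving but has not yet cleared the corridor.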
REGARDING CLAIM 13, Ostafew, as modified, remain as applied above to claim 6, and further, Ostafew, as modified, does not explicitly disclose, receiving at least one of an audio signal via a microphone or a visual signal via a camera; and determining that the at least one of the audio signal or the visual signal comprises an indication that the object does not intend to move out of the vehicle trajectory, wherein the likelihood that the object will continue to block the vehicle trajectory is based at least in part on the indication. However, in the same field of endeavor, Paris teaches, “The autonomous vehicle can additionally or alternatively include: a set of infrared emitters configured to project structured light into a field near the autonomous vehicle; a set of infrared detectors (e.g., infrared cameras); and a processor configured to transform images output by the infrared detector(s) into a depth map of the field. The autonomous vehicle can also include one or more color cameras facing outwardly from the front, rear, and left lateral and right lateral sides of the autonomous vehicle. For example, each camera can output a video feed containing a sequence of digital photographic images (or “frames”), such as at a rate of 20 Hz. Furthermore, the autonomous vehicle can include a set of infrared proximity sensors arranged along the perimeter of the base of the autonomous vehicle and configured to output signals corresponding to proximity of objects and pedestrians within one meter of the autonomous vehicle. The controller within the autonomous vehicle can thus fuse data streams from the LIDAR sensor(s), the color camera(s), and the proximity sensor(s) into one real-time scan image of surfaces (e.g., surfaces of roads, sidewalks, road vehicles, pedestrians, etc.) around the autonomous vehicle per scan cycle, as shown in FIG. 1. 
Alternatively, the autonomous vehicle can stitch digital photographic images—output by multiple color cameras arranged throughout the autonomous vehicle—into a scan data or 3D point cloud of a scene around the autonomous vehicle” (Paris: [0016]); “the autonomous vehicle can track the pedestrian's motion, revise the predicted intent of the pedestrian according to the pedestrian's post-prompt motion, and calculate an increased confidence score for the revised intent of the pedestrian given stronger direct motion of the pedestrian” (Paris: [0012]), for the benefit of collision mitigation. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify a vehicle disclosed by a modified Ostafew to include pedestrian tracking taught by Paris. One of ordinary skill in the art would have been motivated to make this modification in order to calculate a revised intent of the pedestrian based on actions of the pedestrian following replay of the audio track. REGARDING CLAIM 19, as best understood, Ostafew, as modified, remain as applied above to claim 15, and further, Ostafew, as modified, does not explicitly disclose, the instructions further cause the processors to perform operations comprising: receiving an indication that the object does not intend to move out of the vehicle trajectory, wherein the likelihood that the object will continue to block the vehicle trajectory is based on the indication. However, in the same field of endeavor, Paris teaches, “the autonomous vehicle can track the pedestrian's motion, revise the predicted intent of the pedestrian according to the pedestrian's post-prompt motion, and calculate an increased confidence score for the revised intent of the pedestrian given stronger direct motion of the pedestrian” (Paris: [0012]), for the benefit of collision mitigation.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify a vehicle disclosed by a modified Ostafew to include pedestrian tracking taught by Paris. One of ordinary skill in the art would have been motivated to make this modification in order to mitigate collision with a pedestrian or other object. Claims 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ostafew (US 20210031760 A1) in view of Maurer (US 20150210311 A1) as applied to claims 6 and 15 above, and further in view of Zhang (US 20190072966 A1). REGARDING CLAIM 14, as best understood, Ostafew, as modified, remain as applied above to claim 6, and further, Ostafew, as modified, discloses maneuvering a vehicle with consideration for passenger comfort [0046]. Ostafew, as modified, does not explicitly disclose, controlling the vehicle comprises determining an action for the vehicle to take, wherein the action comprises at least one of: maintaining a stopped position in the vehicle trajectory; or determining a second vehicle trajectory around the object; and wherein the action is determined based on at least one of: a safety cost associated with the action, wherein the safety cost is based on a relative state between the object and the vehicle; a comfort cost associated with the action, wherein the comfort cost is based on the relative state between the object and the vehicle; a progress cost associated with the action, wherein the progress cost is based on a vehicle delay associated with the action; or an operational rules cost associated with the action, wherein the operational rules cost is based on one or more regulations associated with the environment. However, in the same field of endeavor, Zhang teaches, [0058] The trajectory processing module 173 can score the first proposed trajectory as related to the predicted trajectories for any of the proximate agents.
The score for the first proposed trajectory relates to the level to which the first proposed trajectory complies with pre-defined goals for the host vehicle 105, including safety, efficiency, legality, passenger comfort, and the like. Minimum score thresholds for each goal can be pre-defined. For example, score thresholds related to turning rates, acceleration or stopping rates, speed, spacing, etc. can be pre-defined and used to determine if a proposed trajectory for host vehicle 105 may violate a pre-defined goal. If the score for the first proposed trajectory, as generated by the trajectory processing module 173 based on the predicted trajectories for any of the proximate agents, may violate a pre-defined goal, the trajectory processing module 173 can reject the first proposed trajectory and the trajectory processing module 173 can generate a second proposed trajectory, for the benefit of creating alternative trajectories with the safest score possible. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify a vehicle disclosed by a modified Ostafew to include safety and comfort scores taught by Zhang. One of ordinary skill in the art would have been motivated to make this modification in order to create alternative trajectories with the safest score possible.
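Zhang's goal-based scoring, as the examiner applies it to the claim 14 cost terms, amounts to selecting the candidate action with the lowest combined cost. The weights and cost values below are hypothetical; neither Zhang nor the claim assigns numeric values:

```python
# Hypothetical weights for the four cost terms recited in claim 14.
WEIGHTS = {"safety": 10.0, "comfort": 2.0, "progress": 1.0, "rules": 5.0}

def total_cost(action):
    """Combine the safety, comfort, progress, and operational-rules costs
    into a single weighted score (lower is better)."""
    return sum(WEIGHTS[k] * action["costs"][k] for k in WEIGHTS)

def select_action(actions):
    """Pick the candidate action with the lowest combined cost."""
    return min(actions, key=total_cost)

# The two actions recited in the claim: hold a stopped position in the
# trajectory, or plan a second trajectory around the object.
candidates = [
    {"name": "stay_stopped",
     "costs": {"safety": 0.0, "comfort": 0.1, "progress": 0.9, "rules": 0.0}},
    {"name": "go_around",
     "costs": {"safety": 0.2, "comfort": 0.3, "progress": 0.1, "rules": 0.0}},
]
best = select_action(candidates)
```

With safety weighted heavily, the sketch prefers staying stopped despite its progress penalty; different weights would flip the choice, which is the trade-off the cost terms are meant to capture.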
REGARDING CLAIM 18, as best understood, Ostafew, as modified, remain as applied above to claim 15, and further, Ostafew, as modified, does not explicitly disclose, controlling the vehicle comprises: determining a first trajectory for the vehicle to take to navigate around the object; determining a second trajectory for the vehicle to take to navigate around the object; determining a first cost associated with the first trajectory and a second cost associated with the second trajectory, wherein the first cost and the second cost comprise at least one of a safety cost, a comfort cost, a progress cost, or an operational rules cost associated with the respective trajectories; determining that the first cost associated with the first trajectory is less than the second cost associated with the second trajectory; and causing the vehicle to navigate around the object according to the first trajectory. However, in the same field of endeavor, Zhang teaches, “The trajectory processing module 173 can score the first proposed trajectory as related to the predicted trajectories for any of the proximate agents. The score for the first proposed trajectory relates to the level to which the first proposed trajectory complies with pre-defined goals for the host vehicle 105, including safety, efficiency, legality, passenger comfort, and the like. Minimum score thresholds for each goal can be pre-defined. For example, score thresholds related to turning rates, acceleration or stopping rates, speed, spacing, etc. can be pre-defined and used to determine if a proposed trajectory for host vehicle 105 may violate a pre-defined goal.
If the score for the first proposed trajectory, as generated by the trajectory processing module 173 based on the predicted trajectories for any of the proximate agents, may violate a pre-defined goal, the trajectory processing module 173 can reject the first proposed trajectory and the trajectory processing module 173 can generate a second proposed trajectory… The second proposed trajectory and the current context of the host vehicle 105 can be provided to the trajectory prediction module 175 for the generation of a new set of predicted trajectories and confidence levels for each proximate agent as related to the second proposed trajectory and the context of the host vehicle 105. The new set of predicted trajectories and confidence levels for each proximate agent as generated by the trajectory prediction module 175 can be output from the trajectory prediction module 175 and provided to the trajectory processing module 173. Again, the trajectory processing module 173 can use the predicted trajectories and confidence levels for each proximate agent corresponding to the second proposed trajectory to determine if any of the predicted trajectories for the proximate agents may cause the vehicle 105 to violate a pre-defined goal based on a related score being below a minimum acceptable threshold. 
If the score for the second proposed trajectory, as generated by the trajectory processing module 173 based on the new set of predicted trajectories for any of the proximate agents, may violate a pre-defined goal, the trajectory processing module 173 can reject the second proposed trajectory and the trajectory processing module 173 can generate a third proposed trajectory. This process can be repeated until a proposed trajectory generated by the trajectory processing module 173 and processed by the trajectory prediction module 175 results in predicted trajectories and confidence levels for each proximate agent that cause the proposed trajectory for the host vehicle 105 to satisfy the pre-defined goals based on a related score being at or above a minimum acceptable threshold. Alternatively, the process can be repeated until a time period or iteration count is exceeded. If the process of an example embodiment as described above results in predicted trajectories, confidence levels, and related scores that satisfy the pre-defined goals, the corresponding proposed trajectory 220 is provided as an output from the prediction-based trajectory planning module 200 as shown in FIG. 6” (Zhang: [0058]), for the benefit of creating alternative trajectories with the safest score possible. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle disclosed by the modified Ostafew to include the safety and comfort scores taught by Zhang. One of ordinary skill in the art would have been motivated to make this modification in order to create alternative trajectories with the safest score possible. 
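The iterate-until-acceptable behavior quoted from Zhang [0058] reduces to a simple loop: propose a trajectory, predict agent responses, score the proposal, and either accept it or reject it and try again, bounded by an iteration cap. The sketch below is illustrative only; `propose`, `predict`, and `score` are hypothetical stand-ins for the roles of trajectory processing module 173 and trajectory prediction module 175, not actual interfaces from the reference.

```python
def plan_trajectory(propose, predict, score, min_score, max_iters=10):
    """Repeat propose -> predict -> score until a proposal meets the minimum
    acceptable score, or the iteration cap is exceeded (both stopping
    conditions appear in the quoted passage).

    propose(i)        -> the i-th proposed trajectory
    predict(traj)     -> predicted proximate-agent trajectories for that proposal
    score(traj, pred) -> a goal-compliance score for the proposal
    """
    for i in range(max_iters):
        trajectory = propose(i)
        predicted = predict(trajectory)
        if score(trajectory, predicted) >= min_score:
            return trajectory  # satisfies the pre-defined goals
    return None  # time/iteration budget exhausted without an acceptable plan
```

Loosely, the claim-18 comparison maps onto the same skeleton if `score` is defined as the negative of a combined cost (safety, comfort, progress, operational rules), so that accepting a proposal amounts to preferring the lower-cost trajectory.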
Response to Arguments

Applicant's arguments with respect to the rejection of the independent claim(s) under 35 U.S.C. § 103 (obviousness) have been considered but are moot because the new ground of rejection does not rely on the combined references applied in the prior rejection of record for the matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Gochev (US 20190161080 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS, whose telephone number is (571) 272-5288. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ANGELA ORTIZ, can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.S./
Examiner, Art Unit 3663

/ANGELA Y ORTIZ/
Supervisory Patent Examiner, Art Unit 3663

Prosecution Timeline

Sep 27, 2019
Application Filed
Jul 16, 2021
Non-Final Rejection — §103, §112
Oct 15, 2021
Interview Requested
Oct 28, 2021
Applicant Interview (Telephonic)
Oct 28, 2021
Examiner Interview Summary
Nov 15, 2021
Response Filed
Dec 27, 2021
Final Rejection — §103, §112
Feb 09, 2022
Interview Requested
Feb 24, 2022
Applicant Interview (Telephonic)
Feb 24, 2022
Examiner Interview Summary
Apr 06, 2022
Notice of Allowance
Apr 06, 2022
Response after Non-Final Action
May 11, 2022
Response after Non-Final Action
Jun 17, 2022
Response after Non-Final Action
Jun 29, 2022
Response after Non-Final Action
Aug 01, 2022
Non-Final Rejection — §103, §112
Oct 03, 2022
Interview Requested
Oct 14, 2022
Applicant Interview (Telephonic)
Oct 17, 2022
Examiner Interview Summary
Nov 07, 2022
Response Filed
Dec 06, 2022
Final Rejection — §103, §112
Jan 10, 2023
Interview Requested
Jan 19, 2023
Applicant Interview (Telephonic)
Jan 19, 2023
Examiner Interview Summary
Feb 09, 2023
Response after Non-Final Action
Mar 13, 2023
Notice of Allowance
Mar 16, 2023
Examiner Interview Summary
Mar 16, 2023
Applicant Interview (Telephonic)
Jun 08, 2023
Response after Non-Final Action
Jun 08, 2023
Response after Non-Final Action
Jun 18, 2023
Response after Non-Final Action
Jun 26, 2023
Response after Non-Final Action
Jul 26, 2023
Response after Non-Final Action
Jul 27, 2023
Response after Non-Final Action
Aug 23, 2023
Response after Non-Final Action
Dec 06, 2023
Response after Non-Final Action
Dec 07, 2023
Response after Non-Final Action
Dec 07, 2023
Response after Non-Final Action
Jul 29, 2025
Response after Non-Final Action
Sep 29, 2025
Request for Continued Examination
Oct 09, 2025
Response after Non-Final Action
Dec 16, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12482356
TRANSPORT MANAGEMENT DEVICE, TRANSPORT MANAGEMENT METHOD, AND TRANSPORT SYSTEM
2y 5m to grant Granted Nov 25, 2025
Patent 12454311
STEER-BY-WIRE STEERING DEVICE AND METHOD FOR CONTROLLING THE SAME
2y 5m to grant Granted Oct 28, 2025
Patent 12428170
METHODS AND APPARATUS FOR AUTOMATIC DRONE RESUPPLY OF A PRODUCT TO AN INDIVIDUAL BASED ON GPS LOCATION, WITHOUT HUMAN INTERVENTION
2y 5m to grant Granted Sep 30, 2025
Patent 12427974
MULTIPLE MODE BODY SWING COLLISION AVOIDANCE SYSTEM AND METHOD
2y 5m to grant Granted Sep 30, 2025
Patent 12372360
Methods and Systems for Generating Alternative Routes
2y 5m to grant Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
45%
Grant Probability
58%
With Interview (+12.8%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 131 resolved cases by this examiner. Grant probability derived from career allow rate.
