Prosecution Insights
Last updated: April 17, 2026
Application No. 16/558,077

Autonomous Vehicle, Motion, and Object Predictability System

Non-Final OA (§103)
Filed: Aug 31, 2019
Examiner: ALGEHAIM, MOHAMED A
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: unknown
OA Round: 12 (Non-Final)
Grant Probability: 59% (Moderate)
OA Rounds: 12-13
To Grant: 3y 3m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 59% (122 granted / 207 resolved; +6.9% vs TC avg)
Interview Lift: +21.9% (strong lift in resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 37
Total Applications: 244 (career history, across all art units)
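The headline figures above are internally consistent; a quick sketch in Python, using only the numbers shown on this page, reproduces the rounding (the implied Tech Center average is an inference from the displayed delta, not a number the dashboard states):

```python
# Career allow rate from the dashboard's raw counts: 122 granted of 207 resolved.
granted, resolved = 122, 207
allow_rate = 100 * granted / resolved
assert round(allow_rate) == 59  # displayed as "59%"

# The "+6.9% vs TC avg" delta implies a Tech Center average near 52%.
implied_tc_avg = allow_rate - 6.9
print(f"allow rate = {allow_rate:.1f}%, implied TC avg = {implied_tc_avg:.1f}%")
```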

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 207 resolved cases
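As a consistency check on the statute figures above, subtracting each reported delta from its rate recovers the same implied Tech Center baseline for every row (a back-of-the-envelope sketch; the common ~40% baseline is inferred from the displayed pairs, not stated by the dashboard):

```python
# Each row pairs a per-statute rate with its delta vs the Tech Center average;
# rate - delta should recover the same implied TC baseline for every statute.
rows = {"§101": (14.8, -25.2), "§103": (49.6, +9.6),
        "§102": (15.6, -24.4), "§112": (15.3, -24.7)}
for statute, (rate, delta) in rows.items():
    implied = rate - delta
    print(f"{statute}: implied TC avg = {implied:.1f}%")  # 40.0% for every row
```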

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/02/2026 has been entered.

Status of Claims

Claims 1-28 of U.S. Application No. 16/558,077 filed on 02/02/2026 have been examined. This Office Action is in response to the Applicant's amendments and remarks filed 02/02/2026. Claims 1-5, 7, 9, 11-14, 17, & 19-20 are presently amended and Claims 21-28 are newly added. Claims 1-28 are presently pending and are presented for examination.

Response to Arguments

In regard to the previous claim interpretation under 35 U.S.C. § 112(f): Applicant's amendments do not overcome the previous 35 U.S.C. 112(f) claim interpretation. Further, Applicant does not provide any separate remarks in regard to the previous 35 U.S.C. 112(f) claim interpretation. Accordingly, the previous 35 U.S.C. 112(f) claim interpretation is maintained.

In regard to the previous rejection under 35 U.S.C. § 103: Applicant's arguments with respect to the independent claim(s) have been considered but are moot because the new combination for the grounds of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A new ground of rejection is made in view of US 2018/0208195A1 (“Hutcheson”).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are, in claim 1:

“an identification module configured to identify one or more objects including a first object”

“an object trajectory module configured to determine a plurality of predicted paths of the first object…”

“a path planning module configured to create a plurality of paths to avoid the first object and the plurality of predicted paths of the first object…”

A review of the specification shows that the following appears to be the corresponding structure for the above limitations described in the specification (see at least Applicant Specification, para. [0093]: These modules may be implemented in software stored in one or more non-transitory computer readable mediums for execution by one or more processors 802. A number of processors 802 may be utilized by the AV including CPUs, GPUs, specialized artificial intelligence processors, and FPGAs specifically developed for the calculation of various recognition programs and probabilistic movement models.).

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 3, 5-6, 23 & 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0250626A1 (“Ghafarianzadeh”), in view of US 2020/0064483A1 (“Li”), in view of US 2017/0132334A1 (“Levinson”), in view of US 2018/0208195A1 (“Hutcheson”).

As per claim 1, Ghafarianzadeh discloses A system including an autonomous vehicle comprising: the autonomous vehicle configured with a plurality of lidar devices (see at least Ghafarianzadeh, para. [0068]: The sensor(s) 412 may include, for example, one or more lidar sensors,); an identification module configured to identify from the data one or more objects including a first object and the plurality of predicted paths of the first object (see at least Ghafarianzadeh, para.
[0070]: For example, the perception engine 426 may be configured to predict multiple object trajectories based on, for example, probabilistic determinations or multi-modal distributions of predicted positions, trajectories, and/or velocities associated with an object.); an object trajectory module configured to determine a plurality of predicted paths of the first object including a probabilistic distribution of the plurality of predicted paths of the first object based at least in part on an identification of the first object (see at least Ghafarianzadeh, para. [0070]: In some examples, perception engine 426 may be configured to predict more than an object trajectory of one or more objects. For example, the perception engine 426 may be configured to predict multiple object trajectories based on, for example, probabilistic determinations or multi-modal distributions of predicted positions, trajectories, and/or velocities associated with an object. para. [0077]: The selection may be based at least in part on a current route, the probability that the stationary vehicle is a blocking vehicle, current vehicle trajectory, and/or detected object trajectory data. Upon selecting a trajectory, the planner 430 may transmit the trajectory to the drive system 416 to control the example vehicle system 402 according to the selected trajectory.); a path planning module configured to create a plurality of paths to avoid the first object and at least one of the plurality of predicted paths of the first object (see at least Ghafarianzadeh, Fig. 7, para. [0038-0040]: The planner may use the probability indicating whether the stationary vehicle 208 is a blocking vehicle to generate a trajectory with which to control the autonomous vehicle 204. For example, FIG. 2 illustrates two example trajectories that the planner might determine in alternate scenarios. para. 
[0070]: For example, the perception engine 426 may be configured to predict multiple object trajectories based on, for example, probabilistic determinations or multi-modal distributions of predicted positions, trajectories, and/or velocities associated with an object. & para. [0097]: At operation 712, the example process 700 may include controlling the vehicle based, at least in part, on the probability, according to any of the techniques discussed herein. For example, this may include controlling the vehicle to pass a blocking vehicle (712(A)) or controlling the vehicle to wait for a non-blocking vehicle (712(B)).); wherein a selected path of the plurality of paths to avoid the first object is based at least in part on a module configured to utilize a rating associated with a classification type of the first object, and one or more objects attributes, wherein the rating is based on a difference between the plurality of predicted paths of the first object and one or more historical paths of a type of the first object (see at least Ghafarianzadeh, para. [0038-0039]: In some examples, the BV ML model 214 may be generated (learned) from labeled feature data. For example, the BV ML model 214 may include a deep-learning model that learns to output a probability that a stationary vehicle 208 is a blocking vehicle based on input sample feature values that are associated with labels that indicate whether the sample feature values came from scenario where a stationary vehicle 208 was a blocking vehicle or not (i.e., a ground truth label)… At operation 216, the example process 200 may include transmitting the probability determined by the perception engine to a planner that determines a trajectory for controlling the autonomous vehicle, as depicted at 218 and 220. The planner may use the probability indicating whether the stationary vehicle 208 is a blocking vehicle to generate a trajectory with which to control the autonomous vehicle 204. para. [0046-0048] & para. 
[0070]), wherein the selected path is enabled to be utilized by the autonomous vehicle to move along the selected path (see at least Ghafarianzadeh, para. [0038-0039]: … At operation 216, the example process 200 may include transmitting the probability determined by the perception engine to a planner that determines a trajectory for controlling the autonomous vehicle, as depicted at 218 and 220. The planner may use the probability indicating whether the stationary vehicle 208 is a blocking vehicle to generate a trajectory with which to control the autonomous vehicle 204.). Ghafarianzadeh does not explicitly disclose the autonomous vehicle configured with a plurality of lidar devices including a spinning lidar device positioned on a top of the autonomous vehicle, a second lidar device positioned proximate to a front of the autonomous vehicle, and a third lidar device positioned proximate to a back of the autonomous vehicle; wherein a selected path of the plurality of paths to avoid the first object is based at least in part on a machine learning module configured to utilize a variability rating associated with a classification type of the first object and a database of object attributes, wherein the variability rating is based on a difference between the plurality of predicted paths of the first object and a historical path of a type of the first object; and wherein the autonomous vehicle is configured to enable an aggressive driving mode based on a setting and an amount of sensing and perceiving of an environment around the autonomous vehicle. Li teaches the autonomous vehicle configured with a plurality of lidar devices including a spinning lidar device positioned on a top of the autonomous vehicle (see at least Li, para. [0080]: The sensors may be within a vehicle housing, outside a vehicle housing, or part of the vehicle housing. 
The sensors may be distributed on a top surface of a vehicle, bottom surface of a vehicle, front surface of a vehicle, rear surface of a vehicle, right side surface of a vehicle or a left side surface of a vehicle. & para. [0083]: The sensing assembly may comprise one or more lidar 120 units. & para. [0125]: Any of the sensors provided herein may rotate relative to the vehicle. The one or more sensors may rotate about one axis, two axes, or three axes, relative to the vehicle.), a second lidar device positioned proximate to a front of the autonomous vehicle (see at least Li, para. [0080]: The sensors may be within a vehicle housing, outside a vehicle housing, or part of the vehicle housing. The sensors may be distributed on a top surface of a vehicle, bottom surface of a vehicle, front surface of a vehicle, rear surface of a vehicle, right side surface of a vehicle or a left side surface of a vehicle. & para. [0083]: The sensing assembly may comprise one or more lidar 120 units. ), and a third lidar device positioned proximate to a back of the autonomous vehicle (see at least Li, para. [0080]: The sensors may be within a vehicle housing, outside a vehicle housing, or part of the vehicle housing. The sensors may be distributed on a top surface of a vehicle, bottom surface of a vehicle, front surface of a vehicle, rear surface of a vehicle, right side surface of a vehicle or a left side surface of a vehicle. & para. [0083]: The sensing assembly may comprise one or more lidar 120 units. 
); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of the autonomous vehicle configured with a plurality of lidar devices including a spinning lidar device positioned on a top of the autonomous vehicle, a second lidar device positioned proximate to a front of the autonomous vehicle, and a third lidar device positioned proximate to a back of the autonomous vehicle of Li, with a reasonable expectation of success, in order to improve the operational safety of vehicles, and enable these vehicles to be self-piloted in a safe manner (see at least Li, para. [0005]). Levinson teaches wherein a selected path of the plurality of paths to avoid the first object is based at least in part on a machine learning module configured to utilize a variability rating associated with a classification type of the first object and a database of object attributes, wherein the variability rating is based on a difference between the plurality of predicted paths of the first object and a historical path of a type of the first object (see at least Levinson, para. [0060]: Note that some candidate trajectories may be ranked or associated with higher degrees of confidence than other candidate trajectories…. para. [0065]: If the external object is labeled as dynamic, and further data about the external object may indicate a typical level of activity and velocity, as well as behavior patterns associated with the classification type. Further data about the external object may be generated by tracking the external object. As such, the classification type can be used to predict or otherwise determine the likelihood that an external object may, for example, interfere with an autonomous vehicle traveling along a planned path. para. 
[0075]: At 506, data representing objects based on the at least two subsets of sensor data may be derived at a processor… A confidence level is determined at 512 to exceed a range of acceptable confidence levels associated with normative operation of an autonomous vehicle. Therefore, in this case, a confidence level may be such that a certainty of selecting an optimized path is less likely, whereby an optimized path may be determined as a function of the probability of facilitating collision-free travel, complying with traffic laws, providing a comfortable user experience (e.g., comfortable ride), and/or generating candidate trajectories on any other factor. para. [0087-0089]: At 1208, a local position is determined at a planner based on local pose data. At 1210, a state of operation of an autonomous vehicle may be determined (e.g., probabilistically), for example, based on a degree of certainty for a classification type and a degree of certainty of the event, which may be based on any number of factors, such as speed, position, and other state information… para. [0094] & para. [0148]: Further, dynamic object data modeler 3621 may generate a data model describing predictive motion of object 3682b in relation to interactions with other dynamic objects, such as dynamic object 3680 or dynamic object 3682a, which is shown as a dog in motion. In the absence of dynamic object 3682a, dog 3682b may be associated with a first probability of engaging in an activity (e.g., leaping forward and running). However, in the event that dog 3682b encounters or interacts with (or chases) dog 3682a (having a predicted range of motion 3683), the probability that dog 3682b engages in the activity may increase sharply. For instance, the probability that dog 3682b leaps forward and instinctively chases dog 3682a may increase from about 10% (e.g., based on, for example, logged data) to about 85%.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein a selected path of the plurality of paths to avoid the first object is based at least in part on a machine learning module configured to utilize a variability rating associated with a classification type of the first object and a database of object attributes, wherein the variability rating is based on a difference between the plurality of predicted paths of the first object and a historical path of a type of the first object of Levinson, with a reasonable expectation of success, in order to provide a comfortable user experience (e.g., comfortable ride) (see at least Levinson, para. [0075]).

Hutcheson teaches wherein the autonomous vehicle is configured to enable an aggressive driving mode based on a setting and an amount of sensing and perceiving of an environment around the autonomous vehicle (see at least Hutcheson, para. [0057]: The RA score may be adjusted by information collected by a vehicle about its surroundings using its on-board perception sensors. The vehicle may measure the parameters of operation of other vehicles and determine a need to update the RA score. For example, the vehicle may determine a new vehicle has been detected which is driving at a speed above the speed limit and higher than a prespecified threshold, and use this information to update the overall RA score. para. [0070-0071]: Modes of Operation. Referring to FIG. 4A, a risk value, X, computed for a vehicle by the risk classifier may, in addition to being transmitted to other vehicles, be fed to a driving mode controller which may determine an allowed mode of vehicle operation. In some embodiments, the vehicle may change its driving behavior, or mode of operation, based on changes in the RA score… For example, available modes of operation may be defined as Aggressive, Moderate, or Conservative.
If the vehicle is in a low risk probability state, the mode can include any of the three, and possibly other, modes of operation.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the autonomous vehicle is configured to enable an aggressive driving mode based on a setting and an amount of sensing and perceiving of an environment around the autonomous vehicle of Hutcheson, with a reasonable expectation of success, in order to gradually reduce the performance of vehicles approaching denser traffic to a level that is consistent with the threat of additional collisions (see at least Hutcheson, para. [0046]).

As per claim 3, Ghafarianzadeh discloses wherein the identification module is further configured to determine whether the first object of the one or more objects is above a speed limit and is human driven or autonomous; and the identification module is further configured to determine whether the first object is associated with an unpredictable behavior based at least in part on an artificial intelligence labeled training data set (see at least Ghafarianzadeh, para. [0019-0020]: The perception engine may receive sensor data and, based, at least in part, on the sensor data detect an object in the environment of the autonomous vehicle 102, classify that object as some type of vehicle, and determine that the sensor data indicates that a velocity of the detected vehicle does not exceed a predetermined threshold velocity (e.g., a sensed velocity of the vehicle is not greater than or equal to 0.1 meters per second or 0.05 meters per second). As used herein a vehicle may be, for example and without limitation, a means of physical transportation such as a passenger vehicle, a delivery truck, a bicycle, a drone for transporting objects, etc. & para.
[0030-0034]: For example, the table below illustrates an example of feature values 210 determined by the perception engine that correspond to features upon which the BV ML may rely on to determine the probability that the stationary vehicle 208 is a blocking vehicle. In the example given, some feature values 210 were either not determined by the perception engine or were not applicable or available from the sensor data (i.e., “Blocked by Another Object,” “Other Object Behavior”). Some of these features, and others, are discussed in regards to FIGS. 3A-3F.).

As per claim 5, Ghafarianzadeh discloses the autonomous vehicle is configured to be guided by a separate autonomous vehicle; and the autonomous vehicle is configured for vehicle to vehicle communication including communication of at least one path between the autonomous vehicle and the separate autonomous vehicle over a wireless connection (see at least Ghafarianzadeh, para. [0066-0073]: In various implementations, the network interface 410 may support communication via wireless general data networks, such as a Wi-Fi network, and/or telecommunications networks, such as, for example, cellular communication networks, satellite networks, and the like.; the sensor data discussed herein may be received at a first vehicle and transmitted to a second vehicle. In some examples, sensor data received from a different vehicle may be incorporated into the feature values determined by the perception engine. For example, the sensor data received from the first vehicle may be used to fill in a feature value that was unavailable to the second vehicle and/or to weight feature values determined by the second vehicle from sensor data received at the second vehicle.; perception engine 426 may be configured to predict more than an object trajectory of one or more objects.
For example, the perception engine 426 may be configured to predict multiple object trajectories based on, for example, probabilistic determinations or multi-modal distributions of predicted positions, trajectories, and/or velocities associated with an object.).

As per claim 6, Ghafarianzadeh discloses wherein a classification of the one or more objects of the environment is determined by a server prior to the autonomous vehicle entering an environment (see at least Ghafarianzadeh, Fig. 2 & para. [0032]: In some examples, the autonomous vehicle 204 may attempt to determine feature values 210 for at least a subset of possible features for which the BV ML model is configured. For example, the table below illustrates an example of feature values 210 determined by the perception engine that correspond to features upon which the BV ML may rely on to determine the probability that the stationary vehicle 208 is a blocking vehicle. In the example given, some feature values 210 were either not determined by the perception engine or were not applicable or available from the sensor data & para. [0066-0073]).

As per claim 23, Ghafarianzadeh does not explicitly disclose wherein the aggressive driving mode is enabled on a condition that a safety score of the environment satisfies a threshold. Hutcheson teaches wherein the aggressive driving mode is enabled on a condition that a safety score of the environment satisfies a threshold (see at least Hutcheson, para. [0055]: At some point, such as after updates have been received from all nearby vehicles, the current updated RA may be evaluated by a driving mode controller 466. Based on the current updated RA, the vehicle may select a driving mode (e.g., aggressive, moderate, conservative), and control its operation (e.g., dynamics) according to such mode.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the aggressive driving mode is enabled on a condition that a safety score of the environment satisfies a threshold of Hutcheson, with a reasonable expectation of success, in order to gradually reduce the performance of vehicles approaching denser traffic to a level that is consistent with the threat of additional collisions (see at least Hutcheson, para. [0046]).

As per claim 26, Ghafarianzadeh does not explicitly disclose the aggressive driving mode includes following an aggressive driving path; and the aggressive driving mode is enabled based on communication and coordination with other autonomous vehicles. Hutcheson teaches the aggressive driving mode includes following an aggressive driving path; and the aggressive driving mode is enabled based on communication and coordination with other autonomous vehicles (see at least Hutcheson, para. [0053-0055]: The received RA[1] may be updated in the RA Sum 462, and an updated RA for the vehicle communicated via V2V to nearby vehicles (464). In some embodiments, the vehicle may gather its sensor data, together with its own dynamics with its sensors, for communication with the updated RA to nearby vehicles. Similar sequences may occur for each nearby vehicle as its V2V communications are received by the current vehicle, up through vehicle n… At some point, such as after updates have been received from all nearby vehicles, the current updated RA may be evaluated by a driving mode controller 466. Based on the current updated RA, the vehicle may select a driving mode (e.g., aggressive, moderate, conservative), and control its operation (e.g., dynamics) according to such mode.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of the aggressive driving mode includes following an aggressive driving path; and the aggressive driving mode is enabled based on communication and coordination with other autonomous vehicles of Hutcheson, with a reasonable expectation of success, in order to gradually reduce the performance of vehicles approaching denser traffic to a level that is consistent with the threat of additional collisions (see at least Hutcheson, para. [0046]).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of US 2016/0355181A1 (“Morales”).

As per claim 2 Ghafarianzadeh does not explicitly disclose the autonomous vehicle is configured to execute the selected path responsive to a detection of a pose of the first object. Morales teaches the autonomous vehicle is configured to execute the selected path responsive to a detection of a pose of the first object (see at least Morales, para. [0083-0087]: For the type of an animal that does not react to a warning by sound or light but may enter the traveling road of the vehicle, avoiding the animal by braking or steering is an efficient assistance mode. More specifically, as schematically shown in FIG. 4B, the edge images a and b representing two legs are detected in the edge image when the object is a pedestrian (bipedal walking object) (figure in the middle), and the edge image ‘a’ representing the outline is detected when the object is a vehicle (figure on the right). On the other hand, the edge images a, b, c, d, and e of the four legs and the neck are detected when the object is a tetrapod (figure on the left).
Therefore, it can be determined whether the image d of the moving object is a tetrapod animal by determining whether there are edge images a, b, c, d, and e of the four legs and the neck in the edge image S of the difference image.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of the autonomous vehicle is configured to execute the selected path responsive to a detection of a pose of the first object of Morales, with a reasonable expectation of success, in order to detect the possibility of collision (see at least Morales, para. [0003]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of US 2019/0096256A1 (“Rowell”), further in view of US 2020/0086854A1 (“Liu”).

As per claim 4 Ghafarianzadeh does not explicitly disclose wherein the object trajectory module is further configured to determine the plurality of predicted paths based on a profile of the classification type of the first object associated with a historical corresponding behavior of staying on an anticipated path; and the path planning module is further configured to determine a new path for the autonomous vehicle that comprises an increased distance between the autonomous vehicle and the first object when the historical corresponding behavior is associated to a variability score above a threshold. Rowell teaches wherein the object trajectory module is further configured to determine the plurality of predicted paths based on a profile of the classification type of the first object associated with a historical corresponding behavior of staying on an anticipated path (see at least Rowell, para. [0021-0023]: the vehicle 100 may classify each of the objects into an object classification, which will be described below with reference to FIG. 2.
Once the objects are classified, the vehicle 100 may predict a trajectory of each of the objects based on a behavior of the object determined from a model corresponding to the object classification. The details of predicting a trajectory will be described below with reference to FIGS. 3-8. Objects may be either in a non-obstacle position or an obstacle position. The non-obstacle position is a position that is not in the driving trajectory of the vehicle 100. For example, if an object is on the sidewalk, the object is in the non-obstacle position because the object is not in the driving trajectory (e.g., road) of the vehicle 100. & para. [0074-0076]: The one or more processors 102 predict a trajectory of the object based on behavior characteristics of the object determined from a model corresponding to the object classification. Thus, the one or more processors 102 may predict the trajectory of the pet as a trajectory in random direction or a trajectory toward a person nearby. As another example, if the object is classified as a vehicle, the one or more processors 102 predict the trajectory of the vehicle based on the behavior characteristics of the vehicle such as “following roads,” “making turn at intersection,” “following traffic rules,” etc.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the object trajectory module is further configured to determine the plurality of predicted paths based on a profile of the classification type of the first object associated with a historical corresponding behavior of staying on an anticipated path of Rowell, with a reasonable expectation of success, in order to effectively inform objects at issue (see at least Rowell, para. [0037]). 
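Rowell's classification-driven prediction (a pet moving erratically or toward a nearby person, a vehicle following roads and traffic rules) can be illustrated with a minimal sketch. The profile table, numeric variability values, and the path-count rule below are assumptions added for illustration only; they do not appear in Rowell:

```python
# Hypothetical behavior profiles keyed by object classification,
# loosely following Rowell's examples ("pet" -> erratic motion,
# "vehicle" -> follows roads / traffic rules).
BEHAVIOR_PROFILES = {
    "pet": {"stays_on_path": False, "variability": 0.9},
    "pedestrian": {"stays_on_path": True, "variability": 0.5},
    "vehicle": {"stays_on_path": True, "variability": 0.2},
}

def predicted_path_count(classification: str, base_paths: int = 1) -> int:
    """Return how many candidate paths to enumerate for an object.

    Higher-variability classes get more predicted paths; the scaling
    rule here is an assumption for illustration, not Rowell's method.
    """
    profile = BEHAVIOR_PROFILES.get(classification, {"variability": 1.0})
    return base_paths + round(profile["variability"] * 4)
```

A road-following vehicle thus yields few candidate paths, while an erratic pet (or an unknown class, treated conservatively) yields many.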
Liu teaches the path planning module is further configured to determine a new path for the autonomous vehicle that comprises an increased distance between the autonomous vehicle and the first object when the historical corresponding behavior is associated to a variability score above a threshold (see at least Liu, para. [0011]: A computer in a host vehicle can identify one or more target vehicles and assess a threat of collision between the host and target vehicles. Based on the assessed threat, the computer can determine whether performing an intervention to change deceleration of the host vehicle can avoid a collision, and/or can notify a vehicle occupant or user of a recommended action, i.e., braking, steering, and/or accelerating, to avoid a collision. The computer is programmed to assess the threat of collision between host and target vehicles based on predicted lateral and longitudinal distances between the vehicles according to data including respective lengths, widths, and headings of host and target vehicles. Advantageously, a precise evaluation of a possible collision can be provided, and intervention or action to avoid the collision can be minimal, i.e., can include slowing, decelerating, or accelerating the host vehicle and/or steering the host vehicle to safely pass the target vehicle., para. [0017]: Data collectors 110 could also include sensors or the like for detecting conditions outside the host vehicle 101, e.g., medium-range and long-range sensors. For example, sensor data collectors 110 could include mechanisms such as radar, LIDAR, sonar, cameras or other image capture devices, that could be deployed to detect stationary and/or moving objects, including other vehicles, detect a speed, a direction and/or dimensions of an object such as another vehicle, measure a distance between the vehicle 101 and an object, para. 
[0036-0038]: A threat number TN is a numeric value that provides a relative likelihood of a collision between a host vehicle 101 and a target vehicle 201. For example, a threat number of zero could indicate no risk of collision between vehicles 101, 201, whereas a threat number greater than zero could indicate some risk of collision, the risk being greater the greater the threat number. A general threat number TN can be based on one or more constituent threat numbers, including an Acceleration Threat Number ATN and a Steering Threat Number STN.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of the path planning module is further configured to determine a new path for the autonomous vehicle that comprises an increased distance between the autonomous vehicle and the first object when the historical corresponding behavior is associated to a variability score above a threshold of Liu, with a reasonable expectation of success, in order to precisely evaluate a possible collision and minimize the intervention needed to avoid it (see at least Liu, para. [0011]).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of US 2016/0339910A1 (“Jonasson”).

As per claim 7 Ghafarianzadeh does not explicitly disclose wherein the autonomous vehicle is configured to make a lurching movement in an aggressive driving mode prior to the autonomous vehicle moving on the aggressive motion path. Jonasson teaches wherein the autonomous vehicle is configured to make a lurching movement to indicate likely movement prior to the autonomous vehicle moving on the aggressive motion path, and the aggressive motion path is executed responsive to a communication with the first object by a remote cloud device (see at least Jonasson, para.
[0024-0026]: Moreover, since the second boundary represents a wide trajectory passing—at least partly—close to—or relatively close to—the at least first detected environment boundary, the second boundary of the drivable zone is determined such that it indicates a fictive trajectory which margins to the environment boundary may be relatively small, for instance a few centimeters up to several meters depending on e.g. the speed of the vehicle. For instance, the second boundary may represent an “aggressive style” evasion maneuver, which may require a relatively large yaw and lateral acceleration at the beginning of the maneuver while still providing that continued driving of the vehicle is maintained within the drivable zone without crossing the environment boundary.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of an aggressive driving movement comprises a lurching movement to indicate likely movement prior to the autonomous vehicle moving on the aggressive path, and the aggressive path is implemented responsive to a communication with the first object or when the autonomous vehicle is enabled to be controlled by a remote cloud device of Jonasson, with a reasonable expectation of success, in order to provide an improved evasive maneuver approach for a vehicle at risk of an impending or probable collision with an obstacle (see at least Jonasson, para. [0006]).

Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of US 2018/0341822 (“Hovis”).

As per claim 8 Ghafarianzadeh does not explicitly disclose further comprising a priority schema to determine which of the one or more objects to process as the first object.
Hovis teaches further comprising a priority schema to determine which of the one or more objects to process as the first object (see at least Hovis, Fig. 7 & para. [0079-0081]: From steps 612, 614, or 616, the object is assigned a detailed classification based on the new (n+1) or retained (n) classification level from a SDS tree. FIG. 7 shows an exemplary SDS tree 700 having a plurality of classification levels from a base level n=1 to a predetermined level where n=N. As the classification levels progress from a lower number toward N, the details in the classification of the object increase in fidelity.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of a priority schema to determine which of the one or more objects to process as the first object of Hovis, with a reasonable expectation of success, in order to further classify the branches in greater detail of the definition of the object (see at least Hovis, para. [0080]). As per claim 9 Ghafarianzadeh discloses wherein the autonomous vehicle is configured to utilize a first group of the one or more objects for object identification and a subset of the first group for object recognition (see at least Ghafarianzadeh, para. [0032]: At operation 208, the example process 200 may include determining feature values 210 based at least in part on the sensor data. In some examples, the feature values 210 may correspond to features specified by a blocking vehicle (BV) ML model. The BV ML model may be configured to determine a probability of whether the stationary vehicle 208 is a blocking vehicle based on feature values 210 determined by a perception engine of the autonomous vehicle 204. In some examples, the autonomous vehicle 204 may attempt to determine feature values 210 for at least a subset of possible features for which the BV ML model is configured. & para. 
[0059]: In sum, the perception engine may determine one or more feature values (e.g., 15 meters from stationary vehicle to junction, “delivery truck,” green light, traffic flow data including the velocities of other detected vehicles and/or an indication of whether a velocity of the stationary vehicle is anomalous compared to all other vehicles, vehicles of a same lane, or another subset of detected vehicles) that correspond to feature(s) upon which the BV ML model has been trained.).

As per claim 10 Ghafarianzadeh discloses wherein at least part of the object recognition is performed by the autonomous vehicle and at least part of the object recognition is performed by a cloud server or an edge computing device (see at least Ghafarianzadeh, para. [0072-0073]: The BV ML model 428 may include instructions stored on memory 406 that, when executed by the processor(s) 404, configure the processor(s) 404 to receive feature values associated with elements of the environment in which the vehicle system 402 exists, and determine a probability that the stationary vehicle is a blocking vehicle. The BV ML model 428 may include a decision tree(s), and/or deep learning algorithm(s), having nodes through which feature values may be pushed to determine and output. The perception engine 426 may transmit the probability that the stationary vehicle is a blocking vehicle to the planner 430 along with any other additional information that the planner 430 may use to generate a trajectory (e.g., object classifications, object tracks, vehicle pose). In some examples, the perception engine 426 and/or the planner 430 may additionally or alternatively transmit a blocking vehicle indication via the network interface 410 to the remote computing device 422 via network 424 and/or another vehicle 418 via network 420, based, at least in part, on the probability determined by the perception engine 426.).

Claim 11 is rejected under 35 U.S.C.
103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, further in view of US 2017/0221283A1 (“Pal”).

As per claim 11 Ghafarianzadeh does not explicitly disclose wherein a scoring system is configured to score the one or more objects with a variability rating when the one or more objects are present in an intersection with a history of accidents. Pal teaches wherein a scoring system is configured to score the one or more objects with a variability rating when the one or more objects are present in an intersection with a history of accidents (see at least Pal, para. [0074-0077]: collecting traffic data. Traffic data can include any one or more of: accident data (e.g., number of accidents within a predetermined radius of the user, accident frequency, accident rate, types of accidents, frequency of accidents, etc.), traffic level, traffic laws (e.g., speed limit, intersection laws, turning laws), traffic lights, type of vehicular path (e.g., freeway, intersection, etc.), and/or other suitable traffic data.; Contextual data can include any one or more of: temporal data (e.g., time of day, date, etc.), driver data, mobile electronic device usage (e.g., driver texting, usage of smartphone while driving, etc.), vehicle model data (e.g., model, age, accident history, mileage, repair history, etc.), light sensor data (e.g., associated with a user's mobile electronic device, etc.), and/or any other suitable contextual data. para. [0084]: Block S130 can include calculating a vehicle braking profile and/or stopping distance from movement data (and/or from supplementary data). A vehicle braking profile can be calculated from vehicle deceleration over time. Stopping distance can be calculated from distance traveled between initiation of deceleration and a vehicle stopping.
In another example, Block S130 can include identifying or estimating an acceleration feature describing changes in vehicle acceleration (e.g., for use in determining an accident severity score in Block S160). & para. [0146]: Accident severity scores are preferably generated based on movement data (e.g., measured acceleration magnitude, acceleration direction, pre-impact speed, etc.), but can additionally or alternatively be based on supplemental data, other accident characteristics, and/or any suitable data. For example, the method 100 can include determining an accident severity score based on a set of proximity features (e.g., extracted from proximity data collected at a vehicle proximity sensor) and the set of movement features; and selecting a personalized accident response action from a set of accident response actions, based on the accident severity score, and where the personalized accident response action is the accident response action.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein a scoring system is configured to score the one or more objects with a variability rating when the one or more objects are present in an intersection with a history of accidents of Pal, with a reasonable expectation of success, in order to facilitate provision of an accident response action (see at least Pal, para. [0139]).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of Pal, further in view of US 2018/0239361A1 (“Micks”).

As per claim 12 Ghafarianzadeh does not explicitly disclose wherein the autonomous vehicle is configured to recognize the one or more objects and increase a physical space from the identified one or more objects when the autonomous vehicle is in the intersection.
However, Micks teaches wherein the autonomous vehicle is configured to recognize the one or more objects and increase a physical space from the identified one or more objects when the autonomous vehicle is in the intersection (see at least Micks, para. [0036-0037]: In one embodiment, the transceiver 118 may also be used to transmit information to other vehicles to potentially assist them in locating vehicles or objects. During V2V communication the transceiver 118 may receive information from other vehicles about their locations, previous locations or states, other traffic, accidents, road conditions, the locations of parking barriers or parking chocks, or any other details that may assist the vehicle and/or automated driving/assistance system 102 in driving accurately or safely.; Thus, the automated driving/assistance system 102 may be able to determine a distance from the infrastructure transceivers based on the time stamp and then determine its location based on the location of the infrastructure transceivers…V2X communication may also be used to provide information about locations of other vehicles, their previous states, or the like. For example, V2X communications may include information about how long a vehicle has been stopped or waiting at an intersection & para. [0047-0048]: For example, the vehicle 302, or a driver intent component 104 of the vehicle 302, may determine that a blinker for the vehicle 304 is off or on and may determine a direction (e.g., left or right) that corresponds to the blinker. The vehicle 302 may infer an intention of the driver of the vehicle 304 based on the state of the turn signal indicator. Based on the inferred intent, the vehicle 302 may slow down, speed up, and/or turn to avoid a potential collision; the vehicle 302 may obtain information from a stored map, stored driving history, or from wireless signals.
For example, an infrastructure transmitter 306 is shown near the road 300, which may provide specific positioning, environmental attribute details, or other information to the vehicle 302. As further examples, the vehicle 302 may receive information from other vehicles, such as vehicle 304, or from a wireless communication network, such as a mobile communication network. & para. [0074]: The driving maneuver may determine a driving path to avoid collision with the other vehicles in case they perform the predicted driving maneuvers. For example, the driving maneuver component 518 may determine whether to decelerate, accelerate, and/or turn a steering wheel of the parent vehicle. In one embodiment, the driving maneuver component 518 may determine a timing for the driving maneuver. For example, the driving maneuver component 518 may determine that a parent vehicle should wait at an intersection for a period of time because another vehicle is likely to proceed through the intersection during that time period.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the autonomous vehicle is configured to recognize the one or more objects and increase a physical space from the identified one or more objects when the autonomous vehicle is in the intersection of Micks, with a reasonable expectation of success, in order to assist the vehicle and/or automated driving/assistance system in driving accurately or safely (see at least Micks, para. [0036]).

Claims 13-15, & 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Levinson, in view of Li, in view of Hutcheson.
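The spacing behavior at issue in claim 12 (increasing the physical space kept from recognized objects while the autonomous vehicle is in an intersection) might be sketched as follows. The base clearance value and multiplicative factor are hypothetical; the claim and Micks establish only that the spacing increases in an intersection:

```python
def required_clearance_m(base_clearance_m: float,
                         in_intersection: bool,
                         intersection_factor: float = 1.5) -> float:
    """Compute the physical space to keep from a recognized object.

    The multiplicative factor is a placeholder assumption; only the
    direction of the adjustment (more space in an intersection) is
    taken from the claim language.
    """
    if in_intersection:
        return base_clearance_m * intersection_factor
    return base_clearance_m
```

For instance, a 2 m baseline buffer would widen to 3 m while traversing the intersection under these assumed values.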
As per claim 13 Ghafarianzadeh discloses A method for configuring an autonomous vehicle, comprising: acquiring, by one or more sensor acquisition devices of an autonomous vehicle, a data of an environment (see at least Ghafarianzadeh, para. [0024]: For example, global map data and a GPS location received from a sensor of the autonomous vehicle 102 might indicate that a junction 104 lies 100 meters in front of the autonomous vehicle 102; sensor data may indicate that the traffic light 106 is green and, in coordination with global map data, LIDAR data, and/or camera data (though any other sensor modality is contemplated)…); identifying, by an identification module, from the data of the environment an identification of a first object (see at least Ghafarianzadeh, para. [0014]: The perception engine may include one or more ML models and/or other computer-executable instructions for detecting, identifying, classifying, and/or tracking objects from sensor data collected from the environment of the autonomous vehicle…& para. [0070].); determining, by an object trajectory module, a plurality of predicted paths of the first object including a probabilistic distribution of the plurality of predicted paths of the first object based at least in part on the identification of the first object (see at least Ghafarianzadeh, para. [0070]: For example, the perception engine 426 may be configured to predict multiple object trajectories based on, for example, probabilistic determinations or multi-modal distributions of predicted positions, trajectories, and/or velocities associated with an object.); generating, by a path planning module, at least two paths for the autonomous vehicle to travel to avoid the first object and one or more of the plurality of predicted paths of the first object, wherein the path planning module utilizes one or more processors (see at least Ghafarianzadeh, Fig. 7, para.
[0031]: By limiting how far the autonomous vehicle 204 detects stationary vehicles, the autonomous vehicle 204 conserves processing and storage resources. Additionally, by detecting more than just whether a stationary vehicle exists in a same lane as the autonomous vehicle 204, the planner of the autonomous vehicle 204 is able to make more sophisticated decisions about which trajectory to generate and/or choose to control the autonomous vehicle 204. para. [0038-0040]: the example process 200 may include transmitting the probability determined by the perception engine to a planner that determines a trajectory for controlling the autonomous vehicle, as depicted at 218 and 220. The planner may use the probability indicating whether the stationary vehicle 208 is a blocking vehicle to generate a trajectory with which to control the autonomous vehicle 204. For example, FIG. 2 illustrates two example trajectories that the planner might determine in alternate scenarios.), wherein the plurality of predicted paths of the first object is based on a machine learning module configured to utilize a rating associated with the first object, a database of objects and one or more attributes, wherein the rating is based on a difference between the plurality of predicted paths of the first object and a prior path associated to the identification of the first object (see at least Ghafarianzadeh, para. [0038-0039]: In some examples, the BV ML model 214 may be generated (learned) from labeled feature data.
For example, the BV ML model 214 may include a deep-learning model that learns to output a probability that a stationary vehicle 208 is a blocking vehicle based on input sample feature values that are associated with labels that indicate whether the sample feature values came from scenario where a stationary vehicle 208 was a blocking vehicle or not (i.e., a ground truth label)… At operation 216, the example process 200 may include transmitting the probability determined by the perception engine to a planner that determines a trajectory for controlling the autonomous vehicle, as depicted at 218 and 220. The planner may use the probability indicating whether the stationary vehicle 208 is a blocking vehicle to generate a trajectory with which to control the autonomous vehicle 204. & para. [0046-0048]); selecting a path from the at least two paths of the path planning module and wherein the autonomous vehicle is configured to move along said selected path (see at least Ghafarianzadeh, para. [0038-0039]: … At operation 216, the example process 200 may include transmitting the probability determined by the perception engine to a planner that determines a trajectory for controlling the autonomous vehicle, as depicted at 218 and 220. The planner may use the probability indicating whether the stationary vehicle 208 is a blocking vehicle to generate a trajectory with which to control the autonomous vehicle 204.) 
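The claimed variability rating (based on a difference between the plurality of predicted paths of the first object and a prior path associated with its identification) could plausibly be computed as a mean path deviation. The metric below is one illustrative choice, not the definition used by the application or the cited references:

```python
import math

def variability_rating(predicted_paths, prior_path):
    """Rate an object's variability as the mean pointwise deviation
    between each predicted path and the prior path associated with
    the object's identification.

    Paths are equal-length lists of (x, y) waypoints; the mean
    Euclidean deviation used here is an assumed metric for
    illustration.
    """
    total, count = 0.0, 0
    for path in predicted_paths:
        for (px, py), (qx, qy) in zip(path, prior_path):
            total += math.hypot(px - qx, py - qy)
            count += 1
    return total / count if count else 0.0
```

An object whose predicted paths hug its historical path would score near zero, while widely scattered predictions would yield a high rating that a planner could compare against a threshold.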
Ghafarianzadeh does not explicitly disclose wherein the plurality of predicted paths of the first object is based on a machine learning module configured to utilize a variability rating associated with the first object, a database of objects, and one or more object attributes, wherein the variability rating is based on a difference between the plurality of predicted paths of the first object and a prior path associated to the identification of first object; the autonomous vehicle configured with a plurality of lidar devices including a spinning lidar device positioned on the top of the autonomous vehicle, a second lidar device positioned proximate to a front of the autonomous vehicle, and a third lidar device positioned proximate to a back of the autonomous vehicle, wherein the autonomous vehicle is configured to enable an aggressive driving mode based on a setting and an amount of sensing and perceiving of the environment around the autonomous vehicle. Levinson teaches wherein the plurality of predicted paths of the first object is based on a machine learning module configured to utilize a variability rating associated with the first object, a database of objects, and one or more object attributes, wherein the variability rating is based on a difference between the plurality of predicted paths of the first object and a prior path associated to the identification of first object (see at least Levinson, para. [0060]: Note that some candidate trajectories may be ranked or associated with higher degrees of confidence than other candidate trajectories…. para. [0065]: If the external object is labeled as dynamic, and further data about the external object may indicate a typical level of activity and velocity, as well as behavior patterns associated with the classification type. Further data about the external object may be generated by tracking the external object.
As such, the classification type can be used to predict or otherwise determine the likelihood that an external object may, for example, interfere with an autonomous vehicle traveling along a planned path. para. [0075]: At 506, data representing objects based on at least two subsets of sensor data may be derived at a processor…A confidence level is determined at 512 to exceed a range of acceptable confidence levels associated with normative operation of an autonomous vehicle. Therefore, in this case, a confidence level may be such that a certainty of selecting an optimized path is less likely, whereby an optimized path may be determined as a function of the probability of facilitating collision-free travel, complying with traffic laws, providing a comfortable user experience (e.g., comfortable ride), and/or generating candidate trajectories on any other factor. para. [0087-0089]: At 1208, a local position is determined at a planner based on local pose data. At 1210, a state of operation of an autonomous vehicle may be determined (e.g., probabilistically), for example, based on a degree of certainty for a classification type and a degree of certainty of the event, which may be based on any number of factors, such as speed, position, and other state information…para. [0094] & para. [0148]: Further, dynamic object data modeler 3621 may generate a data model describing predictive motion of object 3682b in relation to interactions with other dynamic objects, such as dynamic object 3680 or dynamic object 3682a, which is shown as a dog in motion. In the absence of dynamic object 3682a, dog 3682b may be associated with a first probability of engaging in an activity (e.g., leaping forward and running). However, in the event that dog 3682b encounters or interacts with (or chases) dog 3682a (having a predicted range of motion 3683), the probability that dog 3682b engages in the activity may increase sharply.
For instance, the probability that dog 3682b leaps forward and instinctively chases dog 3682a may increase from about 10% (e.g., based on, for example, logged data) to about 85%.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the plurality of predicted paths of the first object is based on a machine learning module configured to utilize a variability rating associated with the first object, a database of objects, and one or more object attributes, wherein the variability rating is based on a difference between the plurality of predicted paths of the first object and a prior path associated to the identification of first object of Levinson, with a reasonable expectation of success, in order to provide a comfortable user experience (e.g., comfortable ride) (see at least Levinson, para. [0075]). Li teaches the autonomous vehicle configured with a plurality of lidar devices including a spinning lidar device positioned on the top of the autonomous vehicle (see at least Li, para. [0080]: The sensors may be within a vehicle housing, outside a vehicle housing, or part of the vehicle housing. The sensors may be distributed on a top surface of a vehicle, bottom surface of a vehicle, front surface of a vehicle, rear surface of a vehicle, right side surface of a vehicle or a left side surface of a vehicle. & para. [0083]: The sensing assembly may comprise one or more lidar 120 units. & para. [0125]: Any of the sensors provided herein may rotate relative to the vehicle. The one or more sensors may rotate about one axis, two axes, or three axes, relative to the vehicle.), a second lidar device positioned proximate to a front of the autonomous vehicle (see at least Li, para. [0080]: The sensors may be within a vehicle housing, outside a vehicle housing, or part of the vehicle housing.
The sensors may be distributed on a top surface of a vehicle, bottom surface of a vehicle, front surface of a vehicle, rear surface of a vehicle, right side surface of a vehicle or a left side surface of a vehicle. & para. [0083]: The sensing assembly may comprise one or more lidar 120 units. ), and a third lidar device positioned proximate to a back of the autonomous vehicle (see at least Li, para. [0080]: The sensors may be within a vehicle housing, outside a vehicle housing, or part of the vehicle housing. The sensors may be distributed on a top surface of a vehicle, bottom surface of a vehicle, front surface of a vehicle, rear surface of a vehicle, right side surface of a vehicle or a left side surface of a vehicle. & para. [0083]: The sensing assembly may comprise one or more lidar 120 units.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of the autonomous vehicle configured with a plurality of lidar devices including a spinning lidar device positioned on the top of the autonomous vehicle, a second lidar device positioned proximate to a front of the autonomous vehicle, and a third lidar device positioned proximate to a back of the autonomous vehicle, of Li, with a reasonable expectation of success, in order to improve the operational safety of vehicles, and enable these vehicles to be self-piloted in a safe manner (see at least Li, para. [0005]). Hutcheson teaches wherein the autonomous vehicle is configured to enable an aggressive driving mode based on a setting and an amount of sensing and perceiving of an environment around the autonomous vehicle (see at least para. [0057]: The RA score may be adjusted by information collected by a vehicle about its surroundings using its on-board perception sensors. The vehicle may measure the parameters of operation of other vehicles and determine a need to update the RA score. 
For example, the vehicle may determine a new vehicle has been detected which is driving at a speed above the speed limit and higher than a prespecified [threshold], and use this information to update the overall RA score. para. [0070-0071]: Modes of Operation. Referring to FIG. 4A, a risk value, X, computed for a vehicle by the risk classifier may, in addition to being transmitted to other vehicles, be fed to a driving mode controller which may determine an allowed mode of vehicle operation. In some embodiments, the vehicle may change its driving behavior, or mode of operation, based on changes in the RA score… For example, available [modes] of operation may be defined as [Aggressive], Moderate, or Conservative. If the vehicle is in a low risk probability state, the mode can include any of the three, and possibly other, modes of operation.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the autonomous vehicle is configured to enable an aggressive driving mode based on a setting and an amount of sensing and perceiving of an environment around the autonomous vehicle of Hutcheson, with a reasonable expectation of success, in order to gradually reduce the performance of vehicles approaching denser traffic to a level that is consistent with the threat of additional collisions (see at least Hutcheson, para. [0046]). As per claim 14 Ghafarianzadeh discloses further comprising: instructing the autonomous vehicle to detect a pose of the first object (see at least Ghafarianzadeh, para. [0070]: The perception engine 426 may include instructions stored on memory 406 that, when executed by the processor(s) 404, configure the processor(s) 404 to receive sensor data from the sensor(s) 412 as input, and output data representative of, for example, one or more of the pose (e.g. 
position and orientation) of an object in the environment surrounding the example vehicle system 402,). As per claim 15 Ghafarianzadeh discloses further comprising: determining, by the identification module, whether the first object is above a speed limit and is human driven or autonomous; and determining, by the identification module, whether the first object is associated with an unpredictable behavior based on an artificial intelligence labeled training data set (see at least Ghafarianzadeh, para. [0019-0020]: The perception engine may receive sensor data and, based, at least in part, on the sensor data detect an object in the environment of the autonomous vehicle 102, classify that object as some type of vehicle, and determine that the sensor data indicates that a velocity of the detected vehicle does not exceed a predetermined threshold velocity (e.g., a sensed velocity of the vehicle is not greater than or equal to 0.1 meters per second or 0.05 meters per second). As used herein a vehicle may be, for example and without limitation, a means of physical transportation such as a passenger vehicle, a delivery truck, a bicycle, a drone for transporting objects, etc. & para. [0030-0034]: For example, the table below illustrates an example of feature values 210 determined by the perception engine that correspond to features upon which the BV ML may rely on to determine the probability that the stationary vehicle 208 is a blocking vehicle. In the example given, some feature values 210 were either not determined by the perception engine or were not applicable or available from the sensor data (i.e., “Blocked by Another Object,” “Other Object Behavior”). Some of these features, and others, are discussed in regards to FIGS. 3A-3F. ). 
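The mapping for claim 15 relies on Ghafarianzadeh's two-part mechanism: a velocity-threshold check that treats a detected vehicle as stationary (para. [0019-0020]) and a feature-value table for the BV ML model in which some entries may be unavailable from the sensor data (para. [0030-0034]). A minimal sketch of that mechanism, using hypothetical names not drawn from the reference:

```python
# Minimal sketch of the cited mechanism; all names are hypothetical.

STATIONARY_THRESHOLD_MPS = 0.1  # predetermined threshold velocity (cf. para. [0019])

def is_stationary(sensed_velocity_mps, threshold=STATIONARY_THRESHOLD_MPS):
    """True when the sensed velocity is not greater than or equal to the threshold."""
    return sensed_velocity_mps < threshold

def collect_feature_values(raw, expected):
    """Build the feature-value table for the BV ML model; entries that were
    not determined or not available from the sensor data stay marked "N/A"
    (cf. para. [0030-0034])."""
    return {name: raw.get(name, "N/A") for name in expected}
```

A stationary classification of this kind would then gate whether the blocking-vehicle probability model is consulted at all.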
As per claim 17 Ghafarianzadeh discloses wherein: the autonomous vehicle is configured to be guided by a separate autonomous vehicle; and the autonomous vehicle is configured for vehicle to vehicle communication including communication of at least one path between the autonomous vehicle and the separate autonomous vehicle over a wireless connection (see at least Ghafarianzadeh, para. [0066-0073]: In various implementations, the network interface 410 may support communication via wireless general data networks, such as a Wi-Fi network, and/or telecommunications networks, such as, for example, cellular communication networks, satellite networks, and the like.; the sensor data discussed herein may be received at a first vehicle and transmitted to a second vehicle. In some examples, sensor data received from a different vehicle may be incorporated into the feature values determined by the perception engine. For example, the sensor data received from the first vehicle may be used to fill in a feature value that was unavailable to the second vehicle and/or to weight feature values determined by the second vehicle from sensor data received at the second vehicle.; perception engine 426 may be configured to predict more than an object trajectory of one or more objects. For example, the perception engine 426 may be configured to predict multiple object trajectories based on, for example, probabilistic determinations or multi-modal distributions of predicted positions, trajectories, and/or velocities associated with an object.). As per claim 18 Ghafarianzadeh discloses wherein a classification of the one or more objects of the environment are determined by a server prior to the autonomous vehicle entering the environment (see at least Ghafarianzadeh, Fig. 2 & para. [0032]: In some examples, the autonomous vehicle 204 may attempt to determine feature values 210 for at least a subset of possible features for which the BV ML model is configured. 
For example, the table below illustrates an example of feature values 210 determined by the perception engine that correspond to features upon which the BV ML may rely on to determine the probability that the stationary vehicle 208 is a blocking vehicle. In the example given, some feature values 210 were either not determined by the perception engine or were not applicable or available from the sensor data. & para. [0066-0073]). Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Levinson, in view of Li, in view of Hutcheson, in view of Rowell, further in view of Liu. As per claim 16 Ghafarianzadeh does not explicitly disclose further comprising: determining, by the object trajectory module, the plurality of predicted paths based on a profile of a type of the first object associated with a historical corresponding behavior of staying on an anticipated path; and determining, by the path planning module, a new path for the autonomous vehicle that comprises an increased distance between the autonomous vehicle and the first object when the historical corresponding behavior is associated to a variability score above a threshold. Rowell teaches further comprising: determining, by the object trajectory module, the plurality of predicted paths based on a profile of a type of the first object associated with a historical corresponding behavior of staying on an anticipated path (see at least Rowell, para. [0021-0023]: the vehicle 100 may classify each of the objects into an object classification, which will be described below with reference to FIG. 2. Once the objects are classified, the vehicle 100 may predict a trajectory of each of the objects based on a behavior of the object determined from a model corresponding to the object classification. The details of predicting a trajectory will be described below with reference to FIGS. 3-8. Objects may be either in a non-obstacle position or an obstacle position. 
The non-obstacle position is a position that is not in the driving trajectory of the vehicle 100. For example, if an object is on the sidewalk, the object is in the non-obstacle position because the object is not in the driving trajectory (e.g., road) of the vehicle 100. & para. [0074-0076]: The one or more processors 102 predict a trajectory of the object based on behavior characteristics of the object determined from a model corresponding to the object classification. Thus, the one or more processors 102 may predict the trajectory of the pet as a trajectory in random direction or a trajectory toward a person nearby. As another example, if the object is classified as a vehicle, the one or more processors 102 predict the trajectory of the vehicle based on the behavior characteristics of the vehicle such as “following roads,” “making turn at intersection,” “following traffic rules,” etc.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of further comprising: determining, by the object trajectory module, the plurality of predicted paths based on a profile of a type of the first object associated with a historical corresponding behavior of staying on an anticipated path of Rowell, with a reasonable expectation of success, in order to effectively inform objects at issue (see at least Rowell, para. [0037]). Liu teaches determining, by the path planning module, a new path for the autonomous vehicle that comprises an increased distance between the autonomous vehicle and the first object when the historical corresponding behavior is associated to a variability score above a threshold (see at least Liu, para. [0011]: A computer in a host vehicle can identify one or more target vehicles and assess a threat of collision between the host and target vehicles. 
Based on the assessed threat, the computer can determine whether performing an intervention to change deceleration of the host vehicle can avoid a collision, and/or can notify a vehicle occupant or user of a recommended action, i.e., braking, steering, and/or accelerating, to avoid a collision. The computer is programmed to assess the threat of collision between host and target vehicles based on predicted lateral and longitudinal distances between the vehicles according to data including respective lengths, widths, and headings of host and target vehicles. Advantageously, a precise evaluation of a possible collision can be provided, and intervention or action to avoid the collision can be minimal, i.e., can include slowing, decelerating, or accelerating the host vehicle and/or steering the host vehicle to safely pass the target vehicle., para. [0017]: Data collectors 110 could also include sensors or the like for detecting conditions outside the host vehicle 101, e.g., medium-range and long-range sensors. For example, sensor data collectors 110 could include mechanisms such as radar, LIDAR, sonar, cameras or other image capture devices, that could be deployed to detect stationary and/or moving objects, including other vehicles, detect a speed, a direction and/or dimensions of an object such as another vehicle, measure a distance between the vehicle 101 and an object, para. [0036-0038]: A threat number TN is a numeric value that provides a relative likelihood of a collision between a host vehicle 101 and a target vehicle 201. For example, a threat number of zero could indicate no risk of collision between vehicles 101, 201, whereas a threat number greater than zero could indicate some risk of collision, the risk being greater the greater the threat number. A general threat number TN can be based on one or more constituent threat numbers, including an Acceleration Threat Number ATN and a Steering Threat Number STN.). 
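Liu's paragraphs [0036-0038] describe a general threat number TN built from constituent numbers such as an Acceleration Threat Number (ATN) and a Steering Threat Number (STN), where zero indicates no risk of collision and larger values indicate greater risk. Liu does not fix a combining formula, so the sketch below is illustrative only; the names are hypothetical and the maximum is assumed as one plausible combination:

```python
# Minimal sketch; names are hypothetical, combination rule assumed.

def general_threat_number(atn, stn):
    """Combine constituent threat numbers (ATN, STN) into a general TN.
    Liu leaves the combination open; the maximum is used here for illustration."""
    return max(atn, stn)

def collision_risk_indicated(tn):
    # A threat number of zero indicates no risk; greater than zero, some risk,
    # the risk being greater the greater the threat number.
    return tn > 0
```

Under such a scheme, an indicated risk would trigger the intervention or the recommended braking, steering, or accelerating action that Liu describes.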
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of determining, by the path planning module, a new path for the autonomous vehicle that comprises an increased distance between the autonomous vehicle and the first object when the historical corresponding behavior is associated to a variability score above a threshold of Liu, with a reasonable expectation of success, in order to precisely evaluate a possible collision and minimize the intervention needed to avoid it (see at least Liu, para. [0011]). Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Levinson, in view of Li, in view of Hutcheson, in view of Moshchuk. As per claim 19 Ghafarianzadeh does not explicitly disclose wherein the autonomous vehicle is configured to make a lurching movement prior to the autonomous vehicle moving on an aggressive motion path, and the aggressive motion path is configured to be executed responsive to a communication with the first object by a remote cloud device. Moshchuk teaches wherein the autonomous vehicle is configured to make a lurching movement prior to the autonomous vehicle moving on an aggressive motion path, and the aggressive motion path is configured to be executed responsive to a communication with the first object by a remote cloud device (see at least Moshchuk, para. [0024]: The vehicle dynamics measurement device(s) may measure driver input or vehicle dynamics parameters including lateral (i.e., angular or centripetal) acceleration, longitudinal acceleration, lateral jerk (e.g., rate of change of lateral acceleration, jolt, surge, lurch), longitudinal jerk, steering angle, steering torque, steering direction, yaw-rate, lateral and longitudinal velocity, wheel rotation velocity and acceleration, and other vehicle dynamics characteristics of vehicle 10. 
The measured vehicle dynamics, vehicle conditions, steering measurements, steering conditions, or driver input information may be sent or transferred to system 100 via, for example, a wire link 40 (e.g., a controller area network (CAN) bus, Flex ray bus, Ethernet cable) or a wireless link. The measured vehicle dynamics, vehicle conditions, steering measurements, steering conditions, or driver input information data may be used by system 100 or another system to calculate optimal or desired path curvature, optimal or desired vehicle path, and/or other parameters. & para. [0032-0033]: For example, a vehicle 10 may approach or encounter an object 202 (e.g., a vehicle, stationary object, or other obstacle of object or vehicle width 204) in the road. If vehicle 10 is within a predefined distance to the object 202 that poses a collision threat, within a predefined velocity range, and/or within a predefined acceleration range, system 100 or other systems associated with vehicle 10 may, for example, provide pre-collision preparation and/or warnings to the driver of vehicle 10. The warnings to driver of vehicle may be a signal, for example, an audible warning, a warning light or other form of warning. If the driver does not mitigate the collision threat, collision avoidance control system 100 may control vehicle 10, for example, through automated steering control, automated braking, and/or other controls or maneuvers in order to avoid obstacle 202 or mitigate the impact between vehicle 10 and object 202.) 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the autonomous vehicle is configured to make a lurching movement prior to the autonomous vehicle moving on an aggressive motion path, and the aggressive motion path is configured to be executed responsive to a communication with the first object by a remote cloud device of Moshchuk, with a reasonable expectation of success, in order to avoid obstacle 202 or mitigate the impact between vehicle 10 and object 202 (see at least Moshchuk, para. [0032]). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Levinson, in view of Li, in view of Hutcheson, further in view of Micks. As per claim 20 Ghafarianzadeh does not explicitly disclose wherein the autonomous vehicle is configured to recognize the one or more objects and increase a physical space from the identified one or more objects when the autonomous vehicle is in an intersection with a history of accident. However Micks teaches wherein the autonomous vehicle is configured to recognize the one or more objects and increase a physical space from the identified one or more objects when the autonomous vehicle is in an intersection with a history of accident (see at least Micks, para. [0036-0037]: In one embodiment, the transceiver 118 may also be used to transmit information to other vehicles to potentially assist them in locating vehicles or objects. 
During V2V communication the transceiver 118 may receive information from other vehicles about their locations, previous locations or states, other traffic, accidents, road conditions, the locations of parking barriers or parking chocks, or any other details that may assist the vehicle and/or automated driving/assistance system 102 in driving accurately or safely.; Thus, the automated driving/assistance system 102 may be able to determine a distance from the infrastructure transceivers based on the time stamp and then determine its location based on the location of the infrastructure transceivers…V2X communication may also be used to provide information about locations of other vehicles, their previous states, or the like. For example, V2X communications may include information about how long a vehicle has been stopped or waiting at an intersection & para. [0047-0048]: For example, the vehicle 302, or a driver intent component 104 of the vehicle 302, may determine that a blinker for the vehicle 304 is off or on and may determine a direction (e.g., left or right) that corresponds to the blinker. The vehicle 302 may infer an intention of the driver of the vehicle 304 based on the state of the turn signal indicator. Based on the inferred intent, the vehicle 302 may slow down, speed up, and/or turn to avoid a potential collision; the vehicle 302 may obtain information from a stored map, stored driving history, or from wireless signals. For example, an infrastructure transmitter 306 is shown near the road 300, which may provide specific positioning, environmental attribute details, or other information to the vehicle 302. As further examples, the vehicle 302 may receive information from other vehicles, such as vehicle 304, or from a wireless communication network, such as a mobile communication network. & para. [0074]: The driving maneuver may determine a driving path to avoid collision with the other vehicles in case they perform the predicted driving maneuvers. 
For example, the driving maneuver component 518 may determine whether to decelerate, accelerate, and/or turn a steering wheel of the parent vehicle. In one embodiment, the driving maneuver component 518 may determine a timing for the driving maneuver. For example, the driving maneuver component 518 may determine that a parent vehicle should wait at an intersection for a period of time because another vehicle is likely to proceed through the intersection during that time period.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the autonomous vehicle is configured to recognize the one or more objects and increase a physical space from the identified one or more objects when the autonomous vehicle is in an intersection with a history of accident of Micks, with a reasonable expectation of success, in order to assist the vehicle and/or automated driving/assistance system 102 in driving accurately or safely (see at least Micks, para. [0036]). Claims 21-22 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of US 2019/0113927A1 (“Englard”). As per claim 21 Ghafarianzadeh does not explicitly disclose wherein the aggressive driving mode is enabled on a condition that there are a predetermined number of minimally viable paths. Englard teaches wherein the aggressive driving mode is enabled on a condition that there are a predetermined number of minimally viable paths (see at least Englard, para. [0160]: “A*” refers to a type of search algorithm, generally known in the art, that is used by the motion planner 646 to generate trajectories or paths. 
A continuous A* planning technique may be used to search a discrete state space and generate a substantially continuous path between a starting point and a destination or between two waypoints along a route. In alternative embodiments, the motion planner 646 may use a discrete A* algorithm to search a discrete state space, where the state space may correspond to the cells of a cost map (or otherwise be derived from the cost map cells), and the A* algorithm may generate a cell-by-cell discrete path through a grid rather than a continuous path. In either case, the current position of the autonomous vehicle may serve as the starting point or “node” for the trajectory/path determination and, in some embodiments and/or scenarios, a desired interim destination of the vehicle (e.g., a next waypoint along a route) may serve as the ending point/node for the trajectory/path determination…& para. [0194]: In one embodiment, for example, block 924 includes dynamically selecting from among the candidate decisions based on the current state of a signal indicating a particular driving style (e.g., aggressive/fast, or smooth with low G-force levels, etc.). The current state may be determined or set by a user (e.g., a passenger) manually selecting that driving style, or by way of an automated selection or setting. In such embodiments, block 924 may include selecting the candidate decision generated by an SDCA that is known to make driving decisions in accordance with the driving style (e.g., based on testing of G-force levels in a vehicle controlled entirely by that single SDCA, or based on a known design strategy for the SDCA).). 
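Englard's paragraph [0160] describes a discrete A* search whose state space corresponds to the cells of a cost map, producing a cell-by-cell path through a grid from the vehicle's current cell to a waypoint. A minimal sketch of such a search follows; the names are hypothetical and a Manhattan-distance heuristic is assumed for illustration:

```python
import heapq

# Minimal sketch of a discrete A* search over a cost-map grid.
# cost_map is a 2-D list of per-cell occupancy costs; start and goal
# are (row, col) cells. All names are hypothetical.

def astar(cost_map, start, goal):
    rows, cols = len(cost_map), len(cost_map[0])

    def h(cell):
        # Manhattan-distance heuristic to the goal cell (admissible
        # when every cell cost is at least 1, as assumed here).
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    best = {start: 0}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path  # cell-by-cell discrete path through the grid
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost_map[nr][nc]  # cost of occupying the next cell
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None  # no path through the grid
```

On a grid where one cell carries a high occupancy cost (e.g., a predicted object position), the search routes the cell-by-cell path around that cell, which is the behavior the cost-map formulation is meant to produce.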
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the aggressive driving mode is enabled on a condition that there are a predetermined number of minimally viable paths of Englard, with a reasonable expectation of success, in order to improve safety and/or other performance aspects of the autonomous vehicle (see at least Englard, para. [0097]). As per claim 22 Ghafarianzadeh does not explicitly disclose the identification module is further configured to identify a predetermined number of objects; the object trajectory module is further configured to identify predicted paths of the predetermined number of objects; and the predicted paths of the predetermined number of objects includes a confidence level above a threshold. Levinson teaches the identification module is further configured to identify a predetermined number of objects (see at least Levinson, para. [0075]: At 506, data representing objects based on at least two subsets of sensor data may be derived at a processor); the object trajectory module is further configured to identify predicted paths of the predetermined number of objects (see at least Levinson, para. [0148]: Further, dynamic object data modeler 3621 may generate a data model describing predictive motion of object 3682b in relation to interactions with other dynamic objects, such as dynamic object 3680 or dynamic object 3682a, which is shown as a dog in motion. In the absence of dynamic object 3682a, dog 3682b may be associated with a first probability of engaging in an activity (e.g., leaping forward and running). However, in the event that dog 3682b encounters or interacts with (or chases) dog 3682a (having a predicted range of motion 3683), the probability that dog 3682b engages in the activity may increase sharply. 
For instance, the probability that dog 3682b leaps forward and instinctively chases dog 3682a may increase from about 10% (e.g., based on, for example, logged data) to about 85%.); and the predicted paths of the predetermined number of objects includes a confidence level above a threshold (see at least Levinson, para. [0148]: Further, dynamic object data modeler 3621 may generate a data model describing predictive motion of object 3682b in relation to interactions with other dynamic objects, such as dynamic object 3680 or dynamic object 3682a, which is shown as a dog in motion. In the absence of dynamic object 3682a, dog 3682b may be associated with a first probability of engaging in an activity (e.g., leaping forward and running). However, in the event that dog 3682b encounters or interacts with (or chases) dog 3682a (having a predicted range of motion 3683), the probability that dog 3682b engages in the activity may increase sharply. For instance, the probability that dog 3682b leaps forward and instinctively chases dog 3682a may increase from about 10% (e.g., based on, for example, logged data) to about 85%.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of the identification module is further configured to identify a predetermined number of objects; the object trajectory module is further configured to identify predicted paths of the predetermined number of objects; and the predicted paths of the predetermined number of objects includes a confidence level above a threshold of Levinson, with a reasonable expectation of success, in order to provide a comfortable user experience (e.g., comfortable ride) (see at least Levinson, para. [0075]). 
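Levinson's paragraph [0148] describes an activity probability that jumps from about 10% (e.g., from logged data) to about 85% when the object interacts with another dynamic object, and the claim 22 mapping relies on predicted paths whose confidence level exceeds a threshold. A minimal sketch of both steps; the names are hypothetical and the 0.5 threshold is assumed for illustration:

```python
# Minimal sketch; probabilities taken from the quoted example, names hypothetical.

BASELINE_PROBABILITY = 0.10     # e.g., from logged data, no interaction
INTERACTING_PROBABILITY = 0.85  # object is chasing another dynamic object

def activity_probability(interacting):
    """Probability that the object engages in the activity (e.g., leaping
    forward and running), conditioned on interaction with another object."""
    return INTERACTING_PROBABILITY if interacting else BASELINE_PROBABILITY

def paths_above_threshold(predicted_paths, threshold=0.5):
    """Keep only predicted paths whose confidence level exceeds the threshold.
    predicted_paths: iterable of (path, confidence) pairs."""
    return [path for path, confidence in predicted_paths if confidence > threshold]
```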
As per claim 24 Ghafarianzadeh does not explicitly disclose wherein the safety score of the environment is based on scores for objects and cost functions for the plurality of paths. Englard teaches wherein the safety score of the environment is based on scores for objects and cost functions for the plurality of paths (see at least Englard, para. [0199]: Alternatively, the arbitration machine learning model may be trained using supervised learning, with labels, weights, or scores indicating which SDCA generated the “best” candidate decisions in various situations, according to some suitable criteria. & para. [0213]: At block 968, cost maps are generated based on the observed occupancy grid, the predicted occupancy grid(s), and the navigation data. Each cost map specifies numerical values representing a cost, at a respective instance of time, of occupying certain cells in a two-dimensional representation of the environment (e.g., in an overhead view). The numerical value, or “cost,” for a given cell of the cost map grid (for a cost map corresponding to time t) may represent a risk associated with the autonomous vehicle being in the area of the environment represented by that cell at time t… The numerical values/risks for particular cells may be determined based on the occupancy grids (e.g., current and expected object positions, object types, etc.), the received navigation data (e.g., waypoints indicating the desired route of the autonomous vehicle), and possibly other information (e.g., operational parameters of the autonomous vehicle, detected or predicted behaviors of other objects, etc.).). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the safety score of the environment is based on scores for objects and cost functions for the plurality of paths of Englard, with a reasonable expectation of success, in order to improve safety and/or other performance aspects of the autonomous vehicle (see at least Englard, para. [0097]). As per claim 25 Ghafarianzadeh does not explicitly disclose wherein the safety score of the environment is based on a number of recognized objects in the environment and a number of predicted objects in the environment. Englard teaches wherein the safety score of the environment is based on a number of recognized objects in the environment and a number of predicted objects in the environment (see at least Englard, para. [0111]: As indicated above, the lidar system 300 may be used to determine the distance to one or more downrange targets 330. By scanning the lidar system 300 across a field of regard, the system can be used to map the distance to a number of points within the field of regard. Each of these depth-mapped points may be referred to as a pixel or a voxel. A collection of pixels captured in succession (which may be referred to as a depth map, a point cloud, or a point cloud frame) may be rendered as an image or may be analyzed to identify or detect objects or to determine a shape or distance of objects within the field of regard. & para. [0134]: For example, each cell may be associated with one or more values. One such value may correspond to a classification (determined by classification module 512). 
If a cell is within an area of the occupancy grid that corresponds to an object that has been classified as a pedestrian, for example, the cell (and all other cells corresponding to that same pedestrian) may be associated with the class “pedestrian.” In some embodiments, each such cell is associated with data that uniquely identifies a particular instance within the determined class (e.g., the data string “PED01” to uniquely identify a specific pedestrian within the sensed environment). Cells for which no classification was obtained, and/or cells that do not include any identified object (e.g., due to a low density of points in a particular area of a lidar point cloud), may have special indicators, such as “CLASS?” or “N/A,” for example.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the safety score of the environment is based on a number of recognized objects in the environment and a number of predicted objects in the environment of Englard, with a reasonable expectation of success, in order to improve safety and/or other performance aspects of the autonomous vehicle (see at least Englard, para. [0097]). Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of US 2021/0155269A1 (“Oba”). As per claim 27 Ghafarianzadeh does not explicitly disclose wherein the aggressive driving mode is enabled based on an availability of a human driver to assume control of the autonomous vehicle. Oba teaches wherein the aggressive driving mode is enabled based on an availability of a human driver to assume control of the autonomous vehicle (see at least Oba, Fig. 2 & para. 
[0072-0073]: However, in the high-speed automatic driving permissible section 70, switching from the automatic driving to the manual driving is necessary when an emergency occurs such as an accident. In this case, the driver needs to perform high-speed manual driving. For example, as illustrated in FIG. 2, a section near an accident occurrence point 71 is set as a manual driving required section 72.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the aggressive driving mode is enabled based on an availability of a human driver to assume control of the autonomous vehicle of Oba, with a reasonable expectation of success, in order to improve the infrastructure such that there is a configuration that can reliably be sensed by a sensor of the automatic driving vehicle (see at least Oba, para. [0005]).

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Ghafarianzadeh, in view of Li, in view of Levinson, in view of Hutcheson, in view of US 2019/0049954A1 (“Mitchell”).

As per claim 28, Ghafarianzadeh does not explicitly disclose wherein the aggressive driving mode is enabled after the autonomous vehicle enters a supervisory mode in which the autonomous vehicle is controlled by a remote driving center. Mitchell teaches wherein the aggressive driving mode is enabled after the autonomous vehicle enters a supervisory mode in which the autonomous vehicle is controlled by a remote driving center (see at least Mitchell, para. [0045-0048]: Additionally, the vehicle safety management component 180 could collect data on the drivers of the vehicles, such as how many traffic offenses the driver has had, how many emergency actions the driver has taken while operating the vehicle (e.g., slamming on the brakes to avoid a collision).
The vehicle safety management component 180 could then generate a data model that correlates driving behaviors with driver skill. For example, the vehicle safety management component 180 could train a neural network that accepts as inputs a number of driver attributes (e.g., data values collected or derived from a user operating a vehicle) and outputs a predicted driving skill of the user. The vehicle safety management component 180 could then use the neural network to evaluate a particular driver's skill, as the driver operates a vehicle. Other examples include, without limitation, the vehicle safety management component 180 training and using a machine learning classifier to classify the driver into one or more skill categories (e.g., good driver, average driver, poor driver) based on driver attributes of the driver, a machine learning regression model to output an estimated driving skill score for a driver, based on driver attributes, and so on. More generally, any suitable machine learning or data modelling technique can be used, consistent with the functionality described herein… Additionally, the vehicle safety management component 180 can determine whether a minimum level of data communication is available on the data communication networks, in order to enable a supervisory driver assist operational mode (block 465). That is, the vehicle safety management component 180 can determine whether a limited autonomous driving operational mode is appropriate, given the current network conditions (block 475). As an example, such a limited autonomous driving operational mode could maintain autonomous driving functions, but could instruct the user to remain alert to current road conditions. In such an operational mode, the vehicle safety management component 180 could disable select media functionality (e.g., video playback), to help ensure the user remains alert to the current road conditions. 
Additionally, the vehicle safety management component 180 could require the operator of the vehicle to acknowledge (e.g., verbally, by pressing a button on a touchscreen device, etc.) the transition to the limited autonomous driving operational mode.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghafarianzadeh to incorporate the teaching of wherein the aggressive driving mode is enabled after the autonomous vehicle enters a supervisory mode in which the autonomous vehicle is controlled by a remote driving center of Mitchell, with a reasonable expectation of success, so that the autonomous driving systems within the autonomous vehicle may operate more efficiently under ideal weather conditions (see at least Mitchell, para. [0043]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABDO ALGEHAIM whose telephone number is (571)272-3628. The examiner can normally be reached Monday-Friday 8-5PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fadey Jabr, can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOHAMED ABDO ALGEHAIM/Primary Examiner, Art Unit 3668
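The occupancy-grid labeling described in the Englard passage cited against claim 25 (para. [0134]), in which each grid cell carries a classification plus a unique instance identifier such as “PED01”, and unclassified cells read “CLASS?” or “N/A”, can be sketched in a few lines. This is only an illustrative reconstruction of the idea; the data structure and helper names are hypothetical, not Englard's implementation.

```python
# Minimal sketch of per-cell classification in an occupancy grid,
# following the labeling scheme quoted from Englard para. [0134].
from dataclasses import dataclass

@dataclass
class Cell:
    cls: str = "CLASS?"       # classification, e.g. "pedestrian"
    instance_id: str = "N/A"  # unique instance tag, e.g. "PED01"

def label_object(grid, cells, cls, instance_id):
    """Assign the same class/instance ID to every cell one object covers."""
    for (row, col) in cells:
        grid[row][col] = Cell(cls, instance_id)

# 4x4 grid, initially unclassified
grid = [[Cell() for _ in range(4)] for _ in range(4)]

# A detected pedestrian covering two adjacent cells: both cells get the
# class "pedestrian" and the shared instance ID "PED01".
label_object(grid, [(1, 1), (1, 2)], "pedestrian", "PED01")

print(grid[1][1].cls, grid[1][1].instance_id)  # pedestrian PED01
print(grid[0][0].cls)                          # CLASS?
```

The point of the instance ID is that all cells belonging to one physical object can be grouped, which is what lets a downstream safety score count recognized objects rather than occupied cells.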

Prosecution Timeline

Aug 31, 2019
Application Filed
Nov 12, 2019
Non-Final Rejection — §103
Jan 25, 2020
Response Filed
Apr 13, 2020
Final Rejection — §103
Aug 14, 2020
Request for Continued Examination
Aug 14, 2020
Response after Non-Final Action
Aug 17, 2020
Response after Non-Final Action
Mar 26, 2021
Non-Final Rejection — §103
Aug 30, 2021
Response Filed
Sep 22, 2021
Final Rejection — §103
Feb 23, 2022
Request for Continued Examination
Feb 28, 2022
Response after Non-Final Action
Mar 25, 2022
Non-Final Rejection — §103
Jul 29, 2022
Response Filed
Sep 22, 2022
Final Rejection — §103
Mar 27, 2023
Request for Continued Examination
Mar 28, 2023
Response after Non-Final Action
Apr 08, 2023
Non-Final Rejection — §103
Oct 11, 2023
Response Filed
Oct 20, 2023
Final Rejection — §103
Apr 23, 2024
Request for Continued Examination
Apr 25, 2024
Response after Non-Final Action
May 04, 2024
Non-Final Rejection — §103
Nov 07, 2024
Response Filed
Mar 02, 2025
Response after Non-Final Action
Apr 07, 2025
Request for Continued Examination
Apr 08, 2025
Response after Non-Final Action
May 01, 2025
Non-Final Rejection — §103
Sep 02, 2025
Response Filed
Sep 29, 2025
Final Rejection — §103
Dec 01, 2025
Response after Non-Final Action
Feb 02, 2026
Request for Continued Examination
Feb 05, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594963
DETECTING AN UNKNOWN OBJECT BY A LEAD AUTONOMOUS VEHICLE (AV) AND UPDATING ROUTING PLANS FOR FOLLOWING AVs
2y 5m to grant Granted Apr 07, 2026
Patent 12597865
INVERTER
2y 5m to grant Granted Apr 07, 2026
Patent 12589978
TRUCK-TABLET INTERFACE
2y 5m to grant Granted Mar 31, 2026
Patent 12565235
DETECTING A CONSTRUCTION ZONE BY A LEAD AUTONOMOUS VEHICLE (AV) AND UPDATING ROUTING PLANS FOR FOLLOWING AVs
2y 5m to grant Granted Mar 03, 2026
Patent 12559228
THERMAL MANAGEMENT SYSTEM FOR AN AIRCRAFT INCLUDING AN ELECTRIC PROPULSION ENGINE
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

12-13
Expected OA Rounds
59%
Grant Probability
81%
With Interview (+21.9%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 207 resolved cases by this examiner. Grant probability derived from career allow rate.
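The headline projections above are simple arithmetic on the examiner's career figures. Assuming the interview lift is additive in percentage points (which matches the displayed 81%), they can be reproduced as:

```python
# Reproduce the page's projections from the underlying examiner data.
granted, resolved = 122, 207          # career grants / resolved cases
base_grant_probability = granted / resolved
interview_lift = 0.219                # lift observed in cases with interview

# Additive-lift assumption: with-interview probability is base + lift.
with_interview = base_grant_probability + interview_lift

print(f"{base_grant_probability:.0%}")  # 59%
print(f"{with_interview:.0%}")          # 81%
```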
