DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9 October 2025 has been entered.
Summary
The Amendment filed on 9 October 2025 has been acknowledged.
Claims 1 – 16 and 20 have been amended.
Claim 21 is newly presented.
Currently, claims 1 – 21 are pending and are considered as set forth below.
Response to Arguments
Applicant’s arguments with respect to claims 1 – 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 – 2, 6 – 11, 13 – 14, 16 – 17 and 19 – 21 are rejected under 35 U.S.C. 103 as being unpatentable over Tiwari et al. (Hereinafter Tiwari) (US 2018/0154899) in view of Packer et al. (Hereinafter Packer) and in further view of Google Cloud Blog (Hereinafter Google) (https://cloud.google.com/blog/products/ai-machine-learning/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu).
As per claim 1, Tiwari discloses a vehicle controller configured for use in a propelled vehicle comprising sensors including a geolocation sensor and object-detection sensors, the vehicle controller being configured to receive and process inputs derived from the geolocation sensor and the object-detection sensors (See at least paragraph 26 – 27 and 37 – 38; As shown in FIG. 2, an example of the system 100 for controlling an automotive vehicle can include: a perception module 110 including a sensor subsystem 111, a lane detection block 112, a lane tracking block 113, an object detection block 114, an object tracking block 115, a state estimation block 116, and a cost mapping block 117, wherein the perception module outputs a localization of the vehicle 1101, a cost map of the area proximal the vehicle 1102, and traffic data associated with traffic objects proximal the vehicle 1103. … system modules can include any of a: process-driven module (e.g., equation based module, differential equation module, etc.), fuzzy network module, clustering module, unsupervised machine learning module (e.g., artificial neural network, association rule learning, hierarchical clustering, cluster analysis, outlier detection, convolutional neural network/CNN, etc.), supervised learning module (e.g., artificial neural network, association rule learning, hierarchical clustering, cluster analysis, outlier detection, convolutional neural network/CNN, etc.), semi-supervised learning module, deep learning module, and/or any other suitable module leveraging any other suitable machine learning method, probabilistic approach, heuristic approach, deterministic approach, and/or any combination thereof. The inputs and/or features (e.g., parameters used in an equation, features used in a machine learning model, factors used in a CNN, etc.) used in a module can be determined through a sensitivity analysis, received from other modules (e.g., as outputs), received from a user account (e.g., from the vehicle operator, from equipment associated with a fleet manager of a set of vehicles, etc.), automatically retrieved (e.g., from an online database, received through a subscription to a data source, etc.), extracted from sampled sensor signals (e.g., images, etc.), determined from a series of sensor signals (e.g., signal changes over time, signal patterns, etc.), and/or otherwise determined. … Examples of sensors and/or data sources on-board the vehicle include: an inertial measurement unit (IMU), ultrasonic sensors, a data port (e.g., on-board diagnostic module port/OBD port), GPS sensor(s) and modules, cameras (e.g., stereoscopic cameras, single lens cameras, etc.), navigation sensors (e.g., LiDAR, radar, ToF, etc.), position sensors (e.g., actuator position sensors, LVDT sensors, etc.), encoders (e.g., rotary encoders that measure the angular position and/or velocity of rotary actuators of the vehicle and/or vehicle control systems), and any other suitable on-board sensors. Sensor data can be pre-processed by the sensor subsystem in and/or computing systems in communication with the sensor subsystem 111, prior to provision to the decision-making block and/or other modules of the vehicle control system. Pre-processing can be performed on camera data, GPS data, radar and/or other range data, and any other suitable data.
For example, camera data (e.g., images) can be processed to extract lane markings, vehicle objects, visual landmarks (e.g., stop signs, pedestrian crossing signs, buildings, etc.), and any other suitable features. In another example, GPS coordinates can be combined with a map of the vehicle route (e.g., retrieved from a remote server, stored in an onboard database, etc.) to provide navigation data.);
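For illustration only, and not as part of the rejection, the following Python sketch shows the kind of pre-processing Tiwari describes at paragraph 38, in which GPS coordinates are combined with a stored route map to provide navigation data. The route representation, the function names, and the flat-Earth distance approximation are illustrative assumptions, not material taken from the reference.

```python
import math

# Illustrative sketch: combine a GPS fix with a stored route map to produce
# navigation data (localization on the route plus heading to the next
# waypoint). All names and the equirectangular distance approximation are
# assumptions for illustration only.

def nearest_route_index(gps, route):
    """Return the index of the route waypoint closest to the GPS fix."""
    lat, lon = gps
    def dist_sq(wp):
        dlat = wp[0] - lat
        dlon = (wp[1] - lon) * math.cos(math.radians(lat))
        return dlat * dlat + dlon * dlon
    return min(range(len(route)), key=lambda i: dist_sq(route[i]))

def navigation_data(gps, route):
    """Localize the vehicle on the route and compute heading to the next waypoint."""
    i = nearest_route_index(gps, route)
    j = min(i + 1, len(route) - 1)
    dlat = route[j][0] - route[i][0]
    dlon = route[j][1] - route[i][1]
    heading = math.degrees(math.atan2(dlon, dlat)) % 360.0
    return {"route_index": i, "heading_to_next_deg": heading}

route = [(37.4219, -122.0841), (37.4222, -122.0838), (37.4227, -122.0834)]
print(navigation_data((37.4221, -122.0839), route))
```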
However, Tiwari does not explicitly teach the limitations of:
a planner to plan vehicle paths based on one or more occupancy grids to track the vehicle and objects within a range of the vehicle; and
an arbitrator to arbitrate between paths.
Packer teaches the limitations of:
a planner to plan vehicle paths based on one or more occupancy grids to track the vehicle and objects within a range of the vehicle (See at least abstract and column 2 line 1 – 47; A vehicle computing system may implement techniques to predict behavior of objects detected by a vehicle operating in the environment. The techniques may include determining a feature with respect to a detected objects (e.g., likelihood that the detected object will impact operation of the vehicle) and/or a location of the vehicle and determining based on the feature a model to use to predict behavior (e.g., estimated states) of proximate objects (e.g., the detected object). The model may be configured to use one or more algorithms, classifiers, and/or computational resources to predict the behavior. Different models may be used to predict behavior of different objects and/or regions in the environment. Each model may receive sensor data as an input, and output predicted behavior for the detected object. Based on the predicted behavior of the object, a vehicle computing system may control operation of the vehicle. … This application describes techniques for applying different models for different objects and/or regions in an environment. In some examples, certain models may employ more computational resources (e.g., allocate more memory, processor cycles, processes, and/or more sophisticated/complex prediction algorithms) than others. For example, more fine-grained pedestrian behaviors may rely on models which take pedestrian hand-signals, eye-gaze, head directions, and the like into account. Such models may be necessary for interacting with a pedestrian in a crosswalk, though not for other pedestrian interactions. In some examples, a model used for a particular object or region may be chosen based on a feature of the particular object or region. By way of example and not limitation, features of an object or region may comprise one or more of a likelihood of the object (or objects in the region) to impact operation of the vehicle (e.g., potential for collision or to cause the vehicle to change trajectory to avoid the object), a classification of the object (e.g., pedestrian, bicycle, vehicle, etc.) or region (e.g., occupied, occluded, etc.), a location of the object or region in the environment relative to the vehicle, a proximity of the object or region to the vehicle, a planned path of the vehicle, or a proximity of the object or region to another object and/or position in the environment, or other factors. The model chosen may direct available computational resources to the most relevant objects during vehicle planning thereby improving vehicle safety as the vehicle navigates in the environment. In addition, by dedicating less computational resources to less relevant objects, models have more computational resources available to devote to behavior predictions for the most relevant objects. Models used by an autonomous driving vehicle as described herein may be designed to perform different levels of prediction processing analogous to how a driver of a traditional vehicle pays attention while driving. 
For example, models may be designed to devote more computational resources to objects near certain areas (e.g., crosswalks, rows of parked cars, narrow roads, intersections, or school zones, to just name a few), objects of certain types (e.g., groups of pedestrians, skateboarders, kids, animals, etc.), objects that are behaving unpredictably or uncharacteristically (e.g., vehicles or pedestrians violating traffic laws, moving erratically, etc.), and/or unrecognized objects, than to other objects or regions. In some examples, models may devote fewer computational resources to objects that are moving away from the vehicle, are located behind the vehicle, are remote from a planned path of the vehicle, are moving slowly, and/or are otherwise unlikely to impact operation of the vehicle.); and
an arbitrator to arbitrate between paths (See at least abstract).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include a planner to plan vehicle paths based on one or more occupancy grids to track the vehicle and objects within a range of the vehicle; and an arbitrator to arbitrate between paths as taught by Packer in the system of Tiwari, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
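For illustration only, and not as part of the rejection, the following sketch shows the claimed arrangement in miniature: a planner that vets candidate paths against an occupancy grid tracking objects within a range of the vehicle, and an arbitrator that selects among the surviving paths. The grid layout, cost function, and all names are illustrative assumptions; nothing below is quoted from Tiwari or Packer.

```python
import numpy as np

# Illustrative sketch: plan paths over an occupancy grid, then arbitrate.
# grid[row, col] == 1 marks a cell occupied by a tracked object.

def path_is_free(path, grid):
    """A candidate path is feasible if it crosses no occupied cell."""
    return all(grid[r, c] == 0 for r, c in path)

def arbitrate(paths, grid, nominal):
    """Among collision-free paths, prefer the one closest to the nominal path."""
    free = [p for p in paths if path_is_free(p, grid)]
    if not free:
        return None  # no safe path; a real system would fall back to braking
    deviation = lambda p: sum(abs(c - nc) for (_, c), (_, nc) in zip(p, nominal))
    return min(free, key=deviation)

grid = np.zeros((5, 5), dtype=int)
grid[2, 2] = 1  # one tracked object within range of the vehicle

nominal = [(0, 2), (1, 2), (2, 2), (3, 2), (4, 2)]  # blocked mid-way
left    = [(0, 2), (1, 1), (2, 1), (3, 1), (4, 2)]  # swerve left
right   = [(0, 2), (1, 3), (2, 3), (3, 3), (4, 2)]  # swerve right
print(arbitrate([nominal, left, right], grid, nominal))
```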
The combination of Tiwari and Packer does not teach the limitations of:
at least one processing system providing one or more hardware-based deep neural network accelerators including at least one tensor core, the at least one processing system configured to provide structure; and at least one tensor core of the one or more hardware-based deep learning neural network accelerators.
Google teaches the limitations of:
at least one processing system providing one or more hardware-based deep neural network accelerators including at least one tensor core; and at least one tensor core of the one or more hardware-based deep learning neural network accelerators (See at least page 1 paragraph 1 – 2 and 4 - 5; There’s a common thread that connects Google services such as Google Search, Street View, Google Photos and Google Translate: they all use Google’s Tensor Processing Unit, or TPU, to accelerate their neural network computations behind the scenes. We announced the TPU last year and recently followed up with a detailed study of its performance and architecture. In short, we found that the TPU delivered 15–30X higher performance and 30–80X higher performance-per-watt than contemporary CPUs and GPUs. These advantages help many of Google’s services run state-of-the-art neural networks at scale and at an affordable cost. In this post, we’ll take an in-depth look at the technology inside the Google TPU and discuss how it delivers such outstanding performance. Usually, ASIC development takes several years. In the case of the TPU, however, we designed, verified, built and deployed the processor to our data centers in just 15 months. Norm Jouppi, the tech lead for the TPU project (also one of the principal architects of the MIPS processor) described the sprint this way. … The TPU ASIC is built on a 28nm process, runs at 700MHz and consumes 40W when running. Because we needed to deploy the TPU to Google's existing servers as fast as possible, we chose to package the processor as an external accelerator card that fits into an SATA hard disk slot for drop-in installation.).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tiwari and Packer, which uses a vehicle controller to control the vehicle based on behavior planning and lane management via neural network processing, to include at least one processing system providing one or more hardware-based deep neural network accelerators including at least one tensor core; and at least one tensor core of the one or more hardware-based deep learning neural network accelerators, as taught by Google, in order to perform neural network computations faster than a CPU or GPU (Google, page 1).
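For illustration only, the Google reference attributes the TPU's performance largely to low-precision matrix multiplication performed by its matrix unit. The following NumPy emulation shows the general idea of quantized matrix multiplication with a wide accumulator; the scales and names are illustrative assumptions, not the TPU's actual design.

```python
import numpy as np

# Illustrative sketch: emulate 8-bit quantized matrix multiplication with a
# 32-bit accumulator, the style of arithmetic the Google post credits for
# the TPU's speed. Scales and names are assumptions for illustration only.

def quantize(x, scale):
    """Map float32 values to int8 using a simple symmetric scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def quantized_matmul(a, b, scale_a, scale_b):
    """Multiply int8 operands, accumulate in int32, then rescale to float."""
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)  # wide accumulator
    return acc.astype(np.float32) * (scale_a * scale_b)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 3)).astype(np.float32)
print(np.max(np.abs(a @ b - quantized_matmul(a, b, 0.05, 0.05))))  # small error
```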
As per claim 2, Tiwari discloses wherein the planner comprises a basic behavior planner and at least one advanced behavior planner (See at least paragraph 18 and 64; performing vehicle operation tasks and/or actions (e.g., lane changing, gear shifting, braking, etc.) according to deterministic models (e.g., hard-coded models, explicitly programmed sets of rules, a set of static computer-implemented rules, etc.) … The trajectory generator 132 preferably generates one or more trajectories as output(s), and provides the one or more trajectories to the decision-making block 135. The generated trajectories can be two-dimensional trajectories plotted from the current position of the vehicle to a desired future position of the vehicle, wherein the distance between the current position and the future position is determined based on the speed of the vehicle and the rate (e.g., frequency) at which the system module(s) and/or block(s) are executed. For example, a generated trajectory can correspond to the path designated for the vehicle to follow over the next 5 seconds, 15 seconds, 60 seconds, and any other suitable time period. Proximal in time (e.g., coincident with, contemporaneously with, within 1 second of, etc.) the end of the time period corresponding to the completion of travel via the generated trajectory, a new trajectory can be generated by the trajectory generator 132. However, the trajectory generator 132 can additionally or alternatively generate one or more trajectories in any suitable manner.) The Examiner notes that Tiwari can perform basic behavior planning, such as basic braking, acceleration, and lane changes where possible, and advanced behavior planning, such as comparing a set of trajectories to find the best choice based on the objects around the vehicle.
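For illustration only, the following sketch reflects the two planning layers the Examiner reads onto Tiwari: a basic, rule-based behavior layer and an advanced layer that compares a set of candidate trajectories against nearby objects. All behaviors, names, and costs are illustrative assumptions.

```python
# Illustrative sketch: a basic rule-based behavior planner plus an advanced
# planner that scores candidate trajectories. Names and costs are
# assumptions for illustration only.

def basic_plan(lane_blocked, left_free, right_free):
    """Deterministic, rule-based behavior selection (the 'basic' layer)."""
    if not lane_blocked:
        return "keep_lane"
    if left_free:
        return "change_left"
    if right_free:
        return "change_right"
    return "brake"

def advanced_plan(candidates, obstacles, min_gap=2.0):
    """Score candidate trajectories against objects and pick the best.

    candidates: dict mapping behavior name -> list of (x, y) waypoints.
    obstacles: list of (x, y) object positions around the vehicle.
    """
    def cost(traj):
        gap = min(abs(x - ox) + abs(y - oy)
                  for x, y in traj for ox, oy in obstacles)
        return float("inf") if gap < min_gap else -gap  # prefer clearance
    return min(candidates, key=lambda name: cost(candidates[name]))

candidates = {
    "keep_lane":   [(0, 0), (5, 0), (10, 0)],
    "change_left": [(0, 0), (5, 3), (10, 3)],
}
print(basic_plan(lane_blocked=True, left_free=True, right_free=False))
print(advanced_plan(candidates, obstacles=[(10, 0)]))
```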
As per claim 6, the combination of Tiwari, Packer and Google discloses wherein the planner is configured to take a tracked lane graph with wait conditions and in-path objects as input, and to compare each path defined by the tracked lane graph with an output of the lane planner (Tiwari, see at least paragraph 41 and 44 – 45).
As per claim 7, the combination of Tiwari, Packer and Google discloses wherein the planner is further configured to start from edges of the tracked lane graph after a lane change to provide plural paths through the lane graph and to determine a path favored by a match to an output of the lane planner (Tiwari, see at least paragraph 41 – 42 and 56).
As per claim 8, the combination of Tiwari, Packer and Google discloses wherein the planner processes the plural paths for longitudinal control independently, with longitudinal constraints relevant for the respective path (Tiwari, see at least paragraph 46 and 56).
As per claim 9, the combination of Tiwari, Packer and Google discloses wherein the planner is configured to provide a speed-dependent spline interpolation between a main lane and a target lane to the behavior selector to represent a lane change (Tiwari, see at least paragraph 62 and 64 – 65).
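For illustration only, a speed-dependent spline interpolation between a main lane and a target lane, as claim 9 recites, can be sketched as follows. The cubic easing function and the speed-to-length rule are illustrative assumptions, not taken from Tiwari.

```python
# Illustrative sketch: represent a lane change as a speed-dependent spline
# blend from the main lane to the target lane. The cubic easing and the
# 3 m per (m/s) length rule are assumptions for illustration only.

def lane_change_path(speed_mps, lane_offset_m, step_m=5.0):
    """Return (s, lateral) samples easing from main lane to target lane.

    Higher speeds stretch the maneuver over a longer longitudinal distance,
    which keeps lateral acceleration comfortable.
    """
    length = max(20.0, 3.0 * speed_mps)  # longer lane change when faster
    n = int(length / step_m) + 1
    path = []
    for i in range(n):
        s = i * step_m
        t = min(s / length, 1.0)
        blend = 3 * t**2 - 2 * t**3  # cubic spline easing from 0 to 1
        path.append((s, blend * lane_offset_m))
    return path

for s, y in lane_change_path(speed_mps=25.0, lane_offset_m=3.5)[:4]:
    print(f"s={s:5.1f} m  lateral={y:4.2f} m")
```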
As per claim 10, the combination of Tiwari, Packer and Google discloses wherein the planner is configured to receive a dynamic occupancy grid as input and to generate a fan of paths around a nominal path for a behavior, and to use the dynamic occupancy grid to check the paths for safety and/or comfort and/or agreement with the nominal path, wherein the dynamic occupancy grid comprises a volumetric array having spatial and temporal dimensions (Tiwari, see at least paragraph 35 and 64).
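For illustration only, the dynamic occupancy grid of claim 10, a volumetric array with spatial and temporal dimensions used to vet a fan of paths around a nominal path, can be sketched as follows. The dimensions and names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: a dynamic occupancy grid as a volumetric array with
# one temporal and two spatial dimensions, used to check a fan of paths
# around a nominal path for safety. Dimensions are assumptions only.

T, H, W = 4, 10, 10
grid = np.zeros((T, H, W), dtype=bool)  # grid[t, row, col]: occupied at time t
grid[2, 5, 5] = True                    # an object predicted at cell (5, 5), t=2

def path_safe(path, grid):
    """A path is a list of (t, row, col) cells; safe if none is occupied."""
    return not any(grid[t, r, c] for t, r, c in path)

nominal = [(t, 3 + t, 5) for t in range(T)]  # passes through the occupied cell
fan = [[(t, 3 + t, 5 + off) for t in range(T)] for off in (-1, 0, 1)]
safe = [p for p in fan if path_safe(p, grid)]
print(f"{len(safe)} of {len(fan)} candidate paths clear the dynamic grid")
```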
As per claim 11, the combination of Tiwari, Packer and Google discloses wherein the planner is configured to use a lane graph including object poses and a target point and direction based on a route plan, and to map the target point and direction to a matching drivable point and direction in the lane graph (Tiwari, see at least paragraph 64 – 65).
As per claim 13, the combination of Tiwari, Packer and Google discloses wherein the planner is configured to use a map graph from a route map and a target node and current node and to search a graph to determine a route through the graph from the current node to the target node (Tiwari, see at least paragraph 34 and 36).
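For illustration only, the graph search claim 13 recites, determining a route through a map graph from the current node to the target node, is conventionally implemented with a shortest-path search such as Dijkstra's algorithm, sketched below. The graph and names are illustrative assumptions.

```python
import heapq

# Illustrative sketch: Dijkstra's shortest-path search over a route-map
# graph from the current node to the target node. The graph is an
# assumption for illustration only.

def shortest_route(graph, current, target):
    """graph: dict node -> list of (neighbor, edge_cost). Returns a node list."""
    frontier = [(0.0, current, [current])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == target:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, weight in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + weight, nbr, path + [nbr]))
    return None  # target unreachable from the current node

graph = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
print(shortest_route(graph, "A", "D"))  # ['A', 'B', 'C', 'D']
```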
As per claim 14, the combination of Tiwari, Packer and Google discloses wherein the planner is configured to perform hierarchical planning using a way-point from the map graph that is sufficiently far away from the current node (Tiwari, see at least abstract).
As per claim 16, the combination of Tiwari, Packer and Google discloses wherein the arbitrator is configured to choose and provide a forward lateral and longitudinal trajectory (Tiwari, see at least paragraph 34 – 35).
As per claim 17, the combination of Tiwari, Packer and Google discloses wherein the geolocation sensor includes a global positioning satellite receiver and the object-detection sensor comprises at least one of an optical sensor, a LIDAR sensor and/or a RADAR sensor (Tiwari, see at least paragraph 36 – 37).
As per claim 19, the combination of Tiwari, Packer and Google discloses wherein the processing system includes a system-on-a-chip comprising processing cores allocated to plural partitions executing high performance applications (Tiwari, see at least paragraph 78 – 79).
As per claim 20, the combination of Tiwari, Packer and Google discloses wherein the route planner cooperates with a stored world model and predicts behaviors of objects in the world model to plan routes through the world model (Tiwari, see at least paragraph 31 and 65, and Packer, see at least abstract and column 2 line 1 – 47).
As per claim 21, the combination of Tiwari, Packer and Google does not expressly disclose wherein the at least one tensor core comprises a mixed precision tensor core. Google discloses a tensor core, but does not specifically indicate that the tensor core is a mixed precision tensor core.
The Examiner notes that wherein the at least one tensor core comprises a mixed precision tensor core does not modify the operation of the method and system of the combination of Tiwari, Packer and Google, and to have modified the method and system of the combination of Tiwari, Packer and Google to have included at least one tensor core comprising a mixed precision tensor core would have been obvious to the skilled artisan, because the inclusion of such an element would have been an obvious matter of design choice in light of the method and system already disclosed by the combination of Tiwari, Packer and Google. Such modification would not have otherwise affected the method and system of the combination of Tiwari, Packer and Google and would have merely represented one of numerous steps or elements that the skilled artisan would have found obvious for the purposes already disclosed by the combination of Tiwari, Packer and Google. Additionally, applicant has not persuasively demonstrated the criticality of providing this element versus the elements disclosed by the combination of Tiwari, Packer and Google.
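For illustration only, "mixed precision" in a tensor core conventionally means multiplying low-precision operands (e.g., float16) while accumulating products at higher precision (e.g., float32). The NumPy emulation below shows that convention; it is an assumption about standard practice, not a description of any cited reference's design.

```python
import numpy as np

# Illustrative sketch: emulate mixed precision matrix multiplication in the
# conventional sense -- float16 operands, float32 accumulation. This is an
# assumption about standard practice, not a teaching of the cited art.

def mixed_precision_matmul(a, b):
    """Round inputs to float16, then multiply and accumulate in float32."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    return a16.astype(np.float32) @ b16.astype(np.float32)

rng = np.random.default_rng(1)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)
err = np.max(np.abs(a @ b - mixed_precision_matmul(a, b)))
print(f"max deviation from full float32 precision: {err:.4f}")
```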
Claims 3 – 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Tiwari, Packer and Google in further view of Redding et al. (Hereinafter Redding) (US 2018/0089563).
As per claim 3, the combination of Tiwari, Packer and Google teaches all the elements of the claimed invention but does not explicitly teach the element of: wherein the planner scores or determines feasibility of different behaviors.
Redding teaches the element of: wherein the planner scores or determines feasibility of different behaviors (See at least paragraph 6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include wherein the planner scores or determines feasibility of different behaviors as taught by Redding in the system of the combination of Tiwari, Packer and Google, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 4, the combination of Tiwari, Packer, Google and Redding discloses a matching component configured to match feasible behavior(s) planned by a behavior planner with high priority behavior provided by a lane planner (Redding, see at least paragraph 6).
As per claim 5, the combination of Tiwari, Packer, Google and Redding discloses wherein the planner is configured to plan behaviors of staying in lane, following any forks in a lane graph to match behaviors requested by the lane planner, and changing lane to maximize the match (Redding, see at least paragraph 4 and 6).
As per claim 12, the combination of Tiwari, Packer, Google and Redding discloses wherein the planner is further configured to perform a graph search on the lane graph from a current edge in the lane graph to find a shortest path to the target point (Redding, see at least paragraph 61).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Tiwari, Packer and Google in further view of Kobliarov et al. (Hereinafter Kobliarov) (US 10,671,076).
As per claim 15, the combination of Tiwari, Packer and Google does not explicitly disclose wherein the route planner is further configured to produce a planned path comprising waypoints to be used as targets for a lane planner.
Kobliarov teaches wherein the route planner is further configured to produce a planned path comprising waypoints to be used as targets for a lane planner (See at least column 6 line 57 – 65).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include wherein the route planner is further configured to produce a planned path comprising waypoints to be used as targets for a lane planner as taught by Kobliarov in the system of the combination of Tiwari, Packer and Google, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Tiwari, Packer and Google in further view of Chen (CN 106525075A).
As per claim 18, the combination of Tiwari, Packer and Google does not explicitly disclose wherein the processing system is distributed at least in part in the cloud.
Chen teaches wherein the processing system is distributed at least in part in the cloud (See at least page 2 last paragraph; Here and Gaode plans to adopt a crowdsourcing method to assist in updating the high precision map, so as to reduce the consumption of the computing resources and reduce the cost).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of the combination of Tiwari, Packer and Google, which controls a vehicle using a behavior planner operating on sensor data, to include a cloud network as taught by Chen in order to reduce processing resources and cost (Chen, page 2, last paragraph).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IG T AN whose telephone number is (571)270-5110. The examiner can normally be reached M - F: 10:00AM- 4:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IG T AN/Primary Examiner, Art Unit 3662