Prosecution Insights
Last updated: April 19, 2026
Application No. 18/971,526

METHOD AND APPARATUS FOR AUTONOMOUS CONTROL OF VEHICLE

Office Action: Non-Final (Round 1), rejections under §102, §103, and §112
Filed: Dec 06, 2024
Examiner: GREENE, DANIEL LAWSON
Art Unit: 3665
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Kia Corporation

Outlook: Favorable
Grant Probability: 76% (93% with examiner interview)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 11m

Examiner Intelligence

Career Allow Rate: 76% (653 granted / 859 resolved), +24.0% vs Tech Center average
Interview Lift: +17.1% higher allowance rate in resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 26 applications currently pending
Career History: 885 total applications across all art units

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Based on career data from 859 resolved cases; Tech Center averages are estimates.

Office Action

DETAILED ACTION

This is the First Office Action on the Merits and is directed to claims 1-20 as originally presented and filed on 12/06/2024.

Notice of Pre-AIA or AIA Status

Priority is claimed as set forth below; accordingly, the earliest effective filing date is August 30, 2024 (20240830). The present application, effectively filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). This application claims priority to Korean Patent Application No. 10-2024-0070226, filed on May 29, 2024, and Korean Patent Application No. 10-2024-0117388, filed on August 30, 2024.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 3 and 8 recite the limitation "the front center point of the vehicle". There is insufficient antecedent basis for this limitation in the claims. Those claims not cited are rejected for depending from a rejected base claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 14, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20240427325 A1 to ELLIOTT; David et al. (hereinafter ELLIOTT).
Regarding claim 1, ELLIOTT teaches, in for example the figures reproduced immediately below:

[ELLIOTT figures reproduced in the original action as media_image1.png through media_image8.png (grayscale); only the image file references survive extraction]

and the associated descriptive text, an autonomous control method for controlling operation of a vehicle (as shown in the figures above, a Person of Ordinary Skill In The Art (POSITA) would recognize a vehicle 600 being autonomously controlled, as explained in for example only para: “[0106] FIG. 6A is an illustration of an example autonomous vehicle 600, in accordance with some embodiments of the present disclosure. The autonomous vehicle 600 (alternatively referred to herein as the “vehicle 600”) may include, without limitation, a passenger vehicle, such as a car, a truck, a bus, a first responder vehicle, a shuttle, an electric or motorized bicycle, a motorcycle, a fire truck, a police vehicle, an ambulance, a boat, a construction vehicle, an underwater craft, a robotic vehicle, a drone, an airplane, a vehicle coupled to a trailer (e.g., a semi-tractor-trailer truck used for hauling cargo), and/or another type of vehicle (e.g., that is unmanned and/or that accommodates one or more passengers). Autonomous vehicles are generally described in terms of automation levels, defined by the National Highway Traffic Safety Administration (NHTSA), a division of the US Department of Transportation, and the Society of Automotive Engineers (SAE) “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (Standard No. J3016-201806, published on Jun. 15, 2018, Standard No. J3016-201609, published on Sep. 30, 2016, and previous and future versions of this standard). The vehicle 600 may be capable of functionality in accordance with one or more of Level 3-Level 5 of the autonomous driving levels. The vehicle 600 may be capable of functionality in accordance with one or more of Level 1-Level 5 of the autonomous driving levels. For example, the vehicle 600 may be capable of driver assistance (Level 1), partial automation (Level 2), conditional automation (Level 3), high automation (Level 4), and/or full automation (Level 5), depending on the embodiment. The term “autonomous,” as used herein, may include any and/or all types of autonomy for the vehicle 600 or other machine, such as being fully autonomous, being highly autonomous, being conditionally autonomous, being partially autonomous, providing assistive autonomy, being semi-autonomous, being primarily autonomous, or other designation.”), the autonomous control method comprising: acquiring, by at least one sensor of the vehicle, driving information including one or more of the following (given the Broadest Reasonable Interpretation (BRI) of the limitation “driving information”, and with the understanding that disclosure of “only one” of the listed items will anticipate the claim language, see for example “at least one sensor” in for example Figures 6A-C above, as explained in for example only para: “[0111] The controller(s) 636 may provide the signals for controlling one or more components and/or systems of the vehicle 600 in response to sensor data received from one or more sensors (e.g., sensor inputs).
The sensor data may be received from, for example and without limitation, global navigation satellite systems (“GNSS”) sensor(s) 658 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 660, ultrasonic sensor(s) 662, LIDAR sensor(s) 664, inertial measurement unit (IMU) sensor(s) 666 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 696, stereo camera(s) 668, wide-view camera(s) 670 (e.g., fisheye cameras), infrared camera(s) 672, surround camera(s) 674 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 698, speed sensor(s) 644 (e.g., for measuring the speed of the vehicle 600), vibration sensor(s) 642, steering sensor(s) 640, brake sensor(s) (e.g., as part of the brake sensor system 646), one or more occupant monitoring system (OMS) sensor(s) 601 (e.g., one or more interior cameras), and/or other sensor types.”), a memory path (given the BRI, connotes the “path or route” disclosed in for example only para: “[0108] A steering system 654, which may include a steering wheel, may be used to steer the vehicle 600 (e.g., along a desired path or route) when the propulsion system 650 is operating (e.g., when the vehicle is in motion). The steering system 654 may receive signals from a steering actuator 656. The steering wheel may be optional for full automation (Level 5) functionality.”), a user’s driving method (given the BRI, connotes the use of autonomous driving capabilities of the vehicle over manual or manual assisted driving (level 5 autonomy)), a reference point (given the BRI, connotes the current location of the vehicle while en route to a destination), a forward target point (given the BRI, connotes the “waypoints” in para: “[0042] For navigation tasks, in one or more embodiments, the mission controller may compute an optimal route for an autonomous mobile task agent to navigate from its current location to a destination location. The route may be computed using a virtual facility map that describes the physical infrastructure and layout of the facility. Based on the route, the mission controller may generate a behavior model (e.g., a behavior tree) that defines a navigation task sequence that describes the en route tasks the autonomous mobile task agent will need to perform to navigate to the destination location. Depending on the selected route, the autonomous mobile task agent may arrive at waypoints along the way that require the autonomous mobile task agent to take one or more en route actions in order to continue on the route. For example, the facility map may show that a computed route for an autonomous mobile task agent includes, for example, one or more automated doors, turnstiles, lifts (e.g., elevators), moving transport platforms, and/or other synchronization points that the autonomous mobile task agent may need to engage with to complete the task of navigating to its destination.
The mission controller may augment the base behavior model obtained from the mission template with behavior models populated with custom parameters relevant to the navigation task sequence corresponding with the en route actions of the selected route.”), a next target point (given the BRI connotes the “waypoints” in para [0042] above), an end point (given the BRI connotes the “destination” disclosed in para: “[0161] The processor(s) 610 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce the final image for the player window. The video image compositor may perform lens distortion correction on wide-view camera(s) 670, surround camera(s) 674, and/or on in-cabin monitoring camera sensors. In-cabin monitoring camera sensor is preferably monitored by a neural network running on another instance of the Advanced SoC, configured to identify in cabin events and respond accordingly. An in-cabin system may perform lip reading to activate cellular service and place a phone call, dictate emails, change the vehicle's destination, activate or change the vehicle's infotainment system and settings, or provide voice-activated web surfing. Certain functions are available to the driver only when the vehicle is operating in an autonomous mode, and are disabled otherwise.”), and an obstacle list (see “providing a list of objects and their distances for a 360-degree field of view. “ in para: “[0184] In some examples, the LIDAR sensor(s) 664 may be capable of providing a list of objects and their distances for a 360-degree field of view. Commercially available LIDAR sensor(s) 664 may have an advertised range of approximately 600 m, with an accuracy of 2 cm-3 cm, and with support for a 600 Mbps Ethernet connection, for example. In some examples, one or more non-protruding LIDAR sensors 664 may be used. In such examples, the LIDAR sensor(s) 664 may be implemented as a small device that may be embedded into the front, rear, sides, and/or corners of the vehicle 600. The LIDAR sensor(s) 664, in such examples, may provide up to a 120-degree horizontal and 35-degree vertical field-of-view, with a 200 m range even for low-reflectivity objects. Front-mounted LIDAR sensor(s) 664 may be configured for a horizontal field of view between 45 degrees and 135 degrees.”), from the vehicle or by detecting, by the at least one sensor, surroundings of the vehicle (see para: “[0118] Cameras with a field of view that include portions of the environment in front of the vehicle 600 (e.g., front-facing cameras) may be used for surround view, to help identify forward facing paths and obstacles, as well aid in, with the help of one or more controllers 636 and/or control SoCs, providing information critical to generating an occupancy grid and/or determining the preferred vehicle paths. Front-facing cameras may be used to perform many of the same ADAS functions as LIDAR, including emergency braking, pedestrian detection, and collision avoidance. 
Front-facing cameras may also be used for ADAS functions and systems including Lane Departure Warnings (“LDW”), Autonomous Cruise Control (“ACC”), and/or other functions such as traffic sign recognition.”); configuring, by a controller, a control method for the vehicle based on a behavior tree by using the driving information (given the BRI of the claim limitation behavior tree see the “Behavior Trees (BTs)” disclosed in for example only the figures above, especially Fig. 5 and paras: “[0026] Behavior Trees (BTs) represent a behavior-model based technology that has reached prominence in recent years for programming the behavior of autonomous agents, such as a robot, so that they may switch between a finite set of tasks so that contingency actions may be executed based on factors that occur during real-time task execution. For example, a behavior tree may comprise nodes classified as root nodes, control flow nodes, and task nodes. These nodes provide logic to guide switching between sequences of behaviors and/or sub-behaviors to handle task failures and/or unexpected challenges more gracefully (e.g., compared to switching between FSM states). However, under presently available technologies, BTs are agent centric. That is, a robot programmed to perform a mission under presently available BT technologies is programmed within the context of an individual actor performing a series of tasks. When completion of a complex task involves multiple autonomous mobile systems, and/or interactions with other task agents, current BT technologies are challenged because they do not holistically address actions of, and/or interactions between, a population of task agents involved in completing the mission. [0096] The method 500, at block B504, includes correlating the baseline task sequence with a plurality of pre-defined modular behavior models from a task library to generate a local task sequence comprising a plurality of local tasks performed by a plurality of task agents. The plurality of task agents may be determined based at least on the mission template. The baseline task sequence may include pre-defined modular behavior models corresponding to actions performed by task agents including mobile autonomous machine agents, specialized task agents, and/or non-operative task agents. A behavior model may comprise a behavior tree that includes logic to control switching between sequences of behaviors. Example task agents include, but are not limited to, an autonomous robot, an autonomous mobile machine, an ego vehicle, an ego machine, an automated door, an elevator, a transport platform, a facility management system, a mechanical tool, an electrical tool, a sensor device, a container, or a composition of matter. In some embodiments, baseline task sequence may include behavior models that represent actions taken by task agents such as people and/or animals as part of completing a mission.”); configuring, by the controller, a driving method for the vehicle based on the driving information and the control method (given the BRI appears to connote Fig. 5 step B508 as explained in for example para: “[0099] The method 500, at block B508, includes assembling a behavior model logic framework defining the plurality of local tasks performed by the plurality of task agents based on the plurality of custom behavior models. 
The mission behavior model logic framework includes the customized behavior models that may be distributed to task agents for execution, and/or implemented via proxy representations of behavior models by the mission controller itself, to complete performance of the primary task(s) represented by the mission request. In some embodiments, the mission controller 120 may generate a mission behavior model logic framework for a mission instance that accounts for the actions of the plurality of task agents that play a role in completing the overall mission. The mission controller 120 breaks down a mission into a series of behavior-based sequences that are defined by the mission behavior model logic framework for tasks performed by each of the task agents. A mission behavior model logic framework may include synchronization points to coordinate the behaviors of individual task agents.”); and controlling, by the controller, the vehicle by applying the driving method (given the BRI, appears to connote Fig. 5 step B510 as explained in for example para: “[0100] The method 500, at block B510, includes operating one or more mobile autonomous machine agents of the plurality of task agents by distributing one or more segments of the behavior model logic framework. In some embodiments, the mission controller 120 may push, or otherwise provide, the mission behavior model logic framework, or one or more segments thereof, to a mission dispatch function, such as mission dispatch function 160. As discussed herein, the mission dispatch function 160 may manage communications between the mission controller 120 and the autonomous mobile task agent(s) 170, and may distribute the segments of the mission behavior model logic framework to the one or more autonomous mobile task agents 170 identified in the mission behavior model logic framework. The individual autonomous mobile task agents 170 may then proceed to execute their assigned portions of local task sequences in accordance with the customized behavior models distributed to them by the mission dispatch function 160.”).

With regard to the Examiner’s statements of what a Person of Ordinary Skill In The Art (POSITA) is relied upon to understand, resort may be had to, for example, MPEP 2141.03 Level of Ordinary Skill in the Art [R-01.2024], which states:

I. FACTORS TO CONSIDER IN DETERMINING LEVEL OF ORDINARY SKILL The person of ordinary skill in the art is a hypothetical person who is presumed to have known the relevant art at the relevant time. Factors that may be considered in determining the level of ordinary skill in the art may include: (A) "type of problems encountered in the art;" (B) "prior art solutions to those problems;" (C) "rapidity with which innovations are made;" (D) "sophistication of the technology; and" (E) "educational level of active workers in the field. In a given case, every factor may not be present, and one or more factors may predominate." In re GPAC, 57 F.3d 1573, 1579, 35 USPQ2d 1116, 1121 (Fed. Cir. 1995); Custom Accessories, Inc. v. Jeffrey-Allan Indus., Inc., 807 F.2d 955, 962, 1 USPQ2d 1196, 1201 (Fed. Cir. 1986); Environmental Designs, Ltd. v. Union Oil Co., 713 F.2d 693, 696, 218 USPQ 865, 868 (Fed. Cir. 1983). "A person of ordinary skill in the art is also a person of ordinary creativity, not an automaton." KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 421, 82 USPQ2d 1385, 1397 (2007). "[I]n many cases a person of ordinary skill will be able to fit the teachings of multiple patents together like pieces of a puzzle." Id. at 420, 82 USPQ2d 1397. Office personnel may also take into account "the inferences and creative steps that a person of ordinary skill in the art would employ." Id. at 418, 82 USPQ2d at 1396.

Although the claims are interpreted in light of the specification, limitations from the specification are NOT imported into the claims. The Examiner must give the claim language the Broadest Reasonable Interpretation (BRI) the claims allow. See MPEP 2111.01 Plain Meaning [R-10.2024], which states:

II. IT IS IMPROPER TO IMPORT CLAIM LIMITATIONS FROM THE SPECIFICATION "Though understanding the claim language may be aided by explanations contained in the written description, it is important not to import into a claim limitations that are not part of the claim. For example, a particular embodiment appearing in the written description may not be read into a claim when the claim language is broader than the embodiment." Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875, 69 USPQ2d 1865, 1868 (Fed. Cir. 2004). See also Liebel-Flarsheim Co. v. Medrad Inc., 358 F.3d 898, 906, 69 USPQ2d 1801, 1807 (Fed. Cir. 2004) (discussing recent cases wherein the court expressly rejected the contention that if a patent describes only a single embodiment, the claims of the patent must be construed as being limited to that embodiment); E-Pass Techs., Inc. v. 3Com Corp., 343 F.3d 1364, 1369, 67 USPQ2d 1947, 1950 (Fed. Cir. 2003) ("Interpretation of descriptive statements in a patent’s written description is a difficult task, as an inherent tension exists as to whether a statement is a clear lexicographic definition or a description of a preferred embodiment. The problem is to interpret claims ‘in view of the specification’ without unnecessarily importing limitations from the specification into the claims."); Altiris Inc. v. Symantec Corp., 318 F.3d 1363, 1371, 65 USPQ2d 1865, 1869-70 (Fed. Cir. 2003) (Although the specification discussed only a single embodiment, the court held that it was improper to read a specific order of steps into method claims where, as a matter of logic or grammar, the language of the method claims did not impose a specific order on the performance of the method steps, and the specification did not directly or implicitly require a particular order). See also subsection IV., below. When an element is claimed using language falling under the scope of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, 6th paragraph (often broadly referred to as means- (or step-) plus-function language), the specification must be consulted to determine the structure, material, or acts corresponding to the function recited in the claim, and the claimed element is construed as limited to the corresponding structure, material, or acts described in the specification and equivalents thereof. In re Donaldson, 16 F.3d 1189, 29 USPQ2d 1845 (Fed. Cir. 1994) (see MPEP § 2181- MPEP § 2186). In Zletz, supra, the examiner and the Board had interpreted claims reading "normally solid polypropylene" and "normally solid polypropylene having a crystalline polypropylene content" as being limited to "normally solid linear high homopolymers of propylene which have a crystalline polypropylene content." The court ruled that limitations, not present in the claims, were improperly imported from the specification. See also In re Marosi, 710 F.2d 799, 802, 218 USPQ 289, 292 (Fed. Cir. 1983) ("'[C]laims are not to be read in a vacuum, and limitations therein are to be interpreted in light of the specification in giving them their ‘broadest reasonable interpretation.'" (quoting In re Okuzawa, 537 F.2d 545, 548, 190 USPQ 464, 466 (CCPA 1976)). The court looked to the specification to construe "essentially free of alkali metal" as including unavoidable levels of impurities but no more.).

Regarding claim 2 and the limitation the autonomous control method of claim 1, wherein configuring the control method includes: checking the obstacle list to determine whether a potential obstacle is contained in the obstacle list (given the BRI, “indicating when the use of a potential route may be blocked or slowed because of an obstacle (e.g., objects and/or spills)” connotes a potential obstacle, as taught in for example only para: “[0052] In one or more embodiments, the mission controller may incorporate route traffic data when determining which route is the optimal route for an autonomous mobile task agent. For example, the mission controller may receive traffic data captured from sensors along a route, and/or reported by other task agents, indicating when the use of a potential route may be blocked or slowed because of an obstacle (e.g., objects and/or spills). Traffic data may indicate when a route is congested due to usage by people or other task agents. Based on the traffic data, the mission controller may assess a weighted penalty time to those routes with detected obstacles and/or congestion (e.g., adding an expected delay time to the time nominally computed for the route) and the optimal route for the autonomous mobile task agent computed taking into account the penalty time. The traffic data may include historical traffic pattern data. For example, the mission controller may determine from historical traffic pattern data that corridors near a facility café or cafeteria experience heavier traffic during a lunch hour time window (e.g., between 11:30 am and 1:00 pm) and factor an additional penalty time delay into route planning and optimization for those corridors. In some embodiments, the mission controller may bias optimization to favor routes that avoid human traffic.”); and updating the obstacle list based on the driving information (it is considered that a POSITA understands that “For example, the deep-learning infrastructure may receive periodic updates from the vehicle 600, such as a sequence of images and/or objects that the vehicle 600 has located in that sequence of images” and that these “periodic updates” would include updates to all lists, including “obstacles”, as taught in for example para: “[0212] The deep-learning infrastructure of the server(s) 678 may be capable of fast, real-time inferencing, and may use that capability to evaluate and verify the health of the processors, software, and/or associated hardware in the vehicle 600. For example, the deep-learning infrastructure may receive periodic updates from the vehicle 600, such as a sequence of images and/or objects that the vehicle 600 has located in that sequence of images (e.g., via computer vision and/or other machine learning object classification techniques). The deep-learning infrastructure may run its own neural network to identify the objects and compare them with the objects identified by the vehicle 600 and, if the results do not match and the infrastructure concludes that the AI in the vehicle 600 is malfunctioning, the server(s) 678 may transmit a signal to the vehicle 600 instructing a fail-safe computer of the vehicle 600 to assume control, notify the passengers, and complete a safe parking maneuver.”), and determining whether a dangerous obstacle is contained in the updated obstacle list (given the BRI, a POSITA would understand that the limitation “dangerous” connotes, inter alia, “pedestrians”, crossing traffic, etc., as taught in for example only para: “[0119] A variety of cameras may be used in a front-facing configuration, including, for example, a monocular camera platform that includes a complementary metal oxide semiconductor (“CMOS”) color imager. Another example may be a wide-view camera(s) 670 that may be used to perceive objects coming into view from the periphery (e.g., pedestrians, crossing traffic or bicycles). Although only one wide-view camera is illustrated in FIG. 6B, there may be any number (including zero) of wide-view cameras 670 on the vehicle 600. In addition, any number of long-range camera(s) 698 (e.g., a long-view stereo camera pair) may be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. The long-range camera(s) 698 may also be used for object detection and classification, as well as basic object tracking.”).
Regarding claim 14 and the limitation the autonomous control method of claim 1, wherein the behavior tree is a structure that allows for re-use of results of the same task at other nodes (given the BRI of the claim limitation behavior tree, see the “Behavior Trees (BTs)” disclosed in for example only the figures above, especially Fig. 5, and paras: “[0026] Behavior Trees (BTs) represent a behavior-model based technology that has reached prominence in recent years for programming the behavior of autonomous agents, such as a robot, so that they may switch between a finite set of tasks so that contingency actions may be executed based on factors that occur during real-time task execution. For example, a behavior tree may comprise nodes classified as root nodes, control flow nodes, and task nodes. These nodes provide logic to guide switching between sequences of behaviors and/or sub-behaviors to handle task failures and/or unexpected challenges more gracefully (e.g., compared to switching between FSM states). However, under presently available technologies, BTs are agent centric. That is, a robot programmed to perform a mission under presently available BT technologies is programmed within the context of an individual actor performing a series of tasks. When completion of a complex task involves multiple autonomous mobile systems, and/or interactions with other task agents, current BT technologies are challenged because they do not holistically address actions of, and/or interactions between, a population of task agents involved in completing the mission. [0149] In some examples, the SoC(s) 604 may include a real-time ray-tracing hardware accelerator, such as described in U.S. patent application Ser. No. 16/101,232, filed on Aug. 10, 2018. The real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine the positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses. In some embodiments, one or more tree traversal units (TTUs) may be used for executing one or more ray-tracing related operations. [0223] Examples of the logic unit(s) 720 include one or more processing cores and/or components thereof, such as Data Processing Units (DPUs), Tensor Cores (TCs), Tensor Processing Units (TPUs), Pixel Visual Cores (PVCs), Vision Processing Units (VPUs), Graphics Processing Clusters (GPCs), Texture Processing Clusters (TPCs), Streaming Multiprocessors (SMs), Tree Traversal Units (TTUs), Artificial Intelligence Accelerators (AIAs), Deep Learning Accelerators (DLAs), Arithmetic-Logic Units (ALUs), Application-Specific Integrated Circuits (ASICs), Floating Point Units (FPUs), input/output (I/O) elements, peripheral component interconnect (PCI) or peripheral component interconnect express (PCIe) elements, and/or the like.”).
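[Editorial sketch] For orientation, a minimal behavior-tree sketch follows, limited to the generic node taxonomy quoted from ELLIOTT para [0026] (root, control-flow, and task nodes) plus a shared result cache illustrating the claim 14 notion of re-using a task's result at other nodes. The node classes and the cache are illustrative assumptions, not an implementation from either the application or ELLIOTT.

from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Task:
    # Task (leaf) node; outcomes are memoized in a shared cache so the
    # same task's result can be re-used at other nodes (the claim 14 idea).
    def __init__(self, name, fn, cache):
        self.name, self.fn, self.cache = name, fn, cache

    def tick(self):
        if self.name not in self.cache:
            self.cache[self.name] = Status.SUCCESS if self.fn() else Status.FAILURE
        return self.cache[self.name]

class Sequence:
    # Control-flow node: succeeds only if every child succeeds.
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() is Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Selector:
    # Control-flow node: succeeds on the first child that succeeds.
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() is Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

cache = {}
path_clear = Task("path_clear", lambda: True, cache)
root = Selector(
    Sequence(path_clear, Task("follow_path", lambda: True, cache)),
    Sequence(path_clear, Task("emergency_brake", lambda: True, cache)),
)
print(root.tick())  # Status.SUCCESS; "path_clear" runs once, its result is shared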
Regarding claim 16 and the limitation An autonomous control apparatus for controlling operation of a vehicle, the autonomous control apparatus comprising: at least one memory that stores instructions; and at least one processor, wherein, by executing the instructions, the at least one processor acquires driving information including one or more of a memory path, a user’s driving method, a reference point, a forward target point, a next target point, an end point, and an obstacle list, from a vehicle or by detecting surroundings of the vehicle, configures a control method for the vehicle based on a behavior tree by using the driving information, configures a driving method for the vehicle based on the driving information and the control method, and controls the vehicle by applying the driving method (see the rejection of corresponding parts of claim 1 above, incorporated herein by reference). Regarding claim 17 and the limitation the autonomous control apparatus of claim 16, further comprising at least one sensor for acquiring the driving information or detecting the surroundings of the vehicle (see the rejection of corresponding parts of claim 1 above, incorporated herein by reference, especially all of the sensors in Fig. 6A and the field of view of some of those sensors in Fig. 6B above). Regarding claim 18 and the limitation the autonomous control apparatus of claim 16, further comprising a controller for configuring the control method of the vehicle, configuring the driving method of the vehicle (see “controllers 636” and “system(s) on a chip (SoC) 604” in for example Figs. 6A-C, and paras: “[0127] The vehicle 600 may include one or more controller(s) 636, such as those described herein with respect to FIG. 6A. The controller(s) 636 may be used for a variety of functions. The controller(s) 636 may be coupled to any of the various other components and systems of the vehicle 600, and may be used for control of the vehicle 600, artificial intelligence of the vehicle 600, infotainment for the vehicle 600, and/or the like. [0128] The vehicle 600 may include a system(s) on a chip (SoC) 604.
The SoC 604 may include CPU(s) 606, GPU(s) 608, processor(s) 610, cache(s) 612, accelerator(s) 614, data store(s) 616, and/or other components and features not illustrated. The SoC(s) 604 may be used to control the vehicle 600 in a variety of platforms and systems. For example, the SoC(s) 604 may be combined in a system (e.g., the system of the vehicle 600) with an HD map 622 which may obtain map refreshes and/or updates via a network interface 624 from one or more servers (e.g., server(s) 678 of FIG. 6D). In some embodiments, SoC 604 may execute algorithms for operating components of the vehicle 600 based on behavior models of the mission behavior model logic framework distributed to the vehicle 600.”), and controlling the vehicle (see the rejection of corresponding parts of claims 16 and 1 above, incorporated herein by reference, and especially Figures 6A-C above). Regarding claim 19 and the limitation A vehicle comprising the autonomous control apparatus of claim 16 (see the rejection of corresponding parts of claims 16 and 1 above, incorporated herein by reference, and especially Figures 6A-C above). Regarding claim 20 and the limitation A non-transitory computer-readable recording medium containing program instructions executed by a processor, the computer readable medium comprising: program instructions that acquire driving information including one or more of a memory path, a user’s driving method, a reference point, a forward target point, a next target point, an end point, and an obstacle list, from the vehicle or by detecting, by the at least one sensor, surroundings of a vehicle; program instructions that configure a control method for the vehicle based on a behavior tree by using the driving information; program instructions that configure a driving method for the vehicle based on the driving information and the control method; and program instructions that control the vehicle by applying the driving method (see the rejection of corresponding parts of claim 1 above, incorporated herein by reference).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3-13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over US 20240427325 A1 to ELLIOTT; David et al. (hereinafter ELLIOTT), as applied to the claims above, in view of US 20090192710 A1 to Eidehall; Andreas et al. (Eidehall).
Regarding claim 3 and the limitation the autonomous control method of claim 2, wherein configuring the control method includes: if it is determined that the potential obstacle is not contained in the obstacle list, or if it is determined that the dangerous obstacle is not contained in the updated obstacle list (see the rejection of corresponding parts of claims 2 and 1 above, incorporated herein by reference). ELLIOTT does not appear to expressly disclose that it is determined whether the vehicle has reached a target area based on the distance between the front center point of the vehicle and the forward target point. In analogous art, Eidehall teaches, in for example the figures below:

[Eidehall figures reproduced in the original action as media_image9.png through media_image12.png (grayscale)]

and associated descriptive text, that it is determined whether the vehicle has reached a target area based on the distance between the front center point of the vehicle and the forward target point (in Figures 6, 7 and 13 above with regard to, inter alia, sensor 130 and one or more objects 230, as explained in for example paras: “[0087] Referring to FIG. 6, there is shown a schematic diagram of automobile indicated generally by 120. At a front region of the automobile 120, there is included a sensor arrangement 130 comprising one or more sensors for sensing in a sensing region indicated generally by 140 in front of the automobile 120. Additional parts included in the automobile 120 comprise a data processing unit 160, a steering arrangement 170 and one or more brake units 180 associated with wheels of the automobile 120, the one or more brake units 180 being operable to reduce velocity of the automobile 120 when applied; the data processing unit 160 is also conveniently referred to as being a "data processing arrangement". Processor 160 is operable to receive the sensor signals and to compute positions and relative velocities of one or more potentially hazardous objects 230 in the sensing region 140. Processor 160 is further operative to compute a value representative of a steering torque required to be applied to steer automobile 120 so as to avoid the hazardous objects 230, to retrieve a value representative of maximum torque available to steer the automobile 120, and to compare the value representative of the maximum torque available with the value representative of the steering torque required to avoid the hazardous object, and to decide how to intervene in response to the comparison. [0094] The data processing unit 160 is beneficially implemented in application-specific digital hardware and/or using computing hardware operable to execute software. Moreover, the data processing unit 160 is coupled to the sensor arrangement 130 for receiving the first and second sets of sensor signals there from. Furthermore, the data processing unit 160 is coupled to one or more actuators included in the steering arrangement 170. Actuators may, for example, be one or more electric motors operable to generate a steering torque to orientate front wheels 190 of the automobile 120. The steering torque supplied by the actuators is also felt in operation by a driver at a steering wheel 115 of the automobile 120 and beneficially provides a tactile indication of a potential risk represented by the one or more objects 230. In a potential crash scenario, the steering torque applied to the front wheels 190 is potentially capable of providing autonomous crash avoidance as will be elucidated in more detail later. Additionally, the data processing unit 160 is coupled to the one or more brake units 180 for autonomously applying braking forces to the automobile 120 in a potential crash scenario as will also be further elucidated later.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor location and anti-collision methodology disclosed in Eidehall with the anti-collision methodology taught in ELLIOTT, with a reasonable expectation of success, because it would have “provided more reliable steering intervention in potential collision situations” as taught by Eidehall para [0009]: “Even though systems for collision course prediction exists in prior art, there still is a need to further improve such systems. Thus, the present invention is concerned with the technical problem of providing advanced steering safety systems which are operable to provide more reliable steering intervention in potential collision situations.”
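[Editorial sketch] The disputed claim 3 limitation is a simple geometric test. A minimal sketch follows, assuming a rear-axle pose, a fixed wheelbase, and an arrival radius; none of these values or names come from the claim or from either reference.

import math

def front_center_point(rear_axle_xy, heading_rad, wheelbase_m=2.9):
    # Project from a rear-axle reference pose to the vehicle's front
    # center point (the wheelbase value is a placeholder).
    x, y = rear_axle_xy
    return (x + wheelbase_m * math.cos(heading_rad),
            y + wheelbase_m * math.sin(heading_rad))

def reached_target_area(front_xy, forward_target_xy, radius_m=1.5):
    # "Reached a target area" read as: distance between the front
    # center point and the forward target point within a radius.
    dx = forward_target_xy[0] - front_xy[0]
    dy = forward_target_xy[1] - front_xy[1]
    return math.hypot(dx, dy) <= radius_m

front = front_center_point((10.0, 5.0), heading_rad=0.0)
print(reached_target_area(front, (13.0, 5.0)))  # True: 0.1 m <= 1.5 m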
Regarding claim 4 and the limitation the autonomous control method of claim 3, wherein configuring the control method includes: if it is determined that the vehicle has not reached the target area, the control method for the vehicle is configured so as to control the speed and steering of the vehicle by applying a driving method applied when the user drives along the same path as the memory path toward the forward target point (a POSITA would understand the teachings of ELLIOTT in the rejection of corresponding parts of claims 3, 2, and 1 above, incorporated herein by reference, with regard to the vehicle “autonomously” travelling through “waypoints” to reach an ultimate “destination”). Regarding claim 5 and the limitation the autonomous control method of claim 3, wherein configuring the control method includes: if it is determined that the vehicle has reached the target area, it is determined whether the vehicle has reached a stop area based on the distance between the front center point of the vehicle and the end point (see the teachings of Eidehall Fig. 5 above, block 60, as explained in for example paras: “[0082] In FIG. 5 a block diagram of a collision avoidance and mitigation system 100 is shown. System 100 uses active steering and/or braking interventions to prevent or reduce the severity of a collision between a host vehicle and a target vehicle or other target object. The collision avoidance and mitigation system 100 includes a steering torque condition function block 50 for assessing whether a steering torque required to avoid a potential collision in the neighboring lane is within a range. The steering torque condition function block determines whether the required torque T.sub.req to avoid a collision with the target vehicle is above a minimum level T.sub.min and whether the required torque T.sub.req is below a maximum level T.sub.max. The minimum level may be selected to ensure that an intervention is made as late as possible, while still maintaining a safety margin. The maximum level may be selected to assure that the steering wheel torque superposed by an intervention controller will not exceed limitations with respect to legal requirements and/or driver override functionality.
Driver override functionality provides the ability for a driver to manually override the superposed torque applied by the intervention controller. The driver must thus be able, using the steering wheel, to apply a torque larger than the maximum torque available from the intervention controller. [0083] The collision avoidance and mitigation system 100 further includes a host lane threat condition functional block 60 which is arranged at assess whether a required braking force B.sub.req to avoid an obstacle in the current lane is below a threshold braking force value. The threshold braking force value is selected such that sufficient braking force to stop the vehicle is available. A suitable braking force threshold value may be set between 0.2 and 0.7 g, and more precisely between 0.2 and 0.5 g.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor location and anti-collision methodology disclosed in Eidehall with the anti-collision methodology taught in ELLIOTT, with a reasonable expectation of success, because it would have “provided more reliable steering intervention in potential collision situations” as taught by Eidehall para [0009], quoted in the rejection of claim 3 above.
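[Editorial sketch] The intervention conditions quoted from Eidehall paras [0082]-[0083] reduce to two threshold tests: steer only when the required torque T_req lies between T_min and T_max, and treat the host lane as threatened only when the required braking force B_req stays below a threshold. The sketch below restates this quoted logic; the torque values are placeholders, since Eidehall gives numeric guidance only for the braking threshold (0.2-0.7 g), and this is not Eidehall's code.

G = 9.81  # m/s^2

def steering_intervention_allowed(t_req_nm, t_min_nm=2.0, t_max_nm=6.0):
    # T_min delays intervention as long as possible while keeping a
    # safety margin; T_max respects legal limits and driver override.
    return t_min_nm < t_req_nm < t_max_nm

def host_lane_threat(b_req_mps2, threshold_g=0.5):
    # Threshold chosen "between 0.2 and 0.7 g" per Eidehall [0083];
    # 0.5 g is an assumed value within that range.
    return b_req_mps2 < threshold_g * G

print(steering_intervention_allowed(4.5))      # True: 2.0 < 4.5 < 6.0 Nm
print(host_lane_threat(3.0, threshold_g=0.5))  # True: 3.0 < 4.905 m/s^2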
Regarding claim 6 and the limitation the autonomous control method of claim 5, wherein configuring the control method includes: if it is determined that the vehicle has not reached the stop area, the next target point is set as the forward target point, and the end point is set as the forward target point if the end point is present within a preset distance from the reference point (see the teachings of Eidehall Fig. 13 above, arrow 1500, as explained in for example para: “[0147] Referring next to FIG. 13, there is shown the automobile 120 traveling in a direction denoted by arrow 1510 defining a safest direction (F.sub.lane) for the automobile 20. Directly ahead of the automobile 120 is a stationary object 1540, for example road works or an abandoned vehicle, and traveling in a left-hand lane is a moving vehicle 1520 traveling in a direction denoted by an arrow 1530 substantially parallel to the safest direction (F.sub.lane) of the automobile 20. The data processing unit 60, by processing sensing signals output from the sensor arrangement 30, will identify that the object 1540 has a high closing velocity V.sub.Cl relative to the automobile 120 in comparison to that of the vehicle 1520. The data processing unit 160 will in such a situation apply a strong torque in a left-hand direction as denoted by an arrow 1550 to steer the automobile 120 away from a collision with the stationary object 1540 towards the vehicle 1520 as denoted by an arrow 1500. The arrow 1500 thus defines an updated safest direction (F.sub.lane) for the automobile 120 even despite the safest direction (F.sub.lane) being now directed towards a vehicle.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor location and anti-collision methodology disclosed in Eidehall with the anti-collision methodology taught in ELLIOTT, with a reasonable expectation of success, for the reasons given in the rejection of claim 3 above (Eidehall para [0009]). Regarding claim 7 and the limitation the autonomous control method of claim 6, wherein configuring the control method includes: if it is determined that the vehicle has reached the stop area, the control method for the vehicle is configured so as to stop the vehicle (see the teachings of both ELLIOTT, with regard to emergency braking and collision avoidance in for example paras [0155 and 0183], and Eidehall para [0083] above, with regard to stopping the vehicle).
Regarding claim 8 and the limitation the autonomous control method of claim 2, wherein configuring the control method includes: if it is determined that the dangerous obstacle is contained in the updated obstacle list, it is determined whether the dangerous obstacle is present in an emergency braking area based on the distance between the front center point of the vehicle and the dangerous obstacle (see the teachings of Eidehall Fig. 13 above, obstacle 1540 and arrow 1500, as explained in for example para [0147], quoted in the rejection of claim 6 above). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor location and anti-collision methodology disclosed in Eidehall with the anti-collision methodology taught in ELLIOTT, with a reasonable expectation of success, for the reasons given in the rejection of claim 3 above (Eidehall para [0009]). Regarding claim 9 and the limitation the autonomous control method of claim 8, wherein configuring the control method includes: if it is determined that the dangerous obstacle is present in the emergency braking area, the control method for the vehicle is configured so as to stop the vehicle (see the teachings of Eidehall para [0083]: “The threshold braking force value is selected such that sufficient braking force to stop the vehicle is available.”). It would have been obvious to combine the references for the same reasons. Regarding claim 10 and the limitation the autonomous control method of claim 8, wherein the configuring of a control method includes: if it is determined that the dangerous obstacle is not present in the emergency braking area, one or more avoidance points are generated to determine whether a collision between the vehicle and the dangerous obstacle can be avoided (see the teachings of Eidehall Fig. 13 above, obstacle 1540 and arrow 1500, as explained in for example para [0147], quoted in the rejection of claim 6 above).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor location and anti-collision methodology disclosed in Eidehall with the anti-collision methodology taught in ELLIOTT, with a reasonable expectation of success, for the reasons given in the rejection of claim 3 above (Eidehall para [0009]). Regarding claim 11 and the limitation the autonomous control method of claim 10, wherein configuring the control method includes: if it is determined that a collision between the vehicle and the dangerous obstacle can be avoided at the one or more avoidance points, an avoidance path toward the avoidance point(s) is generated, and a control method for the vehicle is configured so as to allow the vehicle to travel along the avoidance path (see the teachings of Eidehall Fig. 13 above, obstacle 1540 and arrow 1500, as explained in for example para [0147], quoted in the rejection of claim 6 above). It would have been obvious to combine the references for the same reasons.
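[Editorial sketch] Claims 10-12 concern generating avoidance points, testing avoidability, and selecting the point whose expected minimum distance between the front center point and the dangerous obstacle is largest. A minimal sketch of that selection rule follows, assuming a straight-line path and a static obstacle; these simplifications are illustrative and not drawn from either reference.

import math

def min_distance_along_path(start_xy, avoid_xy, obstacle_xy, steps=50):
    # Sample the straight segment start -> avoidance point and take the
    # smallest distance to the obstacle along it.
    return min(
        math.dist(
            (start_xy[0] + (avoid_xy[0] - start_xy[0]) * t / steps,
             start_xy[1] + (avoid_xy[1] - start_xy[1]) * t / steps),
            obstacle_xy)
        for t in range(steps + 1))

def select_avoidance_point(start_xy, candidates, obstacle_xy, clearance_m=1.0):
    scored = [(min_distance_along_path(start_xy, c, obstacle_xy), c)
              for c in candidates]
    avoidable = [(d, c) for d, c in scored if d >= clearance_m]
    if not avoidable:
        return None  # claim 13 branch: no avoidable point, stop the vehicle
    return max(avoidable)[1]  # claim 12: largest expected minimum distance

print(select_avoidance_point((0, 0), [(20, 3), (20, -2)], (10, 0)))  # (20, 3)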
Regarding claim 12 and the limitation "the autonomous control method of claim 11, wherein the avoidance path is a path that is generated by selecting one of the avoidance points where it is deemed that a collision between the vehicle and the dangerous obstacle can be avoided, as a forward target point where the minimum distance between the front center point of the vehicle and the dangerous obstacle is expected to be the largest when the vehicle arrives at the avoidance point," see the teachings of Eidehall Fig. 13 (obstacle 1540 and arrow 1500), as explained in para [0147], quoted above. The same combination rationale applies (Eidehall para [0009], quoted above).
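Claim 12's selection rule is effectively an argmax: among the avoidance points deemed collision-free, pick the one whose path maximizes the minimum expected distance between the front center point and the obstacle. A hedged sketch, with path points standing in for the front center point's trajectory (an assumption, as is every name below):

```python
import math

def min_clearance(path: list, obstacle: tuple) -> float:
    """Smallest predicted distance between the vehicle's front center point
    (approximated here by each path point) and the obstacle."""
    ox, oy = obstacle
    return min(math.hypot(px - ox, py - oy) for (px, py) in path)

def select_forward_target(avoidable_points: list, obstacle: tuple,
                          path_to) -> tuple:
    """Claim 12 sketch: among points where the collision is deemed avoidable,
    select the one whose path maximizes the minimum expected distance to the
    dangerous obstacle; that point becomes the forward target point."""
    return max(avoidable_points,
               key=lambda p: min_clearance(path_to(p), obstacle))

# Example with straight-line paths from the origin (hypothetical inputs):
straight = lambda p: [(p[0] * k / 10, p[1] * k / 10) for k in range(11)]
best = select_forward_target([(20.0, 3.0), (20.0, -3.0)], (15.0, 1.0), straight)
print(best)  # (20.0, -3.0): passing on the right keeps more clearance
```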
Regarding claim 13 and the limitation "the autonomous control method of claim 10, wherein configuring the control method includes: if it is determined that a collision between the vehicle and the dangerous obstacle cannot be avoided at any avoidance point, the control method for the vehicle is configured so as to stop the vehicle," see the rationale for combining Eidehall with ELLIOTT and the rejection of the corresponding parts of claim 7 above, incorporated herein by reference.

Regarding claim 15 and the limitation "the autonomous control method of claim 1, further comprising, given that the reference point is the front center point of the vehicle, updating the reference point to be the front center point of the vehicle if it is determined that the vehicle has reached a target area," see the teachings of Eidehall Fig. 13 (obstacle 1540 and arrow 1500), as explained in para [0147], quoted above. The same combination rationale applies (Eidehall para [0009], quoted above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure as teaching, inter alia, the state of the art of methods and apparatus for autonomous control of vehicles at the time of the invention. For example:

US 20230326335 A1 to Ding; Kai et al. (Ding) teaches, inter alia, WRONG-WAY DRIVING MODELING in, for example, the Abstract: "Aspects of the disclosure provide methods of modeling wrong-way driving of road users. For instance, log data including an observed trajectory of a first road user may be accessed. A first set of candidate lane segments for wrong-way driving may be identified from map information. A second set of candidate lane segments for not wrong-way driving may be identified from the map information. For each candidate lane segment in the first set and in the second set, a distance cost between the candidate lane segment and the observed trajectory may be determined. A candidate lane segment may be selected from at least one of the first set or the second set based on the determined distance costs. The selected candidate lane segment may be used to train a model to provide a likelihood of a second road user being engaged in wrong-way driving in a lane."
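Ding's abstract describes scoring candidate lane segments by a distance cost against an observed trajectory. The sketch below assumes a mean point-to-centerline distance for that cost, since the abstract does not fix the formula; all names and inputs are hypothetical:

```python
import math

def distance_cost(trajectory: list, centerline: list) -> float:
    """Mean distance from each observed trajectory point to its nearest
    point on a candidate lane segment's centerline (an assumed cost)."""
    def nearest(p):
        return min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in centerline)
    return sum(nearest(p) for p in trajectory) / len(trajectory)

def select_candidate(trajectory: list, wrong_way: list, not_wrong_way: list):
    """Pool the two candidate sets recited in the claim and return the
    segment with the lowest distance cost together with its label."""
    labeled = ([(seg, "wrong_way") for seg in wrong_way] +
               [(seg, "not_wrong_way") for seg in not_wrong_way])
    return min(labeled, key=lambda item: distance_cost(trajectory, item[0]))

# Example with hypothetical centerlines: the trajectory hugs seg_a.
traj = [(0, 0.2), (5, 0.1), (10, 0.0)]
seg_a = [(0, 0.0), (5, 0.0), (10, 0.0)]   # not-wrong-way candidate
seg_b = [(0, 3.5), (5, 3.5), (10, 3.5)]   # wrong-way candidate
print(select_candidate(traj, [seg_b], [seg_a])[1])  # "not_wrong_way"
```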
US 20050033516 A1 to Kawasaki, Tomoya (Kawasaki) teaches, inter alia, a collision prediction apparatus in, for example, the Abstract and para [0081]: "A collision prediction ECU of a collision prediction apparatus estimates a state of presence of a detected front obstacle. At this time, the collision prediction ECU estimates the state of presence on the basis of road shape data supplied from a navigation ECU of a navigation apparatus. Further, the collision prediction ECU checks and corrects the calculated road gradient value. At this time, the collision prediction ECU corrects the gradient value on the basis of road gradient data supplied from the navigation ECU. Further, the collision prediction ECU changes a collision avoidance time on the basis of travel environment data supplied from the navigation ECU. Moreover, the collision prediction ECU obtains an ETC gate pass-through signal from the navigation ECU and determines whether the vehicle is passing through the gate. The collision prediction apparatus performs collision prediction on the basis of the corrected values." "[0081] For calculation of the above-described relative lateral distance Xr, the collision prediction ECU 11 first detects an instantaneous relative lateral distance X between the center axis of the vehicle and the side surface of the front obstacle. Specifically, the collision prediction ECU 11 detects the relative lateral distance X by use of the present relative distance Lnew to the front obstacle and the curve having a radius of curvature R2 which is estimated in step S103 and along which the front obstacle is present, and on the basis of the presence direction (heading direction) of the front obstacle in a state in which the position and heading direction of the vehicle are used as references."
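Kawasaki's para [0081] computes a relative lateral distance X from the relative distance Lnew and the curve radius R2. One plausible reading is simple circular-arc geometry; the exact formula below is an assumption for illustration, not Kawasaki's published equation:

```python
import math

def lateral_offset_on_curve(l_new: float, r2: float) -> float:
    """Assumed geometry for Kawasaki's relative lateral distance X: the
    offset of a point L_new ahead on a circular arc of radius R2, measured
    from the vehicle's straight-ahead center axis. Kawasaki publishes the
    inputs (L_new, R2, heading direction) but not this exact formula."""
    if r2 <= 0.0 or l_new < 0.0 or l_new > r2:
        raise ValueError("need 0 <= L_new <= R2 and R2 > 0")
    return r2 - math.sqrt(r2 * r2 - l_new * l_new)

# Example: an obstacle 40 m ahead on a 500 m-radius curve sits about 1.6 m
# off the vehicle's center axis.
print(round(lateral_offset_on_curve(40.0, 500.0), 2))  # 1.6
```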
US 20220032454 A1 to Yang; Wei et al. (Yang) teaches, inter alia, MACHINE LEARNING CONTROL OF OBJECT HANDOVERS in, for example, the Abstract: "A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL LAWSON GREENE JR, whose telephone number is (571) 272-6876. The examiner can normally be reached MON-THU, 7:00 AM to 5:30 PM (EST). Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hunter Lonsberry, can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL L GREENE/
Primary Examiner, Art Unit 3665
January 24, 2026

Prosecution Timeline

Dec 06, 2024: Application Filed
Jan 24, 2026: Non-Final Rejection under §102, §103, and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601605: ELECTRONIC HORIZON FOR ADAS FUNCTION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12595022: BICYCLE CONTROL SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12595004: FRONT SPOILER ARRANGEMENT FOR A MOTOR VEHICLE, IN PARTICULAR FOR A TRUCK (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589039: VEHICLE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12583719: ANTI-COLLISION SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the five most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 93% (the 76% baseline plus the examiner's 17.1-point interview lift)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 859 resolved cases by this examiner. Grant probability derived from career allow rate.
