Prosecution Insights
Last updated: April 19, 2026
Application No. 18/886,543

VEHICLE CONTROL METHOD AND DEVICE

Non-Final OA §103
Filed: Sep 16, 2024
Examiner: LAROSE, RENEE MARIE
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Chung Ang University Industry Academic Cooperation Foundation
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 79% — above average (475 granted / 599 resolved; +27.3% vs TC avg)
Interview Lift: +8.8% (moderate lift, based on resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline)
Currently Pending: 25
Total Applications: 624 (across all art units)

Statute-Specific Performance

§101: 2.7% (−37.3% vs TC avg)
§103: 59.3% (+19.3% vs TC avg)
§102: 12.6% (−27.4% vs TC avg)
§112: 20.1% (−19.9% vs TC avg)
TC averages are estimates • Based on career data from 599 resolved cases

Office Action

§103
DETAILED CORRESPONDENCE

This action is in response to the filing of the Application on 09/16/2024.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2021/0380099) in view of Nister (US 2023/0341234).

Regarding claim 1, Lee discloses a method for controlling an autonomous vehicle, the method comprising: in response to a determination that a plurality of first driving path information corresponding to first state information is acquired, calculating a first reward corresponding to at least a portion of the first driving path information based on a result of performing a driving simulation according to the first driving path information [see Lee – p0003 – p0005, p0021, p0043 – p0044: path planning and control to account for position uncertainty for autonomous machine applications. Systems and methods are disclosed that generate and select candidate paths for a vehicle using an uncertainty representation for the vehicle; an uncertainty representation can be determined in real time so the position of the trailer of the autonomous vehicle can be used in path planning and selection determinations.
For example, a path generator (as part of a candidate path manager) may generate any number of paths having associated target locations (e.g., determined using any number of path generation techniques), and the uncertainty representations for the autonomous vehicle may be generated for the target locations to represent potential future footprints of the vehicle at the locations. As an example, two or more path candidates may be generated at each time step—e.g., a first path candidate may correspond to the autonomous vehicle staying in the current lane and a second path candidate may correspond to a lane change. These two paths may have been obtained by solving a constraint optimization problem such that each of the paths satisfies the constraints enforced thereon, such as a stochastic distance constraint. A path selector—e.g., of a planning layer—may thus select one of the paths to follow, and this information may be passed to control components of the vehicle for controlling the vehicle according to the path. The method 700, at block B706, includes computing costs for each of the plurality of candidate paths based on enforcing a constraint(s) in view of the uncertainty representations; the safety or collision avoidance consideration may factor into the final determination of a path for the vehicle 800.
This consideration may be used to filter out paths, penalize (e.g., apply or attribute a negative or lower weight value to) paths where collision or possible collision events are predicted between the vehicle 800 and one or more actors, and reward (e.g., apply or attribute a positive or higher weight value to) paths where an absence of a collision or possible collision event is predicted]; in response to a determination that second state information is acquired, generating at least one second driving path information corresponding to the second state information through the driving path generation network, calculating a second reward corresponding to at least a portion of the second driving path information based on a result of performing a driving simulation according to the second driving path information [see Lee – p0003 – p0005, p0021, p0043 – p0044, p0074, Figs. 3, 4A, 4B, 5 and 7 – path planning and control to account for position uncertainty for autonomous machine applications. Systems and methods are disclosed that generate and select candidate paths for a vehicle using an uncertainty representation for the vehicle; an uncertainty representation can be determined in real time so the position of the trailer of the autonomous vehicle can be used in path planning and selection determinations. For example, a path generator (as part of a candidate path manager) may generate any number of paths having associated target locations (e.g., determined using any number of path generation techniques), and the uncertainty representations for the autonomous vehicle may be generated for the target locations to represent potential future footprints of the vehicle at the locations (several paths; a first and a second are taught). As an example, two or more path candidates may be generated at each time step—e.g., a first path candidate may correspond to the autonomous vehicle staying in the current lane and a second path candidate may correspond to a lane change.
These two paths may have been obtained by solving a constraint optimization problem such that each of the paths satisfies the constraints enforced thereon, such as a stochastic distance constraint. A path selector—e.g., of a planning layer—may thus select one of the paths to follow, and this information may be passed to control components of the vehicle for controlling the vehicle according to the path. The method 700, at block B706, includes computing costs for each of the plurality of candidate paths based on enforcing a constraint(s) in view of the uncertainty representations; the safety or collision avoidance consideration may factor into the final determination of a path for the vehicle 800. This consideration may be used to filter out paths, penalize (e.g., apply or attribute a negative or lower weight value to) paths where collision or possible collision events are predicted between the vehicle 800 and one or more actors, and reward (e.g., apply or attribute a positive or higher weight value to) paths where an absence of a collision or possible collision event is predicted]; and in response to a determination that test state information is acquired, generating test driving path information corresponding to the test state information through the trained driving path generation network, and controlling the autonomous vehicle using the test driving path information [see p0074: at block B708, includes selecting a candidate path from the plurality of candidate paths based on the costs. For example, comfort, safety procedure execution analysis, obeying rules of the road, and/or other considerations may be factored in to determine which of the selected paths from the system 100 is a best or most suitable path for the vehicle 800 at a current time step].
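The candidate-path costing and selection scheme the action attributes to Lee (penalize candidates with predicted collisions, reward collision-free ones, then pick the lowest-cost candidate, e.g., keep-lane vs. lane-change) can be sketched roughly as follows. This is an illustrative sketch only; the function names, waypoints, and weight values are hypothetical and do not come from the Lee reference.

```python
def path_cost(waypoints, predicted_collision):
    """Cost for one candidate path: a base term plus a collision penalty.

    The base term (path length in waypoints) stands in for travel time;
    the +100 / -10 weights are arbitrary illustrative values.
    """
    cost = float(len(waypoints))
    if predicted_collision:
        cost += 100.0   # penalize paths with predicted collision events
    else:
        cost -= 10.0    # reward paths where no collision is predicted
    return cost

def select_path(candidates):
    """Pick the lowest-cost candidate, as a path selector in a planning layer might."""
    return min(candidates, key=lambda c: path_cost(c["waypoints"], c["collision"]))

# Two candidates per time step, as in the cited example:
keep_lane   = {"name": "keep_lane",   "waypoints": [(0, 0), (1, 0), (2, 0)], "collision": True}
lane_change = {"name": "lane_change", "waypoints": [(0, 0), (1, 1), (2, 1)], "collision": False}

best = select_path([keep_lane, lane_change])
```

With these illustrative weights, the collision-free lane-change candidate wins.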
Lee does not specifically teach training a driving path generation network based on at least a portion of the first reward, and training the driving path generation network based on at least a portion of the second reward. However, Nister discloses operating a lane planner to generate lane planner output data based on a state and probabilistic action space. The lane planner output data corresponds to lane detection and/or guidance data of a driving system that operates based on a hierarchical drive planning framework associated with the lane planner and other planning and control components. Nister further teaches a connection between a first planning layer (e.g., route planning) and a second planning layer (e.g., lane planning). Drive planning based on equivalent times can be especially beneficial for drive missions where there are multiple candidate routes. Implementation details for making the connection between layers may vary. At a high level, Global Navigation Satellite System ("GNSS") coordinates along candidate routes with similar expected rewards can be used as guiding targets for a next layer of planning [p0027 – p0034, Figs. 1A, 1B]. Nister continues to teach that actions result in deterministic outcomes, edges carry a positive cost (e.g., expected equivalent time spent), and there exists a single target and target reward. Drive planning (e.g., a full planning problem) includes operations that calculate the expected reward starting from any of the nodes. The calculation corresponds to finding the shortest path (with the expected equivalent time spent seen as the distance of each edge) from each node to the target. The expected reward at each node is then the target reward minus the cost of the shortest path from the node to the target. If the expected reward is negative, the shortest path is more expensive than the target reward [see p0043, p0178 – teaching that each node (either first, second, etc.)
is measured for the positive reward, giving a drive plan; the vehicle 700 may include a GPU(s) 720 (e.g., discrete GPU(s), or dGPU(s)) that may be coupled to the SoC(s) 704 via a high-speed interconnect (e.g., NVIDIA's NVLINK). The GPU(s) 720 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based on input (e.g., sensor data) from sensors of the vehicle 700].

It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Lee to include training a driving path generation network based on at least a portion of the first reward, and training the driving path generation network based on at least a portion of the second reward, as suggested and taught by Nister, with a reasonable expectation of success, for the purpose of providing networks designed to select the "best path per reward," which utilize reinforcement learning (RL) and graph-based algorithms to maximize utility, such as minimizing travel time, energy consumption, or risk, while accumulating rewards (e.g., speed bonuses, safety points). Such systems, particularly in autonomous driving, operate by creating a reward function that guides vehicle behavior—like lane keeping or overtaking—to navigate complex environments.

Claim 9 is rejected similarly to claim 1; see above.

Claim(s) 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2021/0380099) in view of Nister (US 2023/0341234), and Trajectory Planning in Frenet Frame via Multi-Objective Optimization, IEEE, 2023 (hereinafter referred to as Frenet).
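The expected-reward calculation attributed to Nister above (expected reward at a node equals the target reward minus the shortest-path cost to the target, with edge costs as expected equivalent times) reduces to a standard shortest-path computation. The sketch below uses Dijkstra's algorithm; the graph, edge costs, target reward, and all names are illustrative assumptions, not taken from the reference.

```python
import heapq

def shortest_path_costs(graph, target):
    """Dijkstra over reversed edges: cost of the cheapest path from each node to target."""
    rev = {}
    for u, edges in graph.items():
        for v, w in edges.items():
            rev.setdefault(v, {})[u] = w
    dist = {target: 0.0}
    heap = [(0.0, target)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in rev.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def expected_rewards(graph, target, target_reward):
    """Expected reward per node = target reward - shortest-path cost to target."""
    return {node: target_reward - cost
            for node, cost in shortest_path_costs(graph, target).items()}

# Edge weights are expected equivalent times (positive costs); values are made up:
graph = {"A": {"B": 2.0, "C": 5.0}, "B": {"C": 1.0}, "C": {}}
rewards = expected_rewards(graph, target="C", target_reward=10.0)
```

A negative expected reward would mean the cheapest route to the target costs more than the target reward, matching the condition described in the cited passage.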
Regarding claim 2, Lee discloses the method of claim 1, but is silent as to wherein calculating the first reward includes: generating the first driving path information corresponding to the first state information based on road information on a Frenet frame; generating first mapping driving path information by mapping the first driving path information to a Cartesian frame; and calculating the first reward corresponding to first driving path information on the Frenet frame that has been mapped to the first mapping driving path information on the Cartesian frame based on a result of performing a driving simulation according to the first mapping driving path information. However, Frenet teaches wherein calculating the first reward includes generating the first driving path information corresponding to the first state information based on road information on a Frenet frame [see Frenet: path planning technology is broadly classified into two categories. The first category is global path planning, which aims to find the optimal or suboptimal path from the starting point to the destination point. The second category is local path planning, which involves obtaining environmental information through sensors in unknown or partially unknown environments, allowing autonomous driving vehicles to obtain a collision-free executable optimal planned path (see page 2, Col. 1). Further disclosing, the optimal trajectory is selected by minimizing a predefined cost function formulated for optimal path planning, taking into account comfort, safety, and road center line deviation. See Figure 4: the framework of the proposed algorithm consists of two main stages, trajectory generation in the Frenet frame and optimal trajectory selection. Figure 4 shows that collisions can occur on the generated trajectories, and that there are limitations on the vehicle's motion and dynamic characteristics.
To enhance the system's response time, trajectories that fail to meet the constraints are eliminated through trajectory checking. The remaining trajectories are then presented as candidate paths for the subsequent module to choose the best path. After the trajectory check, a set of candidate trajectories is generated. However, the number of candidates remains large, and a single trajectory must be chosen to follow. To do so, a cost function is developed that assesses each candidate [see Fig. 4, page 7, Section E, Cost Function]. Frenet discloses, as shown in Figure 3, that the ego vehicle often needs to adjust its driving trajectory due to the presence of other vehicles and obstacles, instead of strictly following the reference line (i.e., the road center line); when in the Cartesian frame, the current state of the ego vehicle can be described (Figure 3: transformation from the Frenet frame to the Cartesian frame). The total loss of each trajectory is calculated, and the trajectory with the minimum total loss is chosen as the optimal trajectory. The cost function is composed of three indicators: comfort, trajectory safety, and trajectory anti-deviation [see Figs. 3–5 and pages 5–8]. In the simulation, a straight road, a curvy road, an intersection scenario and a "U"-shaped road are built in a Python environment, and several static obstacles of different sizes are set up on the roads. The experiments in the paper are divided into two parts: the first part analyzes the impact of different cost functions on trajectory generation [see Section IV, page 8]. Frenet also teaches that the total loss of each trajectory is calculated, and the trajectory with the minimum total loss is chosen as the optimal trajectory. The Examiner interprets this calculation of minimum loss to be a reward, as Frenet teaches reinforcement learning methods employing diverse reward strategies [see page 2, Section A].
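The Frenet-frame workflow the action draws from the Frenet reference (generate a trajectory as lateral offsets along a reference line, map it to Cartesian coordinates, then score it with a cost combining comfort, safety, and anti-deviation terms) might look roughly like the sketch below. The straight reference line, the weights, and the function names are assumptions made for illustration only.

```python
import math

def frenet_to_cartesian(s, d, heading=0.0, origin=(0.0, 0.0)):
    """Map a Frenet point (s = arc length, d = lateral offset) to (x, y).

    Assumes a straight reference line with the given heading; real
    implementations integrate along a curved reference line.
    """
    x0, y0 = origin
    x = x0 + s * math.cos(heading) - d * math.sin(heading)
    y = y0 + s * math.sin(heading) + d * math.cos(heading)
    return x, y

def trajectory_cost(d_samples, obstacle_clearances,
                    w_comfort=1.0, w_safety=10.0, w_deviation=1.0):
    """Total loss of one trajectory; the minimum-loss trajectory is selected."""
    # Comfort: penalize successive lateral changes (a crude jerk proxy).
    comfort = sum(abs(b - a) for a, b in zip(d_samples, d_samples[1:]))
    # Safety: penalize small clearances to obstacles.
    safety = sum(1.0 / max(c, 1e-6) for c in obstacle_clearances)
    # Anti-deviation: penalize distance from the road center line (d = 0).
    deviation = sum(abs(d) for d in d_samples)
    return w_comfort * comfort + w_safety * safety + w_deviation * deviation
```

Under this cost, a trajectory that hugs the center line with generous obstacle clearance scores lower (better) than one that swerves, mirroring the comfort/safety/anti-deviation trade-off described above.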
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Lee to include wherein calculating the first reward includes: generating the first driving path information corresponding to the first state information based on road information on a Frenet frame; generating first mapping driving path information by mapping the first driving path information to a Cartesian frame; and calculating the first reward corresponding to first driving path information on the Frenet frame that has been mapped to the first mapping driving path information on the Cartesian frame based on a result of performing a driving simulation according to the first mapping driving path information, as suggested and taught by Frenet, with a reasonable expectation of success, for the purpose of providing candidate paths free of collisions. Frenet also proposes a method to assess the safety of candidate trajectories based on their distance from obstacles; the evaluation of the safety values of the candidate paths occurs in the candidate trajectory selection stage, with a new cost function to select the optimal trajectory, designed to comprehensively consider comfort and safety.

Claim 10 is rejected similarly to claim 2; see above.

Allowable Subject Matter

Claims 3–8 and 11–16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well.
Applicant should consider the entire prior art as applicable to the limitations of the claims. The applicant is respectfully requested, in preparing the response, to consider fully the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RENEE LAROSE, whose telephone number is (313) 446-4856. The examiner can normally be reached Monday - Friday, 8:30am - 5:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Renee LaRose/
Examiner, Art Unit 3657

Prosecution Timeline

Sep 16, 2024
Application Filed
Feb 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594948
APPARATUS AND METHOD FOR CONTROLLING VEHICLE MESSAGE OUTPUT
2y 5m to grant • Granted Apr 07, 2026
Patent 12583367
Dual Release Actuator for Vehicle Seat and Method for Controlling the Same
2y 5m to grant • Granted Mar 24, 2026
Patent 12582054
METHOD FOR CONTROLLING THE OPENING OF A WORK APPARATUS HAVING PAIRWISE ARRANGED PROCESSING DEVICES FOR VITICULTURE, AND WORK APPARATUS
2y 5m to grant • Granted Mar 24, 2026
Patent 12578353
ROBOTIC SYSTEM AND METHOD FOR PRECISE ORGAN EXCISION TECHNOLOGY
2y 5m to grant • Granted Mar 17, 2026
Patent 12565195
Method And Apparatus For Recording A Travel Trajectory For A Parking Maneuver
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 88% (+8.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 599 resolved cases by this examiner. Grant probability derived from career allow rate.
