Prosecution Insights
Last updated: April 19, 2026
Application No. 18/735,692

AUTOMATED LOW VELOCITY VEHICLE PATH PLANNING

Non-Final OA: §101, §102, §103

Filed: Jun 06, 2024
Examiner: Vincent Feng
Art Unit: 3600
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: GM Global Technology Operations LLC
OA Round: 1 (Non-Final)
Grant Probability: 4% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 1y 1m
With Interview: 5%

Examiner Intelligence

Grants only 4% of cases.

Career Allow Rate: 4% (5 granted / 142 resolved; -48.5% vs TC avg)
Interview Lift: +1.5% (minimal; based on resolved cases with interview)
Avg Prosecution: 1y 1m (fast prosecutor)
Total Applications: 348 across all art units (206 currently pending)

Statute-Specific Performance

§101: 36.1% (-3.9% vs TC avg)
§103: 34.6% (-5.4% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 142 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

Claims 1-20 are currently pending and have been examined in this application. This communication is the first action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 USC § 101

Claims 1, 4-11, and 14-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to either a method or an apparatus, each of which is a statutory category of invention. (Step 1: YES)

The examiner has identified claim 1, which substantially includes all the limitations of claim 11, as the claim that represents the claimed invention for analysis. Independent claim 1 recites the following limitations (bolded text corresponds to the abstract idea):

A method for planning a vehicle path comprising: generating an initial perception data set using one or more vehicle sensors, wherein the initial perception data set includes data defining spatial positions of features extrinsic to a vehicle relative to the vehicle; determining an operational area of a parking operation based on the generated initial perception data set; identifying an initial pose of the vehicle, at least one goal pose of the vehicle, and a set of constraints using the initial perception data and the operational area; iteratively generating sets of path segments using a reinforcement learning algorithm, wherein each completed set of path segments is configured to reposition the vehicle from the initial pose to one of the at least one goal poses, and wherein iteratively generating the sets of path segments includes determining a total path score for each generated set of path segments; selecting a set of path segments in the iteratively generated sets of path segments having a best total path score; generating a set of path points using the path segments; and providing the set of path points to an
automated parking controller operation within the vehicle.

Under its broadest reasonable interpretation, this method is generating a set of path points using a selection of path segments. If the broadest reasonable interpretation of a claim limitation entails performance in the human mind, then it falls within the mental-processes grouping of abstract ideas. Therefore, the claim recites an abstract idea. (Step 2A, Prong 1: YES. The claims are abstract.) For example, a human may identify an initial position of a vehicle and a desired end position of the vehicle, and determine a set of paths to position the vehicle at the desired end position.

This judicial exception is not integrated into a practical application. Limitations that are not indicative of integration into a practical application include: (1) adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).

In particular, the claims recite the additional elements of generating an initial perception data set using one or more vehicle sensors and iteratively generating sets of path segments using a reinforcement learning algorithm. These steps are recited at a high level of generality and, whether considered individually or in combination, have not integrated the judicial exception into a practical application.
Specifically, the steps of generating an initial perception data set using one or more vehicle sensors and iteratively generating sets of path segments using a reinforcement learning algorithm constitute mere data gathering and are insignificant extra-solution activity. There are no additional elements that apply or use the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment. (Step 2A, Prong 2: NO. The additional claimed elements are not integrated into a practical application.)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (an "inventive concept") to the exception. As discussed above with respect to integration into a practical application, the additional elements amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use, and amount to insignificant extra-solution activity. See MPEP 2106.05(g) for more details. Generally linking the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept, rendering the claim patent-ineligible. Furthermore, the limitation "providing the set of path points to an automated parking controller operation within the vehicle" is no more than the judicial exception because, as detailed in Electric Power Group, additional elements used simply to output results do not amount to significantly more than the abstract idea itself. Thus claim 1, and similarly the other independent claims, are not patent-eligible. (Step 2B: NO.
The claims do not provide significantly more.)

The dependent claims further define the abstract idea present in their respective independent claims and hence are abstract for at least the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or that are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Therefore, the dependent claims are directed to an abstract idea. Thus, the aforementioned claims are not patent-eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 10-16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sugiarto (US20240317214).

Claim 1. Sugiarto teaches the following limitations:

A method for planning a vehicle path comprising: generating an initial perception data set using one or more vehicle sensors, wherein the initial perception data set includes data defining spatial positions of features extrinsic to a vehicle relative to the vehicle; (Sugiarto – [0020] In addition, the parking system 114 or another component of the vehicle 102 can use the sensors 112 to obtain an initial pose of the vehicle 102, a goal pose of the vehicle 102 to park in an available space 108 or selected parking space, and an obstacle map for the environment 100 that includes the other vehicles 106 and other obstacles.)
determining an operational area of a parking operation based on the generated initial perception data set; (Sugiarto – [0002] An example system includes a processor that can obtain an initial pose and a goal pose of a host vehicle and an obstacle map for a parking environment. The initial pose provides a source node for use in a parking path algorithm. The goal pose indicates a goal node for use in the parking path algorithm and indicates a parking pose of the host vehicle in a selected parking space.)

identifying an initial pose of the vehicle, at least one goal pose of the vehicle, and a set of constraints using the initial perception data and the operational area; (Sugiarto – [0036] The graph network 300 also includes edges 308 that connect two nodes from among the source node 302, the goal node 304, and the intermediate nodes 306. The edges 308 represent potential maneuvers that the vehicle 102 can perform between two nodes subject to the non-holonomic constraints of vehicle 102)

iteratively generating sets of path segments using a reinforcement learning algorithm, wherein each completed set of path segments is configured to reposition the vehicle from the initial pose to one of the at least one goal poses, and wherein iteratively generating the sets of path segments includes determining a total path score for each generated set of path segments; (Sugiarto – [0016] The parking system can then use a parking path algorithm and the obstacle map to determine first waypoints for a parking path from the source node toward the goal node.
In response to a number of run-time iterations of the parking path algorithm being greater than an iteration threshold, the parking system can select an intermediate source node that is closest to the goal pose; [0036] The parking path selector 118 can assign the weights based on the steering angle required to reach the second node, the proximity of the nodes and the edge to obstacles in the obstacle map, whether a change of direction is required (e.g., from forward driving to reverse), and the change in the steering angle to travel along the edge. The goal of the parking path selector 118 is to find a parking path from the source node 302 to the goal node 304 with the minimum accumulated cost.)

selecting a set of path segments in the iteratively generated sets of path segments having a best total path score; generating a set of path points using the path segments; and providing the set of path points to an automated parking controller operation within the vehicle. (Sugiarto – [0023] The parking path selector 118 can determine a maneuver type (e.g., front-in parking, back-in parking, single-turn maneuver, two-turn maneuver) and the parking path 110 for parking the vehicle 102 in the selected parking space. The parking path selector 118 may use a modified Dijkstra algorithm (e.g., a variant of the Hybrid A* algorithm) to determine the parking path for the vehicle 102 to park in the available space 108. In other implementations, the parking path selector 118 may use another Dijkstra variant, including A*, Anytime A*, D*, or D* Lite. The modified Dijkstra algorithm may be tuned to determine the parking path 110 that is near the most-optimal parking path but in a computationally-efficient manner.)

Claim 2. The method of claim 1, wherein the automated parking controller operation causes the vehicle to move along the set of path points from the initial pose to a position within a predefined range of the one of the at least one goal poses.
(Sugiarto – [0026] For example, the processors 204 can execute the instructions on the CRM 206 to configure the processors 204 to control, based on sensor data, an autonomous or semi-autonomous driving system of the vehicle 102 to cause the vehicle 102 to park in a selected parking space using the parking path 110)

Claim 3. The method of claim 2, further comprising replanning the parking operation as the vehicle moves along the set of path points in response to a new perception data set where the new perception data set varies from the initial perception data set. (Sugiarto – [0016] For example, a parking system can obtain an initial pose (e.g., the source node), a goal pose (e.g., the goal node), and an obstacle map for the parking environment. The parking system can then use a parking path algorithm and the obstacle map to determine first waypoints for a parking path from the source node toward the goal node. In response to a number of run-time iterations of the parking path algorithm being greater than an iteration threshold, the parking system can select an intermediate source node that is closest to the goal pose. The parking path algorithm can then be used to determine second waypoints for a parking path from the intermediate source node to the goal node. The host vehicle can then be controlled to park in the selected parking space using the first waypoints and the second waypoints as the parking path)

Claim 4. The method of claim 1, wherein at least a portion of the completed set of path segments are defined by a sequentially ordered combination of simple actions and complex actions. (Sugiarto – [0023] The parking path selector 118 can determine a maneuver type (e.g., front-in parking, back-in parking, single-turn maneuver, two-turn maneuver) and the parking path 110 for parking the vehicle 102 in the selected parking space.)

Claim 5.
The method of claim 4, wherein each simple action outputs one of an arc and a straight line, wherein the arc is defined by an arc radius and an arc length. (Sugiarto – [0060] The entry turning radius represents the radius of the travel path to enter the available space 108 from the current lateral position of the vehicle 102. The longitudinal distance can represent the distance from the front of the vehicle 102 to the lateral center of available space 108. The goal pose can be defined to place the vehicle 102 in the longitudinal center and the lateral center of the available space 108.)

Claim 6. The method of claim 5, wherein each complex action defines a specific vehicle maneuver and comprises a stored algorithm configured to output a predefined set of path primitives ordered to achieve the specific vehicle maneuver. (Sugiarto – [0023] The parking path selector 118 can determine a maneuver type (e.g., front-in parking, back-in parking, single-turn maneuver, two-turn maneuver) and the parking path 110 for parking the vehicle 102 in the selected parking space. The parking path selector 118 may use a modified Dijkstra algorithm (e.g., a variant of the Hybrid A* algorithm) to determine the parking path for the vehicle 102 to park in the available space 108. In other implementations, the parking path selector 118 may use another Dijkstra variant, including A*, Anytime A*, D*, or D* Lite. The modified Dijkstra algorithm may be tuned to determine the parking path 110 that is near the most-optimal parking path but in a computationally-efficient manner.)

Claim 10. The method of claim 1, wherein the at least one goal pose comprises a set of goal poses included a finished parking position pose and at least one close pose, wherein the close pose is a pose from which the vehicle is able to be maneuvered to a finished parking position using at most three total combined simple actions and complex actions.
(Sugiarto – [0033] In other words, the source node 302 may represent the current pose (e.g., position and heading) of the vehicle 102. The goal node 304 represents the final parking pose of the vehicle 102 in the selected parking space at the end of the parking path. In other words, the goal node 304 can represent the final pose (e.g., position and heading) of the vehicle 102 once it is parked in the selected parking space. The source node 302 can be obtained from a vehicle state estimator of vehicle 102. The goal node 304 can be obtained from the parking space selector 116.; [0034] The intermediate nodes 306 (e.g., intermediate node 306-1, 306-2, 306-3, 306-4, 306-5, and 306-6) represent potential intermediate poses of vehicle 102 along the parking path 110. The parking system 114 or the parking path selector 118 can identify the intermediate nodes 306 as it runs its parking path algorithm using an obstacle map of the parking environment obtained from a perception system of vehicle 102.)

Claims 11-16 and 20. Rejected under the same rationale as claims 1-6 and 10, respectively.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7-9 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sugiarto (US20240317214) in view of He (US20200363813).

Claim 7.
Sugiarto teaches the method of claim 1, but fails to explicitly teach the following limitations: wherein iteratively generating sets of path segments using a reinforcement learning algorithm comprises: selecting a path primitive and estimating an estimated score reward corresponding to the path primitive using an estimation algorithm, responding to the estimated score reward exceeding a predetermined amount by simulating operation of the path primitive in a real world model and determining a simulated reward based on an output of the simulation, and placing the selected path primitive as a next path primitive in the set of path primitives and adding the simulated reward to a total reward of the set of path primitives.

However, He teaches: The method of claim 1, wherein iteratively generating sets of path segments using a reinforcement learning algorithm comprises: selecting a path primitive and estimating an estimated score reward corresponding to the path primitive using an estimation algorithm, responding to the estimated score reward exceeding a predetermined amount by simulating operation of the path primitive in a real world model and (He – [0062] The environment model can model a perceived environment of the ADV, vehicle dynamics, vehicle control limits, and a reward grading or scoring metric, such that the environment model can generate an actual reward and a next trajectory state based on an action and a current trajectory state for the ADV. Thus, the RL agent and the environment model can iteratively generate a number of next trajectory states (e.g., an output trajectory) and a number of controls/actions. The scoring metric can include a scoring scheme to evaluate whether the RL agent planned a trajectory with a final trajectory state at the destination spot, whether the trajectory is smooth, whether the trajectory avoids all the perceived obstacles.)
determining a simulated reward based on an output of the simulation, and placing the selected path primitive as a next path primitive in the set of path primitives and adding the simulated reward to a total reward of the set of path primitives. (He – [0079] Environment model 1300 can model a simulated environment of an ADV (e.g., environment model (state) 1109 or environment model (reward) 1111 of FIG. 11) to interact with an RL agent to speed up reinforcement learning; Environment model 1300 may also derive a reward strategy to score different trajectories. The reward strategy can score a trajectory for whether the trajectory reached a final location (e.g., xF, final trajectory state) 1305, whether the acceleration for the trajectory is smooth and the trajectory does not zig-zag, and whether the trajectory avoids all the obstacles.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sugiarto with He in order to plan an open space trajectory for autonomous driving vehicles (ADVs) (He – [0001]).

Claim 8. The method of claim 7, wherein the estimation algorithm is an output of a neural network and wherein the estimated score reward, simulated score reward, selected path primitive and the set of constraints are added to an updated training data set of the neural network. (He – [0083] In one embodiment, the RL agent includes an actor neural network and a critic neural network, and wherein the actor and critic neural networks are deep neural networks. In another embodiment, the actor neural network includes a convolutional neural network.)

Claim 9. The method of claim 8, further comprising retraining the neural network and updating the estimation algorithm using the updated training data set and replacing the estimation algorithm with an updated estimation algorithm determined using the retrained neural network.
(He – [0025] According to a third aspect, a system generates a plurality of driving scenarios to train a RL agent and replays each of the driving scenarios to train the RL agent by: applying a RL algorithm to an initial state of a driving scenario to determine a number of control actions from a number of discretized control/action options for the ADV to advance to a number of trajectory states which are based on a number of discretized trajectory state options, determining a reward prediction by the RL algorithm for each of the controls/actions, determining a judgment score for the trajectory states, and updating the RL agent based on the judgment score)

Claims 17-19. Rejected under the same rationale as claims 7-9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT FENG, whose telephone number is (703) 756-4715. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NAVID MEHDIZADEH, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/VINCENT FENG/
Examiner, Art Unit 3669

/TODD MELTON/
Primary Examiner, Art Unit 3669
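The path-selection scheme the examiner cites from Sugiarto ([0023], [0036]) is a Dijkstra-style search over a graph of vehicle poses, where edge weights encode maneuver cost and the winning parking path has the minimum accumulated cost. The toy sketch below illustrates only that search idea; the node names, edge weights, and the `min_cost_parking_path` helper are invented for illustration and are not taken from Sugiarto or the application.

```python
import heapq
import math

def min_cost_parking_path(edges, source, goal):
    """Dijkstra over a pose graph given as {node: [(neighbor, cost), ...]}.

    Returns (accumulated cost, list of poses from source to goal).
    """
    best = {source: 0.0}   # cheapest known cost to each pose
    prev = {}              # back-pointers for path reconstruction
    frontier = [(0.0, source)]
    visited = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Walk the back-pointers to recover the path points.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return cost, list(reversed(path))
        for nbr, w in edges.get(node, ()):
            new_cost = cost + w
            if new_cost < best.get(nbr, math.inf):
                best[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(frontier, (new_cost, nbr))
    return math.inf, []

# Toy graph: source pose S, goal pose G, intermediate poses A and B.
edges = {
    "S": [("A", 2.0), ("B", 5.0)],
    "A": [("B", 1.0), ("G", 6.0)],
    "B": [("G", 1.5)],
}
cost, path = min_cost_parking_path(edges, "S", "G")
# cost == 4.5 via S -> A -> B -> G
```

The Hybrid A*, Anytime A*, D*, and D* Lite variants Sugiarto lists differ mainly in how they generate neighbor poses and estimate remaining cost; the minimum-accumulated-cost bookkeeping shown here is common to all of them.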

Prosecution Timeline

Jun 06, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §102, §103
Jan 28, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner involving similar technology

Patent 8813663
SEEDING MACHINE WITH SEED DELIVERY SYSTEM
Granted Aug 26, 2014 (2y 5m to grant)

Patent (number unavailable)
Interconnection module of the ornamental electrical molding
Granted

Patent (number unavailable)
SYSTEMS AND METHODS FOR ENTITY SPECIFIC, DATA CAPTURE AND EXCHANGE OVER A NETWORK
Granted

Patent (number unavailable)
Systems and Methods for Performing Workflow
Granted

Patent (number unavailable)
DISTRIBUTED LEDGER PROTOCOL TO INCENTIVIZE TRANSACTIONAL AND NON-TRANSACTIONAL COMMERCE
Granted
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 4%
With Interview: 5% (+1.5%)
Median Time to Grant: 1y 1m
PTA Risk: Low

Based on 142 resolved cases by this examiner. Grant probability is derived from the career allow rate.
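The projection figures follow directly from the examiner's career counts shown above. A quick sketch of the arithmetic, using only numbers from this page (the +1.5% interview lift and the -48.5% vs TC avg delta); the displayed 4% appears to be the raw rate rounded up:

```python
granted, resolved = 5, 142            # career counts from this page

allow_rate = granted / resolved       # career allow rate
with_interview = allow_rate + 0.015   # +1.5% interview lift
tc_avg = allow_rate + 0.485           # implied by the -48.5% vs TC avg delta

print(f"allow rate:     {allow_rate:.1%}")      # 3.5% (displayed as 4%)
print(f"with interview: {with_interview:.1%}")  # 5.0%
print(f"implied TC avg: {tc_avg:.1%}")          # 52.0%
```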
