Prosecution Insights
Last updated: April 19, 2026
Application No. 18/781,277

STATION-TIME SCENE REPRESENTATION FOR MACHINE LEARNING (ML)-BASED PLANNING

Status: Non-Final Office Action (§101, §102, §103)
Filed: Jul 23, 2024
Examiner: YANG, WENYUAN
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 68% — above average (90 granted / 133 resolved; +15.7% vs TC avg)
Interview Lift: +17.7% — strong (allow rate with vs. without interview, among resolved cases with an interview)
Typical Timeline: 3y 0m average prosecution; 33 applications currently pending
Career History: 166 total applications across all art units
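The headline allow rate follows directly from the raw counts quoted above. A quick sanity check (illustrative only; the nearest-whole-percent rounding convention is an assumption):

```python
# Sanity check of the career allow rate quoted above (90 granted out of
# 133 resolved). Inputs are copied from the report; no new data.
granted = 90
resolved = 133

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~67.7%, reported as 68%
assert round(allow_rate * 100) == 68
```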

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 133 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

This Office Action is in response to Applicant's Application filed on 7/23/2024. Claims 1-20 are pending for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/1/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11, 13-18, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis - Step 1

Claims 1-12 are directed to an apparatus configured for trajectory planning for an object (i.e., a machine). Therefore, claims 1-12 are within at least one of the four statutory categories.

101 Analysis - Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: (a) mathematical concepts, (b) certain methods of organizing human activity, and/or (c) mental processes. Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 1 recites: An apparatus configured for trajectory planning for an object, comprising: one or more memories configured to store information corresponding to a station-time (ST) scene representing 1) a displacement of one or more agents over time with respect to a reference point corresponding to a current position of the object and 2) a target location of the object; and one or more processors, coupled to the one or more memories, configured to: obtain the ST scene; input the ST scene into a first machine learning model; output, by the first machine learning model, based on the input ST scene, a first target trajectory for the object to follow to occupy the target location; send the first target trajectory to a second machine learning model; and obtain, from the second machine learning model, a second target trajectory for the object to follow to occupy the target location.

The examiner submits that the foregoing bolded limitations constitute a "mental process" and/or "certain methods of organizing human activity" because, under its broadest reasonable interpretation, the claim covers performance of the limitations by a user or in the human mind. For example, "output ... based on the input ST scene, a first target trajectory for the object to follow to occupy the target location" in the context of this claim encompasses the user mentally determining a trajectory. Similarly, the limitation "obtain ... a second target trajectory for the object to follow to occupy the target location" in the context of this claim encompasses the user mentally determining a trajectory. Accordingly, the claim recites at least one abstract idea.

101 Analysis - Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim as a whole integrates the abstract idea into a practical application.
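For readers tracing the claim language, the recited two-model data flow can be sketched as follows. This is a minimal illustration only, not the applicant's implementation or the examiner's characterization; every class, function, and value here is a hypothetical stand-in.

```python
# Hypothetical sketch of claim 1's data flow: an ST scene is obtained,
# fed to a first ML model, and the resulting trajectory is passed to a
# second model. All names and numbers are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class STScene:
    # Displacement of each agent over time, relative to the object's
    # current position (the reference point), plus the target location.
    agent_displacements: List[List[float]]
    target_location: float  # station coordinate of the target

Trajectory = List[Tuple[int, float]]  # (time step, station) pairs

def plan(scene: STScene,
         first_model: Callable[[STScene], Trajectory],
         second_model: Callable[[Trajectory], Trajectory]) -> Trajectory:
    """Mirror the claimed steps: obtain scene -> model 1 -> model 2."""
    first_target = first_model(scene)           # output of the first ML model
    second_target = second_model(first_target)  # refined second trajectory
    return second_target

# Stub callables standing in for the trained networks:
scene = STScene(agent_displacements=[[0.0, 5.0, 10.0]], target_location=30.0)
coarse = lambda sc: [(t, sc.target_location * t / 2.0) for t in range(3)]
refine = lambda traj: [(t, st + 0.5) for (t, st) in traj]
print(plan(scene, coarse, refine))  # [(0, 0.5), (1, 15.5), (2, 30.5)]
```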
As noted in the 2019 PEG, it must be determined whether there are any additional elements recited in the claim beyond the judicial exception(s), and whether those additional elements integrate the exception into a practical application. In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

An apparatus configured for trajectory planning for an object, comprising: one or more memories configured to store information corresponding to a station-time (ST) scene representing 1) a displacement of one or more agents over time with respect to a reference point corresponding to a current position of the object and 2) a target location of the object; and one or more processors, coupled to the one or more memories, configured to: obtain the ST scene; input the ST scene into a first machine learning model; output, by the first machine learning model, based on the input ST scene, a first target trajectory for the object to follow to occupy the target location; send the first target trajectory to a second machine learning model; and obtain, from the second machine learning model, a second target trajectory for the object to follow to occupy the target location.

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of "one or more memories configured to"; "one or more processors, coupled to the one or more memories, configured to"; "by the first machine learning model"; and "from the second machine learning model", the examiner submits that these limitations are mere instructions to apply the above-noted abstract idea by merely using a computer to perform the process (MPEP § 2106.05).
In particular, the one or more memories are recited at a high level of generality (i.e., as memory performing a generic computer function of storing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The one or more processors are recited at a high level of generality (i.e., as processors performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The machine learning models are recited at a high level of generality (i.e., as computer models performing a generic computer function of processing data and determining a result) such that they amount to no more than mere instructions to apply the exception using a generic computer component.

Regarding the additional limitations of "store information corresponding to a station-time (ST) scene representing 1) a displacement of one or more agents over time with respect to a reference point corresponding to a current position of the object and 2) a target location of the object"; "obtain the ST scene"; "input the ST scene into a first machine learning model"; and "send the first target trajectory to a second machine learning model", the examiner submits that these limitations are mere data gathering in conjunction with a law of nature or abstract idea (MPEP § 2106.05). In particular, "store information", "obtain the ST scene", "input the ST scene", and "send the first target trajectory" indicate pre-solution activity such that they amount to no more than steps of gathering data for use in a claimed process. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually.
For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is no more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis - Step 2B

Regarding Step 2B of the Revised Guidance, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the additional elements of "one or more memories configured to"; "one or more processors, coupled to the one or more memories, configured to"; "by the first machine learning model"; and "from the second machine learning model" amount to nothing more than mere instructions to apply the exception using generic computer components.
Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Furthermore, regarding the additional limitations of "store information corresponding to a station-time (ST) scene representing 1) a displacement of one or more agents over time with respect to a reference point corresponding to a current position of the object and 2) a target location of the object"; "obtain the ST scene"; "input the ST scene into a first machine learning model"; and "send the first target trajectory to a second machine learning model", the examiner submits that these limitations merely add insignificant extra-solution activity to the at least one abstract idea, as previously discussed. Hence the claim is not patent eligible. Therefore, claim 1 is ineligible under 35 U.S.C. 101.

Regarding claim 2, the claim recites a further narrowing limitation, "the ST scene comprises a grid of cells", which is merely insignificant extra-solution activity and fails to integrate the abstract idea into a practical application.

Regarding claim 3, the claim recites a further narrowing limitation, "each of one or more cells of the grid of cells", which is merely insignificant extra-solution activity and fails to integrate the abstract idea into a practical application.

Regarding claim 4, the claim recites a further narrowing limitation, "receive sensor data comprising information about the object and the one or more agents", which is merely insignificant extra-solution activity and fails to integrate the abstract idea into a practical application. The claim also recites "generate, for each cell of the grid of cells, the respective vector of features based on the sensor data", which further narrows the abstract idea and fails to integrate it into a practical application.
Regarding claim 5, the claim recites a further narrowing limitation, "generate the sensor data", which is merely insignificant extra-solution activity and fails to integrate the abstract idea into a practical application.

Regarding claim 6, the claim recites "one or more sensors are integrated into the object", which is a mere instruction to apply the exception using a generic computer component and fails to integrate the abstract idea into a practical application.

Regarding claim 7, the claim recites a further narrowing limitation, "each of one or more second cells of the grid of cells is associated with a default value indicating an absence of features", which is merely insignificant extra-solution activity and fails to integrate the abstract idea into a practical application.

Regarding claim 8, the claim recites further narrowing limitations, "obtain one or more trajectories for the one or more agents; obtain the target location of the object; obtain the current position of the object", which are merely insignificant extra-solution activity and fail to integrate the abstract idea into a practical application. The claim also recites "generate the ST scene", which further narrows the abstract idea and fails to integrate it into a practical application.

Regarding claim 9, the claim recites "generate the ST scene", which further narrows the abstract idea and fails to integrate it into a practical application.

Regarding claim 10, the claim recites "the ST scene is based on at least one environmental occlusion", which further narrows the abstract idea and fails to integrate it into a practical application.

Regarding claim 11, the claim recites "the object comprises a vehicle and the second machine learning model is a planning algorithm", which is a mere instruction to apply the exception using a generic computer component and fails to integrate the abstract idea into a practical application.
As per claims 13-18, each recites a method for performing trajectory planning for an object having limitations similar to those of claims 1, 2, 3, 4, 8, and 11, respectively, and each is therefore rejected on the same basis. As per claim 20, it recites a non-transitory computer-readable medium comprising instructions, which when executed by one or more processors, cause the one or more processors to perform operations for trajectory planning for an object, having limitations similar to those of claim 1, and is therefore rejected on the same basis.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 11-13, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Haynes (US20190025841A1).
Regarding claim 1, Haynes teaches an apparatus configured for trajectory planning for an object, comprising:

one or more memories (Haynes: Para 81 "The vehicle computing system 102 includes one or more processors 112 and a memory 114. The one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause vehicle computing system 102 to perform operations") configured to store information corresponding to a station-time (ST) scene representing 1) a displacement of one or more agents over time with respect to a reference point corresponding to a current position of the object (Haynes: Para 89 "The prediction system 104 can predict the future locations of the objects based at least in part on perception information (e.g., the state data for each object) received from the perception system 103, the map data 126, the sensor data, and/or any other data that describes the past and/or current state of the objects, the autonomous vehicle 10, the surrounding environment, and/or relationship(s) therebetween.") and 2) a target location of the object (Haynes: Para 95 "the scenario generation system 204 can describe where each actor or other object in a scene is attempting to go. More particularly, the scenario generation system 204 can generate, for each object, one or more goals, where each goal corresponds to a set of decisions that the object must make to get somewhere or otherwise achieve a desired location"); and

one or more processors, coupled to the one or more memories (Haynes: Para 81, quoted above), configured to:

obtain the ST scene (Haynes: Para 140 "based on the state data, the feature data, and/or information descriptive of the goal(s) for each object, the prediction system of the autonomous vehicle 602 (e.g., through use of the trajectory development model 312) can generate a predicted trajectory by which each object achieves each goal selected for such object");

input the ST scene into a first machine learning model (Haynes: Para 140, quoted above);

output, by the first machine learning model, based on the input ST scene, a first target trajectory for the object to follow to occupy the target location (Haynes: Para 140, quoted above);

send the first target trajectory to a second machine learning model (Haynes: Para 141 "the scenario development system 206 can further include a trajectory scoring model 314 that generates a score for each predicted trajectory provided by the trajectory development model 312. For example, the trajectory scoring model 314 can be a machine-learned model trained or otherwise configured to receive a trajectory and provide a score indicative of, for example, how realistic or achievable such trajectory is for the object. For example, the trajectory scoring model 314 can be trained on training data that includes trajectories labelled as a valid trajectory (e.g., an observed trajectory) or an invalid trajectory (e.g., a synthesized trajectory)"); and

obtain, from the second machine learning model, a second target trajectory for the object to follow to occupy the target location (Haynes: Para 142 "the score generated by the trajectory scoring model 314 for each predicted trajectory can be compared to a threshold score. In some implementations, each trajectory that is found to be satisfactory (e.g., receives a score higher than the threshold score) can be used (e.g., passed on to the motion planning system), as shown at 318. Alternatively, a certain number of the highest scoring trajectories can be used at 318.").
Regarding claim 11, Haynes teaches the apparatus of claim 1, wherein the object comprises a vehicle (Haynes: Para 6 "the present disclosure is directed to an autonomous vehicle") and the second machine learning model is a planning algorithm for autonomous vehicle decision-making (Haynes: Para 142 "the score generated by the trajectory scoring model 314 for each predicted trajectory can be compared to a threshold score. In some implementations, each trajectory that is found to be satisfactory (e.g., receives a score higher than the threshold score) can be used (e.g., passed on to the motion planning system), as shown at 318. Alternatively, a certain number of the highest scoring trajectories can be used at 318.").

Regarding claim 12, Haynes teaches the apparatus of claim 1, wherein the one or more processors are configured to: obtain a plurality of ST scenes, each ST scene corresponding to a different driving scenario (Haynes: Para 35 "some or all of the machine-learned models included in or employed by the prediction systems described herein can be trained using log data collected during actual operation of autonomous vehicles on travelways (e.g., roadways). For example, the log data can include sensor data and/or state data for various objects observed by an autonomous vehicle (e.g., the perception system of an autonomous vehicle) and also the resulting trajectories or other motion data for each object that occurred subsequent and/or contemporaneous to collection of the sensor data and/or generation of the state data. Thus, the log data can include a large number of real-world examples of object trajectories or motion paired with the data collected and/or generated by the autonomous vehicle (e.g., sensor data, map data, perception data, etc.) contemporaneous to such motion. Training the machine-learned models on such real-world log data can enable the machine-learned models to predict object goals and/or trajectories which better mirror or mimic real-world object behavior and/or score object goals and/or trajectories based on their similarity to or approximation of real-world object behavior"); and train the first machine learning model using the plurality of ST scenes (Haynes: Para 35, quoted above).

As per claims 13, 18, and 19, each recites a method for performing trajectory planning for an object having limitations similar to those of claims 1, 11, and 12, respectively, and each is therefore rejected on the same basis. As per claim 20, it recites a non-transitory computer-readable medium comprising instructions, which when executed by one or more processors, cause the one or more processors to perform operations for trajectory planning for an object, having limitations similar to those of claim 1, and is therefore rejected on the same basis. Haynes teaches a non-transitory computer-readable medium comprising instructions (Haynes: Para 35 "The memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause vehicle computing system 102 to perform operations").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-10 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Haynes (US20190025841A1) in view of Ostafew (US20190369616A1).

In regards to claim 2, Haynes teaches the apparatus of claim 1, yet Haynes does not explicitly teach the ST scene comprises a grid of cells, each cell of the grid of cells corresponding to a discrete displacement from the reference point at a discrete time.
However, in the same field of endeavor, Ostafew teaches the ST scene comprises a grid of cells, each cell of the grid of cells corresponding to a discrete displacement from the reference point at a discrete time (Ostafew: Fig. 10; Para 170 "The drivable area of the AV 1002 is divided into bins. Each bin has a center point, such as a center point 1006. The center points can be equally spaced. For example, the center points can be approximately two meters apart. The left and right boundary of each bin can be related to the heading of the coarse driveline 1004. A right boundary 1018 and a left boundary 1020 illustrate the boundaries of a bin 1022"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Haynes with the feature of the ST scene comprising a grid of cells, each cell corresponding to a discrete displacement from the reference point at a discrete time, as disclosed by Ostafew. One would be motivated to do so for the benefit of being able to "detect static objects and/or predict the trajectories of other nearby dynamic objects to plan a trajectory such that autonomous vehicles can safely traverse the transportation network and avoid such objects" (Ostafew: Para 3).

In regards to claim 3, the combination of Haynes and Ostafew teaches the apparatus of claim 2, and Ostafew further teaches wherein each of one or more cells of the grid of cells is associated with a respective vector of features comprising one or more of: a respective indication of at least one type of occlusion (Ostafew: Fig. 10, Elements 1028 and 1030; Para 172 "On the other hand, a left boundary 1012 of a bin 1026 is not aligned with the drivable area because a cutout 1028 is excluded from the drivable area; and a right boundary 1014 of the bin 1026 is not aligned with the drivable area because a cutout 1030 is excluded from the drivable area").
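The grid structure that claims 2-3 recite (one cell per discrete displacement at a discrete time, each cell carrying a feature vector) can be sketched concretely. This is an illustrative reading only, not the applicant's implementation or Ostafew's bin construction; all sizes and feature meanings are assumptions.

```python
import numpy as np

# Hypothetical dense ST grid: one cell per (discrete time, discrete
# displacement) pair, each cell holding a vector of features.
N_TIME, N_STATION, N_FEATURES = 8, 16, 4   # assumed dimensions
DEFAULT = 0.0   # cf. claim 7's default value indicating an absence of features

grid = np.full((N_TIME, N_STATION, N_FEATURES), DEFAULT)

# Mark a hypothetical agent at displacement bin 5, time step 2, with a
# made-up feature vector (e.g., occupancy, speed, occlusion flag, class id).
grid[2, 5] = [1.0, 3.5, 0.0, 1.0]

# Non-empty cells are exactly those that differ from the default value.
occupied = np.argwhere(grid.any(axis=-1))
print(occupied.tolist())  # [[2, 5]]
```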
The Examiner supplies the same rationale for the combination of references Haynes and Ostafew as in claim 2 above.

In regards to claim 4, the combination of Haynes and Ostafew teaches the apparatus of claim 3, and Haynes further teaches the one or more processors are configured to: receive sensor data comprising information about the object and the one or more agents (Haynes: Para 21 "the perception system can receive sensor data from one or more sensors that are coupled to or otherwise included within the autonomous vehicle. As examples, the one or more sensors can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors. The sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle"), while Ostafew further teaches generate, for each cell of the grid of cells, the respective vector of features based on the sensor data (Ostafew: Figs. 14 and 15, Elements 1420 and 1520; Para 214 "A vehicle 1404 is predicted to be moving from the right shoulder of the road (or from the lane to the right of the lane that includes the AV 1402) into the path of the AV along a path 1420. As such, the vehicle 1404 is initially classified as a lateral constraint. The predicted path of the vehicle 1404 is a path 1420, which is near (e.g., adjacent to) the coarse driveline 1403. As such, the module 532 continues to classify the vehicle 1404 as a lateral constraint"). The Examiner supplies the same rationale for the combination of references Haynes and Ostafew as in claim 2 above.
In regards to claim 5, the combination of Haynes and Ostafew teaches the apparatus of claim 4, and Haynes further teaches one or more sensors, coupled to the one or more processors, wherein the one or more sensors are configured to generate the sensor data comprising at least one of: one or more images, one or more point clouds, one or more coordinates, or one or more velocities (Haynes: Para 21 "the perception system can receive sensor data from one or more sensors that are coupled to or otherwise included within the autonomous vehicle. As examples, the one or more sensors can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors. The sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle"). The Examiner supplies the same rationale for the combination of references Haynes and Ostafew as in claim 2 above.

In regards to claim 6, the combination of Haynes and Ostafew teaches the apparatus of claim 5, and Haynes further teaches the one or more sensors are integrated into the object (Haynes: Para 21 "the perception system can receive sensor data from one or more sensors that are coupled to or otherwise included within the autonomous vehicle").

In regards to claim 7, the combination of Haynes and Ostafew teaches the apparatus of claim 3, and Ostafew further teaches each of one or more second cells of the grid of cells is associated with a default value indicating an absence of features (Ostafew: Fig. 10; Para 170 "The drivable area of the AV 1002 is divided into bins. Each bin has a center point, such as a center point 1006"; i.e., the drivable area of the AV indicating an absence of features). The Examiner supplies the same rationale for the combination of references Haynes and Ostafew as in claim 2 above.
In regards to claim 8, the combination of Haynes and Ostafew teaches The apparatus of claim 2, and Ostafew further teaches wherein to obtain the ST scene, the one or more processors are configured to: obtain one or more trajectories for the one or more agents (Ostafew: Fig. 14 Element 1420; Para 214 "A vehicle 1404 is predicted to be moving from the right shoulder of the road (or from the lane to the right of the lane that includes the AV 1402) into the path of the AV along a path 1420. As such, the vehicle 1404 is initially classified as a lateral constraint. The predicted path of the vehicle 1404 is a path 1420, which is near (e.g., adjacent to) the coarse driveline 1403. As such, the module 532 continues to classify the vehicle 1404 as a lateral constraint"); obtain the target location of the object (Ostafew: Fig. 14 Element 1403 and 1410; Para 213 "In the example 1400 of FIG. 14, an AV 1402 is moving along a coarse driveline 1403. No static objects are found. Accordingly, a left boundary 1417 and a right boundary 1418, which are the computed boundaries of the drivable area adjusting for static objects as described with respect to FIGS. 10-12, coincide with the boundaries of the drivable area"; Para 215 "The module 532 can determine (e.g., predict) the locations of the AV 1402 at different discrete points in time. That is, the module 532 determines locations of arrivals, along the coarse driveline 1403, at different time points. For example, at time t (e.g., in one second), the AV 1402 is predicted to be at a location 1406; at time t+1 (e.g., in two seconds), the AV 1402 is predicted to be at a location 1408; and at time t+2 (e.g., in three seconds), the AV 1402 is predicted to be at a location 1410"); obtain the current position of the object (Ostafew: Fig. 14 Element 1403; Para 213 "In the example 1400 of FIG. 14, an AV 1402 is moving along a coarse driveline 1403. No static objects are found. Accordingly, a left boundary 1417 and a right boundary 1418, which are the computed boundaries of the drivable area adjusting for static objects as described with respect to FIGS. 10-12, coincide with the boundaries of the drivable area"; Para 215 "The module 532 can determine (e.g., predict) the locations of the AV 1402 at different discrete points in time. That is, the module 532 determines locations of arrivals, along the coarse driveline 1403, at different time points. For example, at time t (e.g., in one second), the AV 1402 is predicted to be at a location 1406; at time t+1 (e.g., in two seconds), the AV 1402 is predicted to be at a location 1408; and at time t+2 (e.g., in three seconds), the AV 1402 is predicted to be at a location 1410"); and generate the ST scene based on the one or more trajectories for the one or more agents, the target location of the object, and the current position of the object (Ostafew: Fig. 14 and 15). The Examiner supplies the same rationale for the combination of references Haynes and Ostafew as in Claim 2 above.

In regards to claim 9, the combination of Haynes and Ostafew teaches The apparatus of claim 8, and Ostafew further teaches wherein to generate the ST scene, the one or more processors are configured to: compress the ST scene by removing one or more empty cells (Ostafew: Para 153 "At operation 830, the process 800 adjusts the drivable area for static objects. That is, the process 800 removes (e.g., cuts out, etc.) from the drivable area those portions of the drivable area where static objects are located. This is so because the AV is to be controlled to navigate (e.g., drive) around the static objects. A view 940 of FIG. 9 illustrates cutting out a portion of the drivable area. To avoid the static vehicle 914, the process 800 cuts out a cutout 942 of the drivable area 932. The size of the cut-out area can be determined based on an estimate of the size of the static object. The size of the cut-out area can include a clearance area so that the AV does not drive too close to the static object"). The Examiner supplies the same rationale for the combination of references Haynes and Ostafew as in Claim 2 above.

In regards to claim 10, the combination of Haynes and Ostafew teaches The apparatus of claim 8, and Ostafew further teaches wherein the ST scene is based on at least one environmental occlusion (Ostafew: Fig. 3 Element 320; Para 88 "The situation 320 is another situation where the AV 302 detects another static object. The detected static object is a pothole 322. The AV 302 can plan a trajectory 324 such that the AV 302 drives over the pothole 322 in a way that none of the tires of the AV 302 drive into the pothole 322"). The Examiner supplies the same rationale for the combination of references Haynes and Ostafew as in Claim 2 above.

As per claim 14, it recites A method for performing trajectory planning for an object having limitations similar to those of claim 2 and therefore is rejected on the same basis. As per claim 15, it recites A method for performing trajectory planning for an object having limitations similar to those of claim 3 and therefore is rejected on the same basis. As per claim 16, it recites A method for performing trajectory planning for an object having limitations similar to those of claim 4 and therefore is rejected on the same basis. As per claim 17, it recites A method for performing trajectory planning for an object having limitations similar to those of claim 8 and therefore is rejected on the same basis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WENYUAN YANG whose telephone number is (571) 272-5455. The examiner can normally be reached Monday - Thursday 9:00AM-5:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hitesh Patel, can be reached at (571) 270-5442. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/W.Y./Examiner, Art Unit 3667
/Hitesh Patel/Supervisory Patent Examiner, Art Unit 3667
1/26/26
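For orientation, the rejected claims recite a station-time (ST) scene built from a grid of cells, where each cell carries a vector of features derived from sensor data (claim 4), empty cells carry a default value indicating an absence of features (claim 7), and the scene can be compressed by removing empty cells (claim 9). A minimal Python sketch of such a data structure follows; every name and the feature layout are hypothetical illustrations, not taken from the application or the cited references:

```python
# Hypothetical sketch of the claimed station-time (ST) scene: a grid of
# cells, each holding a vector of features from sensor data (claim 4),
# a default value marking an absence of features (claim 7), and
# compression that removes empty cells (claim 9). Illustrative only.

EMPTY = None  # default value indicating an absence of features

def make_st_scene(n_station, n_time):
    """Grid of cells indexed by (station, time), all initially empty."""
    return {(s, t): EMPTY for s in range(n_station) for t in range(n_time)}

def set_features(scene, station, time, features):
    """Associate a feature vector (e.g., occupancy, speed) with one cell."""
    scene[(station, time)] = features

def compress(scene):
    """Compress the ST scene by removing empty cells."""
    return {cell: f for cell, f in scene.items() if f is not EMPTY}

scene = make_st_scene(n_station=4, n_time=3)
set_features(scene, station=1, time=0, features=[1.0, 12.0])
set_features(scene, station=2, time=1, features=[1.0, 3.0])

print(len(scene))            # 12 cells before compression
print(len(compress(scene)))  # 2 non-empty cells after compression
```

The dictionary-of-cells layout is one plausible reading of the claim language; a dense array with a sentinel value would read on the same limitations.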

Prosecution Timeline

Jul 23, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600239
DRIVE APPARATUS AND ELECTRIC VEHICLE
2y 5m to grant · Granted Apr 14, 2026
Patent 12592106
Systems and Methods for Vehicle Tuning and Calibration
2y 5m to grant · Granted Mar 31, 2026
Patent 12576728
METHOD TO CONTROL AN ELECTRIC DRIVE VEHICLE
2y 5m to grant · Granted Mar 17, 2026
Patent 12570157
VEHICLE SYSTEM
2y 5m to grant · Granted Mar 10, 2026
Patent 12548382
METHOD AND COMPUTER PROGRAM FOR RECEIVING, MANAGING AND OUTPUTTING USER-RELATED DATA FILES OF DIFFERENT DATA TYPES ON A USER-INTERFACE OF A DEVICE AND A DEVICE FOR STORAGE AND OPERATION OF THE COMPUTER PROGRAM
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
68%
Grant Probability
85%
With Interview (+17.7%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 133 resolved cases by this examiner. Grant probability derived from career allow rate.
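The projection figures above follow from simple arithmetic on the examiner's career data: 90 grants out of 133 resolved cases gives the 68% grant probability, and adding the observed 17.7-point interview lift yields roughly 85%. A sketch of that arithmetic; the additive blending of the interview lift is an assumption for illustration, not necessarily how the tool computes its figure:

```python
# Reproducing the headline projections from the examiner's career data.
# Additive interview-lift blending is assumed for illustration only.

granted = 90    # grants among resolved cases
resolved = 133  # total resolved cases

allow_rate_pct = 100 * granted / resolved
print(round(allow_rate_pct))  # 68 -> "Grant Probability"

interview_lift_pts = 17.7     # observed lift, in percentage points
print(round(allow_rate_pct + interview_lift_pts))  # 85 -> "With Interview"
```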
