Prosecution Insights
Last updated: April 19, 2026
Application No. 18/141,723

APPARATUS FOR PREDICTING A SPEED OF A VEHICLE AND A METHOD THEREOF

Final Rejection (§101, §103)
Filed: May 01, 2023
Examiner: AWORUNSE, OLUWABUSAYO ADEBANJO
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 0m
Grant Probability with Interview: 0%

Examiner Intelligence

Grants only 0% of cases.

Career Allow Rate: 0% (0 granted / 2 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline; 44 currently pending)
Total Applications: 46 (career history, across all art units)

Statute-Specific Performance

§101: 23.5% (-16.5% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 2 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–18 are rejected under 35 U.S.C. § 101 as being directed to a judicial exception (an abstract idea) without reciting additional elements that amount to significantly more than the exception itself.

Step 1: Statutory Category

Independent claim 1 is directed to an apparatus (machine) and independent claim 11 is directed to a method (process). Each thus falls within a statutory category. See 35 U.S.C. § 101; MPEP § 2106.

Step 2A, Prong One: The Claims Recite an Abstract Idea

When viewed as a whole, the claims recite steps and structures for collecting data and performing mathematical analysis to predict a future value (vehicle speed): receiving and using past driving information and distance information; extracting features, generating queries for target time points, and determining forward information; and predicting speed at each target time point based on those relationships. These operations recite mathematical relationships/calculations and data analysis for prediction, which fall within the abstract-idea groupings identified in MPEP § 2106.04(a)(2) (e.g., mathematical concepts; certain mental processes). Claims that are directed to processing and analyzing information using mathematical techniques to forecast outcomes have been found abstract. See, e.g., SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161 (Fed. Cir. 2018) (data modeling/forecasting ineligible); Electric Power Grp., LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016) (collecting/analyzing/displaying information ineligible).
Accordingly, independent claims 1 and 11, and claims 2–10 and 12–18 which depend therefrom, recite a judicial exception.

Step 2A, Prong Two: The Claims Do Not Integrate the Exception Into a Practical Application

The additional elements in the claims do not integrate the abstract idea into a practical application. MPEP § 2106.04(d); § 2106.05(a)–(c), (e)–(h).

Generic computer elements: Claim 1 recites “storage” and a “controller” that perform their ordinary information-processing functions (storing, receiving, processing, outputting data). Such generic computing elements implementing an abstract idea on a computer do not integrate the exception into a practical application. Alice Corp. v. CLS Bank Int’l, 573 U.S. 208, 223–24 (2014); Customedia Techs., LLC v. Dish Network, 951 F.3d 1359, 1365–66 (Fed. Cir. 2020).

Field-of-use/data-source limitation: The claims specify that “the distance to the vehicle is provided by an autonomous driving system while the vehicle is traveling.” This merely identifies a source and context for the data used in the abstract calculations and constitutes data gathering and a field-of-use restriction. Limiting the abstract processing to a particular environment (autonomous driving) or dataset (ADS-provided distance) does not meaningfully limit the claim. Alice, 573 U.S. at 223; Electric Power, 830 F.3d at 1355–56; Affinity Labs v. Amazon, 838 F.3d 1266, 1269–71 (Fed. Cir. 2016).

No improvement to computer or other technology: The claims do not recite any specific improvement to the functioning of the computer, controller, or ADS. In contrast to eligible claims that improve a computer’s operation or provide a specific technological solution (e.g., Enfish, LLC v. Microsoft, 822 F.3d 1327 (Fed. Cir. 2016); McRO, Inc. v. Bandai, 837 F.3d 1299 (Fed. Cir. 2016); DDR Holdings, LLC v. Hotels.com, 773 F.3d 1245 (Fed. Cir. 2014)), the instant claims stop at predicting a value.
They do not recite controlling any actuator (e.g., applying brake pressure, adjusting torque, or commanding steering) or otherwise changing ADS operation based on the prediction. Absent such active control or other technical improvement, the recited use remains an abstract analytical result, not a practical application.

No transformation; insignificant extra-solution activity: The claims do not effect a transformation of an article to a different state or thing. See MPEP § 2106.05(c). The recited obtaining of data (including distance) and outputting of a predicted speed are insignificant extra-solution activity. MPEP § 2106.05(g); Electric Power, 830 F.3d at 1355. Because none of the additional elements applies, relies on, or uses the abstract idea in a manner that imposes a meaningful limit on the claim scope, the claims are directed to the abstract idea and fail Step 2A, Prong Two.

Step 2B: No “Inventive Concept”

The claim elements, individually and in combination, do not amount to “significantly more” than the abstract idea. Alice, 573 U.S. at 217–18; MPEP § 2106.05(d), (f), (g). The recited storage and controller are generic computing components performing ordinary functions (storing, processing, and outputting data). Implementing the abstract idea on such generic components does not supply an inventive concept. Alice, 573 U.S. at 223–24. The limitations directing that distance be provided by an ADS and that the speed be predicted “dynamically” are conventional data-gathering and timing characteristics of computer-implemented analytics in this field and, even when considered as an ordered combination, merely confine the abstract processing to a particular technological environment without improving it. See Electric Power, 830 F.3d at 1355–56; BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1290–91 (Fed. Cir. 2018).
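For orientation only, the sequence the Office Action characterizes as a generic data-analytics workflow (extract features, generate a query per target time point, determine forward information, predict speed) can be sketched in a few lines. Every function name, array shape, and number below is a hypothetical illustration, not the applicant's disclosed implementation or any cited reference's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(past_driving):
    # Stand-in for the claimed feature extraction from past driving information.
    return past_driving.mean(axis=0)

def generate_queries(features, num_time_points, dim):
    # One query vector per target time point, conditioned on the features.
    offsets = rng.standard_normal((num_time_points, dim))
    return features + offsets

def determine_forward_info(queries, forward_candidates):
    # Select per-time-point forward information by query/candidate similarity.
    scores = queries @ forward_candidates.T            # (T, N)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # softmax over candidates
    return weights @ forward_candidates                # (T, dim)

def predict_speed(queries, forward_info):
    # Toy regression head: one predicted speed per target time point.
    x = np.concatenate([queries, forward_info], axis=1)
    w = np.ones(x.shape[1]) / x.shape[1]
    return x @ w

past = rng.standard_normal((20, 8))        # 20 samples of 8 driving signals
candidates = rng.standard_normal((5, 8))   # forward info at 5 distances (e.g., ADS-sourced)

feats = extract_features(past)
queries = generate_queries(feats, num_time_points=4, dim=8)
fwd = determine_forward_info(queries, candidates)
speeds = predict_speed(queries, fwd)
print(speeds.shape)  # (4,) — one predicted speed per target time point
```

Nothing in this sketch controls an actuator; it stops at producing numbers, which is exactly the distinction the rejection draws.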
The dependent claims reciting convolutional neural networks, positional encoding, attention models, and a second CNN (e.g., claims 5–9, 16–17) merely specify particular mathematical techniques for carrying out the abstract data analysis and prediction. Adding further abstract mathematical tools, without a specific claimed improvement to computer functionality or to another technology, does not provide an inventive concept. See SAP, 898 F.3d at 1167–70; Electric Power, 830 F.3d at 1354–56. Even taken together, these limitations do not change the character of the claim from abstract analysis to a technological improvement, because no downstream control or other concrete technological effect is recited.

Considering the elements as an ordered combination likewise does not add significantly more. The claimed sequence—extract features → generate queries → determine forward information (including distance) → predict speed—is a generic data-analytics workflow implemented on conventional computing components and limited to a particular environment (ADS-sourced data). See Alice, 573 U.S. at 223–24; Electric Power, 830 F.3d at 1354–56.

Examiner Conclusion

Because claims 1–18 are directed to an abstract idea (mathematical relationships/calculations for prediction and certain mental processes) and do not recite additional elements that integrate the exception into a practical application or amount to significantly more than the exception itself, claims 1–18 are ineligible under 35 U.S.C. § 101.

Claim-Specific Notes

Claim 1 (apparatus): Recites generic storage and a controller configured to perform abstract data analysis/prediction, with distance merely specified as being provided by an ADS while traveling; no control of vehicle actuators or improvement to ADS is recited.

Claim 2: Further recites computing a speed-change amount and summing with current speed.
This is an additional mathematical post-processing step (Δv accumulation) that does not integrate the exception or supply an inventive concept.

Claim 3 / Claim 13: Listing specific types of input data (distance to a front vehicle, relative speed, steering angle, APS/BPS, etc.) constitutes data-source specification and does not integrate the abstract idea or add significantly more.

Claim 4 / Claim 14: Listing roadway/environmental inputs (traffic lights, crosswalks, speed bumps, speed cameras) likewise specifies data sources and context only.

Claim 5 / Claim 15: Reciting a CNN for feature extraction recites a mathematical tool for the abstract analysis; no improvement to computer functionality or transformation to a practical application is claimed.

Claim 6 / Claim 16: Reciting positional encoding to generate queries for each target time point is a mathematical technique within the abstract processing; it does not integrate the exception or add significantly more.

Claim 7: Performing positional encoding on forward information to index it by distance is likewise a mathematical organization of data within the abstract idea.

Claim 8: Feeding the forward information and distance into an attention model to compute attention values continues the abstract mathematical analysis.

Claim 9 / Claim 17: Using a second CNN to determine per-time-point forward information from attention values remains a mathematical implementation of the abstract idea.

Claim 10 / Claim 18: Selecting the forward information with greatest influence (via weights/attention) identifies the most relevant features mathematically; no downstream technological action or improvement is recited.

Claim 11 (method): Mirrors claim 1 and is ineligible for the same reasons; specifying that the query corresponds to distance and each target time point is a further mathematical characterization of the inputs and does not integrate the exception or add significantly more.
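To make concrete the kind of "mathematical technique" the notes on claims 6, 7, and 16 refer to, here is the standard sinusoidal positional encoding from the Transformer literature. This is one common form only; the application's specific encoding is not reproduced in this record, so the formula and dimensions below are an assumed illustration.

```python
import numpy as np

def sinusoidal_encoding(positions, dim):
    """Standard sinusoidal positional encoding:
    PE[p, 2i]   = sin(p / 10000**(2i/dim))
    PE[p, 2i+1] = cos(p / 10000**(2i/dim))
    `positions` could index target time points (claim 6) or forward
    distances (claim 7) — the math is the same either way."""
    positions = np.asarray(positions, dtype=float)[:, None]   # (P, 1)
    i = np.arange(0, dim, 2, dtype=float)                     # even channel indices
    angles = positions / (10000.0 ** (i / dim))               # (P, dim/2)
    enc = np.zeros((positions.shape[0], dim))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

pe = sinusoidal_encoding(positions=[0, 1, 2, 3], dim=8)
print(pe.shape)   # (4, 8)
print(pe[0, 0])   # sin(0) = 0.0
```

The output is purely a re-description of the index values in trigonometric form, which is why the Office Action treats positional encoding as a mathematical organization of data rather than a technological improvement.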
Applicant Guidance (Not a Requirement)

If applicant wishes to pursue eligibility, amendments that recite a concrete control action that changes vehicle operation (e.g., applying a particular brake-pressure or torque profile based on the predicted per-time-point speed subject to specified safety constraints) or that claim a specific improvement to the computing/ML technology itself (e.g., a defined architecture/training regimen providing a technological improvement in latency/robustness on ADS sensor data) may be considered under Step 2A, Prong Two as an improvement to another technology or, alternatively, under Step 2B as supplying an inventive concept, provided such subject matter is supported by the specification. See MPEP § 2106.05(a), (e).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1–18 are rejected under 35 U.S.C. 103 as being unpatentable over Kristinsson et al. (US 20150344036 A1), hereinafter Kristinsson, in view of Bansal et al. (US 20210078594 A1), hereinafter Bansal, and further in view of Palanisamy et al. (US 10940863 B2), hereinafter Palanisamy.

Regarding Claim 1, Kristinsson discloses An apparatus (see at least Abstract: A vehicle includes a powertrain...and a controller programmed to operate the powertrain according to a predicted vehicle speed profile) for predicting a speed (see at least Fig. 2: Predicted Speed, 226) of a vehicle (see at least Abstract: A vehicle includes a powertrain), the apparatus (see at least Abstract: A vehicle includes a powertrain... and a controller programmed to operate the powertrain according to a predicted vehicle speed profile) comprising: storage (see at least Fig.
2, [0032]: Historical Driving Data 210 block …; presence of a dedicated Historical Driving Data block implies a physical memory resource that holds that dataset) configured to store past driving information (see at least [0032]: Historical driving data is also input at step 210 and is reflective of driving behavior of the particular driver) of the vehicle (see at least Abstract: A vehicle includes a powertrain); and a controller (see at least Abstract: A controller programmed to operate the powertrain according to a predicted vehicle speed profile) configured to: extract feature information (see at least Fig. 8A: Neural network is assigned to generate predicted speed profiles based on data corresponding to historical driving patterns) about a current state (see at least Fig. 3: Current speed and acceleration values are sampled… to form speed profiles 306 and acceleration profiles 308.; Sampling instantaneous speed/acceleration supplies real-time dynamic data, i.e., vehicle’s current state) of the vehicle (see at least Abstract: A vehicle includes a powertrain) from the past driving information (see at least [0032]: Historical driving data is also input at step 210 and is reflective of driving behavior of the particular driver) of the vehicle (see at least Abstract: A vehicle includes a powertrain), determine forward information (see at least Fig. 2: Map Geometry, Road Curvature, Traffic Pattern, Speed Limits,  204-208; These map-derived inputs constitute forward-looking environmental information the controller must determine before prediction), and predict the speed of the vehicle (see at least Fig. 2: Predicted Speed, 226). 
Kristinsson does not explicitly disclose a controller configured to: generate a query corresponding to each target time point based on the feature information, determine forward information corresponding to a distance to the vehicle and each target time point by using each query, wherein the distance to the vehicle is provided by an autonomous driving system while the vehicle is traveling, and dynamically predict the speed of the vehicle at each target time point based on the query corresponding to each target time point and the forward information corresponding to the distance to the vehicle and each target time point.

However, Bansal, in the same field of endeavor, discloses a controller configured to: generate a query (see at least Fig. 3, [0052]: generate a respective attention weight for each…(step 306)…generate bottlenecked representation…(step 310); The Examiner notes that an attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. A PHOSITA would recognize that Bansal necessarily forms query vectors internally and that its overall mechanism performs the same function that the claim recites) corresponding to each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices. Kristinsson’s i = 1…n are spatial way-points; A PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control) based on the feature information (see at least Fig.
3: Step 304→306 pipeline uses encoder-generated feature data to compute queries and weights; Because attention weights arise from encoder features, they are inherently based on the feature information), determine forward information corresponding to each target time point (see at least Fig. 4: heat map, trajectory output enumerates attended features per future time; Attention weights select relevant forward context for every distinct future moment, giving one-to-one correspondence) by using each query (see at least [0055]: The system multiplies each feature representation by its attention weight; Processing pipeline shows query-derived weights driving selection of forward features—exactly “using each query”), and dynamically predict the speed of the vehicle at each target time point (see at least [0058]: “Each future agent pose … corresponding to a different time along the future trajectory”; Rationale: A PHOSITA would recognize that Kristinsson’s ongoing speed pattern recognition establishes predictive updating based on deviations over time, while Bansal’s explicit generation of agent poses at multiple future times provides discrete, temporally indexed predictions. Combining these teachings predictably yields a system that not only updates speed predictions dynamically during travel but also outputs speed at successive future time points. This integration is routine in vehicle control: translating predicted positions at specific future times into speed values (Δs/Δt). Therefore, Kristinsson and Bansal clearly disclose and render obvious the limitation of dynamically predicting the speed of the vehicle) based on the query (see at least Fig. 5: The motion network uses the bottlenecked representation (output of the attention weights) to generate the trajectory; Shows final prediction conditioned on attention weight-driven representation, fulfilling “based on the query”) corresponding to each target time point (see at least Fig.
4: heat map, trajectory output enumerates attended features per future time; Attention weights select relevant forward context for every distinct future moment, giving one-to-one correspondence; Provides explicit temporal alignment between query-conditioned computation and individual future horizons) and the forward information corresponding to each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; The process “Generate planned driving trajectory” iterates over “multiple future times, a point that corresponds to a location in the environment.” Ensures query-based and forward-information-based elements align with every discrete future timestep, satisfying the final sub-element; Kristinsson concedes its NN output “often … does not have a meaningful analytic expression”, highlighting opacity; Bansal reports “interpretable heat-maps … with no loss of accuracy”. Thus, combining them predictably yields transparent speed prediction. A PHOSITA would find it obvious to replace Kristinsson’s NN with Bansal’s encoder-attention bottleneck, thereby spotlighting critical scene features, reducing inference load, yielding sparse, inspectable attention maps, and preserving—often improving—prediction accuracy using routine network-modification practice skills).

Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson and Bansal before them, to modify Kristinsson’s vehicle speed profile prediction system by incorporating Bansal’s encoder–attention bottleneck trajectory planning, in order to enhance transparency, provide interpretable heat-maps, and generate temporally indexed predictions at successive future time points, thereby achieving predictable improvements in speed prediction accuracy and interpretability using routine neural network modification techniques that a PHOSITA would have found straightforward.
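The attention function the Examiner relies on above (an output computed as a weighted sum of values, with weights from a compatibility function of the query and each key) corresponds to standard scaled dot-product attention. The sketch below is a generic illustration of that function with hypothetical dimensions and random data; it is not Bansal's actual network.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Compatibility of each query with each key, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Softmax over keys -> attention weights (each row sums to 1).
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Output is the weight-blended sum of the value vectors.
    return w @ V, w

rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 8))   # one query per target time point
K = rng.standard_normal((5, 8))   # keys for 5 items of forward information
V = rng.standard_normal((5, 8))   # corresponding value vectors

out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)                              # (4, 8)
print(np.allclose(weights.sum(axis=1), 1.0))  # True
```

Each row of `weights` is also what makes per-time-point feature selection inspectable (the "heat-map" property the rationale cites): the largest weight in a row identifies the forward-information item with greatest influence, as in claims 10 and 18.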
However, Kristinsson and Bansal do not explicitly disclose the forward information corresponding to a distance to the vehicle, wherein the distance to the vehicle is provided by an autonomous driving system while the vehicle is traveling; dynamically predict the speed of the vehicle based on the forward information corresponding to the distance to the vehicle.

Palanisamy further discloses the forward information corresponding to a distance to the vehicle (see at least Col. 27, ll. 43: “Here x is the distance to the front vehicle in the same lane”; Rationale: Kristinsson outputs speed versus distance curves, while Palanisamy explicitly models vehicle-to-vehicle distance, together covering forward distance information), wherein the distance to the vehicle is provided by an autonomous driving system while the vehicle is traveling (see at least Col. 9, ll. 8-14 and Col. 8, ll. 29: “In various embodiments, the vehicle 10 is an autonomous vehicle and an autonomous driving system (ADS) is incorporated”…“The sensor system 28 includes … radars, lidars, optical cameras …”; Rationale: A PHOSITA would understand that Palanisamy’s autonomous driving system (ADS), which incorporates sensors such as radars, lidars, and optical cameras, inherently generates real-time distance measurements to surrounding objects and vehicles during operation. Since ADS architectures rely on continuous sensor fusion while the vehicle is traveling, these systems necessarily provide accurate distance information to the controller as part of standard autonomous navigation and collision-avoidance functions. Therefore, Palanisamy explicitly discloses that an ADS supplies the claimed “distance to the vehicle” in real time, meeting the limitation of distance provided by an autonomous driving system while the vehicle is traveling); dynamically predict the speed of the vehicle based on the forward information corresponding to the distance to the vehicle (see at least Col. 27, ll.
37-50: “The variable v represents speed of the vehicle. The vehicle is encouraged to have larger speed, but not to exceed 35 meters/second…Here x is the distance to the front vehicle in the same lane”; Rationale: Palanisamy clearly ties speed (v) to distance (x) in the same lane, with reinforcement signals encouraging overtaking when distance is small and regulating speed accordingly. This represents dynamic adjustment of vehicle speed predictions/decisions based on forward distance information. Palanisamy does disclose dynamically predicting (or at a minimum dynamically adjusting/controlling) the speed of the vehicle based on forward information corresponding to the distance to the vehicle. A PHOSITA would readily interpret Palanisamy’s reward-driven DRL system as dynamically predicting/controlling speed using real-time distance data).

A PHOSITA would recognize that Kristinsson provides the speed prediction framework using past and forward data; Bansal contributes query-conditioned, per-time-point predictions via attention; and Palanisamy supplies real-time distance from ADS sensors. Combining these known elements predictably integrates ADS distance sensing with query-based prediction to enhance accuracy, temporal precision, and interpretability, an obvious application of established techniques under KSR v. Teleflex and In re Kubin.
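The combination rationale leans on two pieces of routine arithmetic: converting predicted positions at successive future times into speeds (the Δs/Δt conversion cited for the Bansal mapping) and, for claim 2, summing per-step speed changes onto the current speed. Both can be shown in a few lines; all numbers are hypothetical.

```python
import numpy as np

dt = 0.5  # seconds between target time points (hypothetical)

# Predicted cumulative distance along the path at successive future times (m).
positions = np.array([0.0, 7.0, 15.0, 24.0, 34.0])

# Speeds from successive positions: delta-s / delta-t.
speeds = np.diff(positions) / dt
print(speeds)  # [14. 16. 18. 20.]

# Claim 2 framing: future speed = current speed + accumulated speed changes.
current_speed = speeds[0]               # 14 m/s
accel = np.array([4.0, 4.0, 4.0])       # predicted acceleration per step (m/s^2)
dv = accel * dt                         # per-step speed change (delta-v)
future_speeds = current_speed + np.cumsum(dv)
print(future_speeds)  # [16. 18. 20.]
```

The two paths agree (differentiating positions and integrating acceleration from the current speed yield the same series), which is why the Office Action treats the conversion as an interchangeable, routine step rather than a point of novelty.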
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to modify Kristinsson’s vehicle speed prediction system by incorporating Bansal’s encoder–attention bottleneck trajectory planning and Palanisamy’s ADS-based distance sensing, in order to dynamically generate temporally indexed speed predictions that also account for real-time vehicle-to-vehicle distance data provided by an autonomous driving system, thereby achieving predictable improvements in accuracy, interpretability, and responsiveness using routine integration of well-established ADS sensor fusion and neural network control techniques that a PHOSITA would have found straightforward.

Regarding Claim 2, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 1. Kristinsson further discloses wherein the controller (see at least Abstract: A controller programmed to operate the powertrain according to a predicted vehicle speed profile) is configured to: predict a speed change amount of the vehicle (see at least Fig. 3: acceleration profiles 308; The acceleration traces in Fig. 3 are the instantaneous delta speed (dv/dt) the network outputs for every location, i.e., the required speed change amount. Profiles 306/308 are measured and predicted for the subject vehicle. Predicting speed changes (e.g., acceleration * time) is a standard approach in vehicle dynamics and time series forecasting. A PHOSITA would recognize that neural networks can be trained to output speed changes instead of absolute speeds, a predictable variation) and the forward information (see at least Fig. 2: Map Geometry, Road Curvature, Traffic Pattern, Speed Limits, 204-208; These map-derived inputs constitute forward-looking environmental information the controller must determine before prediction), and predict the speed of the vehicle (see at least Fig.
2: Predicted Speed, 226) by adding the speed change amount of the vehicle to a current speed of the vehicle (see at least Fig. 3: speed profiles 306 are obtained by sampling current speed then integrating acceleration profile 308 (“speed change”) across the segment; The figure and accompanying explanation show speed derived by accumulating per-sample acceleration (Δv) starting from the current speed—precisely the claimed summation. In vehicle dynamics, future speed is often calculated as current speed plus a change (e.g., Δv = a * Δt). A PHOSITA would recognize this as a standard method, especially in discrete time-step predictions. Future speed is inherently the current speed plus cumulative changes, a standard approach in time series forecasting and vehicle control systems).

Kristinsson does not explicitly disclose the controller is configured to: predict a speed change amount of the vehicle at each target time point based on the query corresponding to each target time point and the forward information corresponding to each target time point, and predict the speed of the vehicle at each target time point by adding the speed change amount of the vehicle at each target time point to a current speed of the vehicle.

However, Bansal, in the same field of endeavor, discloses the controller is configured to: predict a speed change amount of the vehicle at each target time point (see at least [0029]: planned driving trajectory 172 identifies, for each of multiple future times, a point that corresponds to a location in the environment at which the vehicle 102 should be located at the corresponding future time; Bansal discloses future-timestamped poses, while Kristinsson already predicts acceleration (Δv) for every spatial sample. A PHOSITA would naturally resample those Δv values onto Bansal’s uniform timeline or retrain the same network to output Δv at each time index. Deriving velocity by differentiating successive poses—or appending a scalar velocity head—is routine digital-signal practice. Under KSR v. Teleflex, 550 U.S. 398, and In re Kubin, 561 F.3d 1351, such predictable modifications are plainly obvious) based on the query (see at least Fig. 5: The motion network uses the bottlenecked representation (output of the attention queries) to generate the trajectory; Shows final prediction conditioned on query-driven representation, fulfilling “based on the query”) corresponding to each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices. Kristinsson’s i = 1…n are merely spatial way-points; A PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control) and the forward information corresponding to each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices. Kristinsson’s i = 1…n are merely spatial way-points; a PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control), and predict the speed of the vehicle at each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices. Kristinsson’s i = 1…n are merely spatial way-points; a PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control) by adding the speed change amount of the vehicle at each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices.
Kristinsson’s i = 1…n are merely spatial way-points; A PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control) to a current speed of the vehicle (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices. Kristinsson’s i = 1…n are merely spatial way-points; A PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control).

Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to modify Kristinsson’s vehicle speed prediction system by leveraging Bansal’s query-driven, time-indexed trajectory generation and Palanisamy’s ADS-based distance sensing, so that the controller not only predicts absolute speeds but also computes per-time-step speed change amounts derived from forward information and then sums those changes with the current speed to obtain future speed values. This modification represents a predictable use of known elements according to their established functions: Kristinsson already teaches acceleration (Δv) and speed profiles, Bansal provides temporally indexed query-conditioned future poses, and Palanisamy ensures real-time distance context from ADS sensors. A PHOSITA would have recognized that combining these disclosures yields improved accuracy and resolution of per-time-point speed predictions using routine neural network control and vehicle dynamics practices, without requiring inventive ingenuity.

Regarding Claim 3, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 1.
Kristinsson further discloses wherein the driving information (see at least [0032]: Historical driving data is also input at step 210 and is reflective of driving behavior of the particular driver) includes at least one of a distance to a front vehicle, a relative speed with the front vehicle, a speed of the vehicle, a steering angle of the vehicle, an accelerator pedal sensor (APS) value of the vehicle, or a brake pedal sensor (BPS) value of the vehicle, or any combination thereof (see at least Fig. 3: speed profiles 306 and acceleration profiles 308; The claim requires at least one of the listed parameters. Kristinsson covers "speed of the vehicle" explicitly, and steering angle, APS, and BPS implicitly through driving behavior data, as these are standard in capturing driver behavior for speed profile prediction). Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to extend the driving information with common vehicle and environment signals—such as distance to a front vehicle, relative speed, speed, steering angle, APS, or BPS—because Kristinsson already uses vehicle speed/acceleration, Bansal generates steering and acceleration inputs, and Palanisamy’s ADS sensors provide distance, speed, steering, and braking data. A PHOSITA would recognize these as routine, high-value features for improving predictive accuracy in vehicle control systems. Regarding Claim 4, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 1. Kristinsson further discloses wherein the forward information (see at least Fig. 
2: Map Geometry, Road Curvature, Traffic Pattern, Speed Limits, 204-208; These map-derived inputs constitute forward-looking environmental information the controller must determine before prediction) includes at least one of information about a road on which the vehicle is traveling, traffic light information on the road, crosswalk information on the road, speed bump information on the road, or speed camera information on the road, or any combination thereof (see at least [0044], FIG. 8D: …neural network 556 corresponding to a traffic light area… to…traffic light areas, including the number of lanes 502, the traffic pattern 504, and the speed limit 506…; The limitation is met by any single listed parameter. Kristinsson expressly provides both road information and traffic-light data, so either disclosure alone satisfies the claim). Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to incorporate specific forward-looking road environment features—such as road geometry, traffic lights, crosswalks, speed bumps, or speed cameras—into the prediction process, because Kristinsson already discloses map geometry, road curvature, traffic patterns, speed limits, and traffic light data, while Bansal and Palanisamy reinforce the use of contextual environmental features for trajectory planning and ADS operation. A PHOSITA would recognize that integrating any of these commonly sensed roadway parameters is a routine and predictable extension that improves safety and accuracy of vehicle speed prediction without requiring inventive skill. Regarding Claim 5, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 1.
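The "routine distance-to-time conversion" the rejection repeatedly invokes (mapping Kristinsson's spatial way-points i = 1…n onto fixed-period timesteps) can be sketched in a few lines. The function, constant-speed assumption, and all numbers below are hypothetical illustrations, not disclosures from the cited references:

```python
def waypoints_to_timesteps(distances, speed, period, horizon):
    """Map cumulative way-point distances (m) to the index of the fixed-period
    timestep (period in s) at which each is reached, assuming constant speed.
    Way-points beyond the horizon are clamped to the last timestep."""
    arrival_times = [d / speed for d in distances]   # time to reach each way-point
    return [min(int(t / period), horizon - 1) for t in arrival_times]

# Way-points every 10 m, vehicle at 5 m/s, 0.5 s timesteps, 20-step horizon.
print(waypoints_to_timesteps([10, 20, 30], 5.0, 0.5, 20))  # [4, 8, 12]
```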
Kristinsson further discloses wherein the controller (see at least Abstract: A controller programmed to operate the powertrain according to a predicted vehicle speed profile) is configured to extract the feature information (see at least Fig. 8A: Neural network is assigned to generate predicted speed profiles based on data corresponding to historical driving patterns) about the current state of the vehicle (see at least Fig. 3: Current speed and acceleration values are sampled… to form speed profiles 306 and acceleration profiles 308; Sampling instantaneous speed/acceleration supplies real-time dynamic data, i.e., the vehicle’s current state) from the past driving information of the vehicle (see at least [0032]: Historical driving data is also input at step 210 and is reflective of driving behavior of the particular driver) based on a neural network (see at least Fig. 8A: Neural network is assigned to generate predicted speed profiles based on data corresponding to historical driving patterns). Kristinsson does not explicitly disclose the controller is configured to extract the feature information about the current state of the vehicle from the past driving information of the vehicle based on a first convolutional neural network (CNN). However, Palanisamy, in the same field of endeavor, further discloses the controller is configured to extract the feature information about the current state of the vehicle from the past driving information of the vehicle based on a first convolutional neural network (CNN) (see at least Fig. 7B: Feature Extraction CNN; Kristinsson’s NN extracts feature vectors from historical driving data and current speed/acceleration. Substituting a CNN for those fully connected layers would have been an obvious and straightforward substitution under KSR (predictable improvement, no incompatibility)).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to modify Kristinsson’s feature extraction neural network by implementing Palanisamy’s convolutional neural network architecture, in order to extract feature information about the current state of the vehicle from past driving information. Kristinsson already teaches use of neural networks for generating speed profiles from historical and current driving data, while Palanisamy explicitly shows CNNs performing feature extraction from vehicle inputs. A PHOSITA would have recognized that replacing or augmenting fully connected neural network layers with CNNs represents a straightforward substitution of one known feature extraction architecture for another, yielding predictable improvements in representational efficiency and accuracy without requiring inventive ingenuity, fully consistent with KSR’s rationale for obvious design choices. Regarding Claim 6, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 1. Kristinsson further discloses wherein the controller (see at least Abstract: A controller programmed to operate the powertrain according to a predicted vehicle speed profile) is configured to generate feature information (see at least Figs. 8A-8D: Each neural network is assigned to generate predicted speed profiles based on data corresponding to historical driving patterns) about the current state (see at least Fig. 3: Current speed and acceleration values are sampled… to form speed profiles 306 and acceleration profiles 308; Sampling instantaneous speed/acceleration supplies real-time dynamic data, i.e., the vehicle’s current state) of the vehicle (see at least Abstract: A vehicle includes a powertrain).
Kristinsson does not explicitly disclose wherein the controller is configured to generate the query corresponding to each target time point by performing positional encoding on the feature information about the current state of the vehicle. However, Bansal, in the same field of endeavor, discloses wherein the controller is configured to generate the query (see at least Fig. 3, [0052]: generate a respective attention weight for each…(step 306)…generate bottlenecked representation…(step 310)) corresponding to each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices. Kristinsson’s i = 1…n are merely spatial way-points; a PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control) by performing positional encoding on the feature information about the current state of the vehicle (see at least Fig. 5, [0066]: …applies a positional encoding to each feature representation…; Positional encoding is a standard Transformer practice, offering temporal indexing without recurrent layers, making substitution predictable). Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to modify Kristinsson’s feature extraction and query generation by incorporating Bansal’s positional encoding technique, so that queries corresponding to each target time point are generated from temporally indexed feature information about the vehicle’s current state. Kristinsson already discloses neural networks extracting and processing features from historical and current vehicle data, while Bansal expressly applies positional encoding to feature representations to provide temporal indices for trajectory prediction.
A PHOSITA would have recognized that substituting positional encoding for simple waypoint indexing is a routine and predictable enhancement, improving temporal alignment of queries without altering core functionality, fully consistent with KSR’s principle that applying known techniques to similar problems yields obvious results. Regarding Claim 7, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 1. Kristinsson further discloses wherein the controller (see at least Abstract: A controller programmed to operate the powertrain according to a predicted vehicle speed profile) is configured to obtain the forward information (see at least Fig. 2: Map Geometry, Road Curvature, Traffic Pattern, Speed Limits, 204-208; These map-derived inputs constitute forward-looking environmental information the controller must determine before prediction) of the vehicle (see at least Abstract: A vehicle includes a powertrain). Kristinsson does not explicitly disclose the controller is configured to perform positional encoding on the forward information of the vehicle to generate the forward information according to the distance to the vehicle. However, Bansal, in the same field of endeavor, discloses the controller is configured to perform positional encoding on the forward information of the vehicle (see at least Fig. 5, [0066]: …applies a positional encoding to each feature representation…; Positional encoding is a standard Transformer practice, offering temporal indexing without recurrent layers, making substitution predictable) to generate the forward information according to the distance to the vehicle (see at least Fig. 2: Input data (I) a roadmap… speed limits, traffic lights, and dynamic object, and Fig. 4: heat map, trajectory output enumerates attended features per future time; Bansal’s Transformer-based attention uses distances and positions of surrounding objects to weight feature representations.
A PHOSITA would integrate this positional encoding and distance-based processing into Kristinsson’s speed-prediction network to predictably enhance spatial-temporal precision, without altering its core functionality, and align with industry best practices). Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to modify Kristinsson’s use of forward information (map geometry, curvature, traffic patterns, speed limits) by incorporating Bansal’s positional encoding of feature representations, so that the forward information is generated and indexed according to the distance to the vehicle. Kristinsson already requires forward-looking road data to drive speed predictions, while Bansal teaches the application of positional encodings and distance-based feature weighting in a Transformer network to temporally and spatially align predictive inputs. A PHOSITA would have recognized that combining these approaches predictably enhances the spatial-temporal precision of forward information in vehicle speed prediction networks, using a routine and well-established machine learning technique, without requiring inventive skill. Regarding Claim 8, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 7. Kristinsson does not explicitly disclose wherein the controller is configured to input the forward information according to each query and the distance to the vehicle to an attention model and determine an attention value for each forward information at each target time point based on the attention model. 
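Positional encoding of the kind cited from Bansal's paragraph [0066] is standard Transformer practice. The sinusoidal form below is a generic sketch of that technique and may differ from the variant the reference actually uses; the dimensions and time indices are hypothetical:

```python
import math

def positional_encoding(position, d_model):
    """Standard sinusoidal positional encoding: even dimensions use sine,
    odd dimensions use cosine, with geometrically spaced frequencies."""
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

# One query per target time point: encode time indices 0..3 as 8-dim vectors,
# which could then be added to (or concatenated with) the feature information.
queries = [positional_encoding(t, 8) for t in range(4)]
print(len(queries), len(queries[0]))  # 4 8
```

In the mapping above, these encodings supply the per-time-point indexing that distinguishes one query from another while the underlying feature information stays the same.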
However, Bansal, in the same field of endeavor, discloses wherein the controller is configured to input the forward information according to each query (see at least [0055]: The system multiplies each feature representation by its attention weight; Processing pipeline shows query-derived weights driving selection of forward features—exactly “using each query”) and the distance to the vehicle (see at least Fig. 2: Input data (I) a roadmap… speed limits, traffic lights, and dynamic object, and Fig. 4: heat map, trajectory output enumerates attended features per future time; Bansal’s Transformer-based attention uses distances and positions of surrounding objects to weight feature representations. A PHOSITA would integrate this positional encoding and distance-based processing into Kristinsson’s speed-prediction network to predictably enhance spatial-temporal precision, without altering its core functionality, and align with industry best practices) to an attention model (see at least Fig. 3, [0052]: generate a respective attention weight for each…(step 306)…generate bottlenecked representation…(step 310)) and determine an attention value for each forward information at each target time point based on the attention model (see at least Fig. 4: heat map, trajectory output enumerates attended features per future time; Bansal’s Transformer-based attention uses distances and positions of surrounding objects to weight feature representations. The attention mechanism assigns weights (attention values) to feature representations for each future time point, satisfying time-point-specific attention values based on the model. A PHOSITA would readily integrate Kristinsson’s NN-based speed predictor with Bansal’s Transformer-style attentional bottleneck, applying positional encoding to weight road- and traffic-derived inputs at each temporal horizon.
This substitution is a textbook engineering choice yielding transparent, per-timepoint speed forecasts, rendering the claimed invention obvious under KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (use of known elements “according to their established functions” is obvious) and In re Kubin, 561 F.3d 1351, 1360 (Fed. Cir. 2009) (combination of familiar elements yields predictable results). Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Kristinsson, Bansal, and Palanisamy before them, to extend Kristinsson’s speed prediction framework by integrating Bansal’s Transformer-style attention mechanism, such that forward information—encoded according to each query and the vehicle’s distance—is input into an attention model that assigns attention values for each target time point. Kristinsson already teaches forward-looking road and traffic inputs for speed prediction, while Bansal discloses the application of positional encodings and query-based weighting of features with explicit time-step granularity. A PHOSITA would have recognized that substituting Bansal’s attention module into Kristinsson’s NN speed predictor represents a predictable use of known elements according to their established functions, yielding interpretable, per-timepoint attention values for road-context features and producing transparent speed forecasts, consistent with KSR and Kubin. Regarding Claim 9, Kristinsson, Bansal, and Palanisamy disclose all the limitations of Claim 7. Kristinsson further discloses wherein the controller (see at least Abstract: A controller programmed to operate the powertrain according to a predicted vehicle speed profile) is configured to determine the forward information (see at least Fig. 
2: Map Geometry, Road Curvature, Traffic Pattern, Speed Limits, 204-208; These map-derived inputs constitute forward-looking environmental information the controller must determine before prediction) based on a neural network (see at least Fig. 2: Area NN). Kristinsson does not explicitly disclose wherein the controller is configured to determine the forward information corresponding to each target time point from the attention value for each forward information at each target time point based on a second convolutional neural network (CNN). However, Bansal, in the same field of endeavor, discloses the controller is configured to determine the forward information corresponding to each target time point (see at least Fig. 3: Step 312: Generate planned driving trajectory; Bansal outputs query-conditioned poses at each future timestep, providing explicit temporal indices. Kristinsson’s i = 1…n are merely spatial way-points; a PHOSITA would map them to fixed-period timesteps, a routine distance-to-time conversion in automotive control) from the attention value for each forward information at each target time point (see at least Fig. 4: heat map, trajectory output enumerates attended features per future time; Bansal’s Transformer-based attention uses distances and positions of surrounding objects to weight feature representations. The attention mechanism assigns weights (attention values) to feature representations for each future time point, satisfying time-point-specific attention values based on the model. A PHOSITA would readily integrate…
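The query-conditioned attention the rejection maps onto Bansal (per-time-point queries attending over forward-information features to produce attention values) is ordinary scaled dot-product attention. The sketch below illustrates that generic mechanism only; the feature vectors, dimensions, and function names are hypothetical and not drawn from the cited references:

```python
import math

def attention_values(query, keys):
    """Softmax-normalized scaled dot-product scores: one attention value per
    forward-information feature, for a single target-time-point query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Weighted sum of forward-feature values using the attention values."""
    w = attention_values(query, keys)
    return [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))]

# Two hypothetical forward features (e.g. a traffic light and a speed bump),
# keyed by encoded distance; the query represents one target time point.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[30.0], [10.0]]
print(attention_values([2.0, 0.0], keys))
```

Repeating this for each target time point's query yields the per-time-point attention values the rejection reads onto the claim.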

Prosecution Timeline

May 01, 2023: Application Filed
May 16, 2025: Non-Final Rejection — §101, §103
Aug 20, 2025: Response Filed
Sep 15, 2025: Final Rejection — §101, §103
Dec 03, 2025: Applicant Interview (Telephonic)
Dec 03, 2025: Examiner Interview Summary


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
