Prosecution Insights
Last updated: April 19, 2026
Application No. 17/188,251

Method of Modeling Human Driving Behavior to Train Neural Network Based Motion Controllers

Non-Final OA • §101, §103
Filed: Mar 01, 2021
Examiner: BRAHMACHARI, MANDRITA
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Steering Solutions IP Holding Corporation
OA Round: 5 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 5-6
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average); 311 granted / 407 resolved; +21.4% vs TC avg
Interview Lift: +29.8% (strong) for resolved cases with an interview
Typical Timeline: 3y 0m avg prosecution; 27 currently pending
Career History: 434 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

TC averages are estimates • Based on career data from 407 resolved cases
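The headline figures above can be cross-checked with simple arithmetic. A minimal sketch follows; the helper name and the derivation of the implied Tech Center average are this sketch's assumptions, not the analytics provider's actual method:

```python
# Cross-check of the examiner statistics shown above. The helper name and the
# implied-TC-average derivation are illustrative assumptions, not the
# analytics provider's method.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

rate = allow_rate_pct(granted=311, resolved=407)  # dashboard figures
tc_avg = rate - 21.4                              # reported +21.4% vs TC avg

print(f"Career allow rate: {rate:.1f}%")   # ~76.4%, matching the rounded 76%
print(f"Implied TC average: {tc_avg:.1f}%")
```

Under these assumptions, the 76% career allow rate is 311/407 rounded, and the +21.4% delta implies a Tech Center average near 55%.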

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission has been entered.

This action is in response to the claims dated 9/17/2025.

Claims pending in the case: 1, 3-4, 6-11, 13-21
Withdrawn claim: 5 (in response to restriction requirement on 5/16/2024)
Cancelled claims: 2, 5, 12

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “using … and one or more modules or computing devices, determining…” in claim 4. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. “One or more modules or computing devices” in claim 4 is interpreted as hardware and software components within a vehicle control system, as explained by applicant in remarks dated 9/17/2025. All claims dependent on the claims identified above are also interpreted under 35 U.S.C. 112(f) by virtue of their respective direct and indirect dependencies.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1, 3-4, 6-11, 13-20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Independent Claim 1 includes the following recitation of an abstract idea: determining,…, a first current state of the vehicle at various points of time using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (This evaluation process is naturally performed by the human brain while driving and is a recitation of a mental process); and making a first determination comprising determining a first path geometry ahead of the vehicle including a first X direction and a first Y direction, a first lateral deviation of the vehicle from intended path, a first heading deviation of the vehicle from a first intended path, a first curvature of a first future trajectory, and a first target velocity (This determination to control a vehicle during driving is naturally performed by the human brain while driving and is a recitation of a mental process),… determining,…, a second current state of the vehicle at various points of time using at least
one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (This evaluation process is naturally performed by the human brain while driving and is a recitation of a mental process); and making a second determination comprising determining a second path geometry ahead of the vehicle including a second X direction and a second Y direction, a second lateral deviation of the vehicle from intended path, a second heading deviation of the vehicle from a second intended path, a second curvature of a second future trajectory, and a second target velocity (This determination to control a vehicle during driving is naturally performed by the human brain while driving and is a recitation of a mental process),…

Claim 1 recites the following additional elements, which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

driving, by a human driver, a vehicle on a test track according to a first speed for a first driving characteristic, wherein the human driver turns the vehicle at a first turning rate, accelerates at a first acceleration rate, and decelerates at a first deceleration rate (This is mere data gathering and is therefore insignificant extra-solution activity, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(g). Moreover, data gathering is well-understood, routine, conventional as evidenced by the court cases cited at MPEP 2106.05(d), example i. Receiving or transmitting data and iv. Storing and retrieving information, and MPEP 2106.05(g), example ii. Testing a system for a response, the response being used to determine system malfunction, In re Meyer, 688 F.2d 789, 794; 215 USPQ 193, 196-97 (CCPA 1982); and example iii. Presenting offers to potential customers and gathering statistics generated based on the testing about how potential customers responded to the offers; the statistics are then used to calculate an optimized price, OIP Technologies, 788 F.3d at 1363, 115 USPQ2d at 1092-93);

determining, using a plurality of sensors and one or more modules or computing devices (This is a recitation of generic computer components to be used in performing the abstract idea, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).);

producing a first input data from the first determination, and communicating the first input data to a neural network to model human driving behavior and producing a first output data from the neural network (This is insignificant extra-solution activity, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(g). Moreover, sending, receiving, storing and retrieving information is well-understood, routine, conventional as evidenced by the court cases cited at MPEP 2106.05(d), example i. Receiving or transmitting data and iv. Storing and retrieving information, and MPEP 2106.05(g), example iv. Obtaining information about transactions using the Internet to verify credit card transactions. Moreover, this high-level recitation of the neural network is a mere instruction to apply the judicial exception. It only appears to amount to the use of a generically recited, off-the-shelf component, as a tool to implement the process and is not an inventive concept. Since the model is used merely as a tool to implement an existing process, this does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).);

and communicating the first output data to an autonomous driving vehicle module, on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a first period of time based only on the first output data (This additional element is mere instructions to apply an exception because it recites no more than an idea of a solution or outcome. The limitation adds no specifics beyond using a model output to drive a vehicle. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f));

driving, by a human driver, a vehicle on a test track according to a second speed for a second driving characteristic, wherein the human driver turns the vehicle at a second turning rate, accelerates at a second acceleration rate, and decelerates at a second deceleration rate (This is mere data gathering and is therefore insignificant extra-solution activity, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(g). Moreover, data gathering is well-understood, routine, conventional as evidenced by the court cases cited at MPEP 2106.05(d), example i. Receiving or transmitting data and iv. Storing and retrieving information, and MPEP 2106.05(g), example ii. Testing a system for a response, the response being used to determine system malfunction, In re Meyer, 688 F.2d 789, 794; 215 USPQ 193, 196-97 (CCPA 1982); and example iii. Presenting offers to potential customers and gathering statistics generated based on the testing about how potential customers responded to the offers; the statistics are then used to calculate an optimized price, OIP Technologies, 788 F.3d at 1363, 115 USPQ2d at 1092-93);

producing a second input data from the first determination, and communicating the second input data to a neural network to model human driving behavior and producing a second output data from the neural network (This is sending and receiving data to and from a neural network model. This is insignificant extra-solution activity, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(g). Moreover, sending, receiving, storing and retrieving information is well-understood, routine, conventional as evidenced by the court cases cited at MPEP 2106.05(d), example i. Receiving or transmitting data and iv. Storing and retrieving information, and MPEP 2106.05(g), example iv. Obtaining information about transactions using the Internet to verify credit card transactions. Moreover, this high-level recitation of the neural network is a mere instruction to apply the judicial exception. It only appears to amount to the use of a generically recited, off-the-shelf component, as a tool to implement the process and is not an inventive concept. Since the model is used merely as a tool to implement an existing process, this does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).);

and communicating the second output data to an autonomous driving vehicle module, on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a second period of time based only on the second output data (This additional element is mere instructions to apply an exception because it recites no more than an idea of a solution or outcome. The limitation adds no specifics beyond using a model output to drive a vehicle. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f)).

These claimed limitations therefore do not integrate the abstract idea into a practical application. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. In this case, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the reasons given above with respect to integration of the abstract idea into a practical application. Therefore, the claim is not patent eligible.

Claims 3-4, 13 and 21 are similar in scope to claim 1 and therefore rejected under the same rationale. The additional elements recited in claims 4 and 13 describing the speed and nature of the turn do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Further limitations of determining “a coefficient #1, a coefficient #2, a coefficient #3, wherein coefficients #1, #2, and #3 represent the characteristic or parametric curve equation” recited in claims 4 and 13 also do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea, as this is a representation of the path using a mathematical equation that may be done by the human mind.
Hence these claims are rejected as being abstract.

The remaining dependent claims recite at least the abstract idea identified above in the claims upon which they depend and recite the following additional elements which, considered individually and as an ordered combination with the additional elements from the claims upon which they depend, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Dependent claims 9-11, 14-20 pertain to using the collected data to drive a vehicle (This additional element is mere instructions to apply an exception because it recites no more than an idea of a solution or outcome. The limitation adds no specifics other than using data to drive a vehicle. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f)). The dependent claims, therefore, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Hence these claims are rejected as being abstract.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-4, 6-11, 13-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Soliman (US 20200031371) in view of Shalev-Shwartz (US 20210142421 A1).

Regarding claim 1, Soliman teaches a method comprising driving, by a human driver, a vehicle on a test track according to a first driving characteristic, wherein the human driver turns the vehicle at a first turning rate, accelerates at a first acceleration rate, and decelerates at a first deceleration rate (Soliman: Abstract, [4] receive data from vehicle sensors as it is being driven); determining, using a plurality of sensors and one or more modules or computing devices, a first current state of the vehicle at various points of time using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (Soliman: Abstract, [4, 32-3, 603] receive data from vehicle sensors as it is being driven to determine vehicle state); and making a first determination comprising determining a first path geometry ahead of the vehicle including a first X direction and a first Y direction, a first lateral deviation of the vehicle from intended path, a first heading deviation of the vehicle from a first intended path, a first curvature of a first future trajectory, and a first target velocity (Soliman: [30, 35, 49-52]: received data is used to
determine vehicle speed, vehicle path and user driving state (deviation) that may be used to determine an action; [61-62]: deviation from path), and producing a first input data from the first determination, and communicating the first input data to a neural network to model human driving behavior and producing a first output data from the neural network (Soliman: [56]: input data to the behavior learning algorithm; [29]: machine learning algorithm using neural network), and communicating the first output data to an autonomous driving vehicle module, on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a first period of time based only on the first output data (Soliman: Fig. 14, [22, 49, 51, 56-57]: output for modifying one or more parameters of a propulsion system during an autonomous driving mode to adjust speed and follow a path in real time);

driving, by a human driver, a vehicle on a test track according to a second driving characteristic, wherein the human driver turns the vehicle at a second turning rate less sharp than the first turning rate, accelerates at a second acceleration rate slower than the first acceleration rate, and decelerates at a second deceleration rate slower than the first acceleration rate (Soliman: Abstract, [4, 56] receive changes in the data from vehicle sensors as it is being driven); determining, using a plurality of sensors and one or more modules or computing devices, a second current state of the vehicle at various points of time using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (Soliman: Abstract, [4, 32-33] receive data from vehicle sensors as it is being driven to determine vehicle state); and making a second determination comprising determining a second path geometry ahead of the vehicle including a second X direction and a second Y direction, a second lateral deviation of the vehicle from intended path, a second heading deviation of the vehicle from a second intended path, a second curvature of a second future trajectory, and a second target velocity (Soliman: [30, 35, 49-52]: received data is used to determine vehicle speed, vehicle path and user driving state (deviation) that may be used to determine an action; [61-62]: deviation from path), and producing a second input data from the second determination, and communicating the second input data to a neural network to model human driving behavior and producing a second output data from the neural network (Soliman: [56]: input data to the behavior learning model; [29]: machine learning algorithm using neural network), and communicating the second output data to an autonomous driving vehicle module, on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a second period of time based only on the second output data (Soliman: Fig. 14, [22, 49, 51, 56-57]: model output for modifying one or more parameters of a propulsion system during an autonomous driving mode to adjust speed and follow a path in real time; [32, 46]: The model to control vehicle in autonomous mode imitates ideal driver behavior – trained using user data; [32]: “training data (e.g., supervised learning) which helps the driving behavior learning algorithm 214 identify an aggressive driver behavior 206 or a conservative driver behavior”).

Although Soliman does not specifically recite “… a … lateral deviation of the vehicle from intended path, a … heading deviation of the vehicle from a first intended path, a … curvature of a … future trajectory,” Soliman in [61] teaches determining a deviation from the path (future trajectory) from “steering input/angle deviations, and time gap between accelerator/brake pedal application, driver eyes monitoring (eyes on the road), average deviation from speed limit, and vehicle following distance (for example, at different vehicle speeds)”. It is obvious that this data provides the deviation information claimed in the limitation.

Nonetheless, Shalev-Shwartz teaches determination comprising determining a path geometry ahead of the vehicle including an X direction and a Y direction, a lateral deviation of the vehicle from intended path, a heading deviation of the vehicle from an intended path, a first curvature of a future trajectory, and a target velocity (Shalev-Shwartz: [99, 413, 626]: determine deviations; [168, 179]: curvature of road; [195, 198]: target speed; [351-352]: assess level of aggression, i.e., speed and trajectory change; [311-312]: data used for autonomous driving).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Soliman and Shalev-Shwartz because the systems are in the field of autonomous driving with navigation and are analogous art from the “same field of endeavor”. One of ordinary skill in the art would have been motivated to combine the teachings because the combination would use deviation information and improve the system by using an interpretable, mathematical model for safety assurance and a design of a system that adheres to safety assurance requirements while being scalable to millions of cars (See Shalev-Shwartz [4]).
Regarding claim 3, Soliman and Shalev-Shwartz teach the invention as claimed in claim 1 above and further teach, further comprising having a human driver drive a test track according to a third driving characteristic, wherein the human driver turns the vehicle at a third turning rate less sharp than the second turning rate, accelerates at a third acceleration rate slower than the second acceleration rate, and decelerates at a third deceleration rate slower than the second acceleration rate (Soliman: Abstract, [4] receive data from vehicle sensors as it is being driven; [351-352]: assess level of aggression – driving characteristic); determining, using a plurality of sensors and one or more modules or computing devices, a third current state of the vehicle at various points of time using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (Soliman: Abstract, [4, 32-33] receive data from vehicle sensors as it is being driven to determine vehicle state) (Shalev-Shwartz: [99, 176-177, 413, 626]: determine vehicle state); and making a third determination comprising determining a third path geometry ahead of the vehicle including a third X direction and a third Y direction, a third lateral deviation of the vehicle from intended path, a third heading deviation of the vehicle's current heading from intended path, a third curvature of a third future trajectory, and a third target velocity (Soliman: [30, 35, 49-52]: received data is used to determine vehicle speed, vehicle path and user driving state (deviation) that may be used to determine an action; [61-62]: deviation from path) (Shalev-Shwartz: [99, 176, 413, 626]: determine deviations; [168, 179]: curvature of road; [195, 198]: target speed), and producing a third input data from the third determination, and communicating the third input data to the neural network to model human driving behavior and producing a third output data from the neural network (Soliman: [56]: input data to the behavior learning algorithm; [29]: machine learning algorithm using neural network) (Shalev-Shwartz: [99, 161, 170, 195, 198, 413, 626]: input to model to generate output), and communicating the third output data to the autonomous driving vehicle module, on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a period of time based on the third output data (Soliman: Fig. 14, [49, 51, 56-57]: output for modifying one or more parameters of a propulsion system during an autonomous driving mode to adjust speed and follow a path in real time) (Shalev-Shwartz: [296, 351]: modify to increase safety).

Regarding claim 4, Soliman teaches a vehicle including a control module including input data derived by having a human driver drive a test track according to a first driving characteristic, wherein the human driver turns the vehicle at a first turning rate, accelerates at a first acceleration rate, and decelerates at a first deceleration rate; according to a second driving characteristic, wherein the human driver turns the vehicle at a second turning rate less sharp than the first turning rate, accelerates at a second acceleration rate slower than the first acceleration rate, and decelerates at a second deceleration rate slower than the first acceleration rate; according to a third driving characteristic, wherein the human driver turns the vehicle at a third turning rate less sharp than the second turning rate, accelerates at a third acceleration rate slower than the second acceleration rate, and decelerates at a third deceleration rate slower than the second acceleration rate (Soliman: Abstract, [4] receive data from vehicle sensors as it is being driven to train model), and using a plurality of sensors, and one or more modules or computing devices, determining the current state of the vehicle at various points of time while driving the vehicle
according to each of the first driving characteristic, the second driving characteristic, and the third driving characteristic, using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (Soliman: Abstract, [4, 32-33] receive data from vehicle sensors as it is being driven to determine vehicle state), and making a first determining a path geometry ahead of the vehicle including a X direction and a Y direction, …, a lateral deviation of the vehicle from intended path, a heading deviation of the vehicle from an intended path a curvature of the future trajectory, and a target velocity (Soliman: [30, 35, 49-52]: received data is used to determine vehicle speed, vehicle path and user driving state (deviation) that may be used to determine an action; [61-62]: deviation from path ), and Although Soliman does not specifically recite, … a first lateral deviation of the vehicle from intended path, a first heading deviation of the vehicle from a first intended path, a first curvature of a first future trajectory, Soliman in [61] teaches determining a deviation from the path (future trajectory) from “steering input/angle deviations, and time gap between accelerator/brake pedal application, driver eyes monitoring (eyes on the road), average deviation from speed limit, and vehicle following distance (for example, at different vehicle speeds)”. It is obvious that this data provides the deviation information claimed in the limitation. 
Soliman does not specifically teach a first determination comprising determining a path geometry ahead of the vehicle including a X direction and a Y direction, a coefficient #1, a coefficient #2, and a coefficient #3, wherein coefficients #1, #2, and #3 represent the characteristic or parametric curve equation. Shalev-Shwartz teaches determining a path geometry ahead of the vehicle including a X direction and a Y direction, a coefficient #1, a coefficient #2, and a coefficient #3, wherein coefficients #1, #2, and #3 represent the characteristic or parametric curve equation (Shalev-Shwartz: [99, 168, 179, 183, 188, 413]; [168]: “using a 3rd-degree polynomial having coefficients corresponding to physical properties such as the position, slope, curvature, and curvature derivative of the detected road”; [219-221]: derive optimized trajectory as a solution to an optimization problem (uses path characteristics - coefficients)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Soliman and Shalev-Shwartz because the systems are in the field of autonomous driving with navigation and are analogous art from the same field of endeavor. One of ordinary skill in the art would have been motivated to combine the teachings because the combination would use path trajectory and deviation information to improve the system by providing an interpretable, mathematical model for safety assurance and a system design that adheres to safety assurance requirements while being scalable to millions of cars (see Shalev-Shwartz [4]).

Regarding claim 6, Soliman and Shalev-Shwartz teach the invention as claimed in claim 1 above and further teach driving the vehicle without human input for at least a period of time based only on the first output data (Soliman: [46]: driver may switch to autonomous mode; control of the vehicle is based on the last vehicle state, which may be the first output data).
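The 3rd-degree polynomial path representation quoted above from Shalev-Shwartz [168] can be sketched as follows. This is a generic illustration of such a representation, not either reference's actual implementation; the coefficient roles are paraphrased from the quoted passage and the names are hypothetical:

```python
def cubic_path(c0, c1, c2, c3):
    """Path ahead of the vehicle modeled as y(x) = c0 + c1*x + c2*x**2 + c3*x**3.
    Per the quoted passage, the coefficients loosely correspond to lateral
    position, slope, curvature, and curvature derivative at x = 0."""
    def y(x):
        return c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3

    def curvature(x):
        # kappa(x) = y'' / (1 + y'**2) ** 1.5 for a curve given as y(x)
        dy = c1 + 2 * c2 * x + 3 * c3 * x ** 2
        ddy = 2 * c2 + 6 * c3 * x
        return ddy / (1 + dy * dy) ** 1.5

    return y, curvature
```

Four scalar coefficients thus fully describe the path geometry, which is what makes the representation compact enough to feed a controller as an input feature.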
Regarding claim 7, Soliman and Shalev-Shwartz teach the invention as claimed in claim 1 above and further teach driving the vehicle without human input for at least a period of time based on only the second output data (Soliman: [46]: driver may switch to autonomous mode; control of the vehicle is based on the last vehicle state, which may be the second output data).

Regarding claims 8 and 10, these claims are similar in scope to claim 7 and are therefore rejected under the same rationale.

Regarding claim 9, Soliman and Shalev-Shwartz teach the invention as claimed in claim 3 above and further teach driving the vehicle without human input for at least a period of time based on the third output data (Soliman: [46]: driver may switch to autonomous mode; control of the vehicle is based on the last vehicle state, which may be the third output data).

Regarding claim 11, this claim is similar in scope to claim 6 and is therefore rejected under the same rationale.
Regarding claim 13, Soliman and Shalev-Shwartz teach the invention as claimed in claim 12 above and further teach having a human driver drive a test track according to a third driving characteristic, wherein the human driver turns the vehicle at a third turning rate less sharp than the second turning rate, accelerates at a third acceleration rate slower than the second acceleration rate, and decelerates at a third deceleration rate slower than the second deceleration rate (Soliman: Abstract, [4]: receive data from vehicle sensors as it is being driven; [351-352]: assess level of aggression - driving characteristic); determining, using a plurality of sensors and one or more modules or computing devices, a third current state of the vehicle at various points of time using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (Soliman: Abstract, [4, 32-33]: receive data from vehicle sensors as it is being driven to determine vehicle state) (Shalev-Shwartz: [99, 176-177, 413, 626]: determine vehicle state); and making a third determination comprising determining a third path geometry ahead of the vehicle including a third X direction and a third Y direction, a third lateral deviation of the vehicle from an intended path, a third heading deviation of the vehicle's current heading from an intended path, a third curvature of a third future trajectory, and a third target velocity (Soliman: [30, 35, 49-52]: received data is used to determine vehicle speed, vehicle path and user driving state (deviation) that may be used to determine an action; [61-62]: deviation from path) (Shalev-Shwartz: [99, 176, 413, 626]: determine deviations; [168, 179]: curvature of road; [195, 198]: target speed); and producing a third input data from the third determination, and communicating the third input data to the autonomous driving vehicle module,
on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a third period of time based only on the third input data (Soliman: [46, 56, 59]: input data to the behavior learning algorithm for piloted or autonomous driving; [29]: machine learning algorithm using neural network) (Shalev-Shwartz: [99, 138, 161, 170, 195, 198, 413, 626]: input to model to generate output; [138]: for “autonomous driving and/or driver assist technology”).

Regarding claim 14, Soliman and Shalev-Shwartz teach the invention as claimed in claim 12 above and further teach driving the vehicle without human input for at least a period of time based only on the first input data, and communicating the first driving characteristic to the model of human driving behavior to produce the first output data from the neural network (Soliman: [46]: driver may switch to autonomous mode; control of the vehicle is based on the input data, which may be the first input data).

Regarding claim 15, Soliman and Shalev-Shwartz teach the invention as claimed in claim 1 above and further teach communicating the first driving characteristic to the model of human driving behavior to produce the first output data from the neural network (Soliman: [46]: autonomous mode controls the vehicle based on the input data of driving characteristics).

Regarding claim 16, Soliman and Shalev-Shwartz teach the invention as claimed in claim 14 above and further teach driving the vehicle without human input for at least a period of time based only on the second input data (Soliman: [46]: driver may switch to autonomous mode; control of the vehicle is based on the input data, which may be the second input data).
Regarding claim 17, Soliman and Shalev-Shwartz teach the invention as claimed in claim 12 above and further teach driving the vehicle without human input for at least a period of time based only on the first input data (Soliman: [46]: driver may switch to autonomous mode; control of the vehicle is based on the input data, which may be the third input data).

Regarding claim 18, this claim is similar in scope to claim 16 and is therefore rejected under the same rationale.

Regarding claim 19, this claim is similar in scope to claim 14 and is therefore rejected under the same rationale.

Regarding claim 20, Soliman and Shalev-Shwartz teach the invention as claimed in claim 13 above and further teach driving the vehicle without human input for at least a period of time based only on the first input data, driving the vehicle without human input for at least a period of time based on the second input data, and driving the vehicle without human input for at least a period of time based on the third input data (Soliman: [46]: driver may switch to autonomous mode; control of the vehicle is based on the input data, which may be the first, second or third input data).
Regarding claim 21, Soliman teaches a method comprising driving, by a human driver, a vehicle on a test track according to a first driving characteristic, wherein the human driver turns the vehicle at a first turning rate, accelerates at a first acceleration rate, and decelerates at a first deceleration rate (Soliman: Abstract, [4]: receive data from vehicle sensors as it is being driven); determining, using a plurality of sensors and one or more modules or computing devices, a first current state of the vehicle at various points of time using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (Soliman: Abstract, [4, 32-33]: receive data from vehicle sensors as it is being driven to determine vehicle state); and making a first determination comprising determining a first path geometry ahead of the vehicle including a first X direction and a first Y direction, a first lateral deviation of the vehicle from an intended path, a first heading deviation of the vehicle from a first intended path, a first curvature of a first future trajectory, and a first target velocity (Soliman: [30, 35, 49-52]: received data is used to determine vehicle speed, vehicle path and user driving state (deviation) that may be used to determine an action; [61-62]: deviation from path); and producing a first input data from the first determination, and communicating the first input data to an autonomous driving vehicle module, on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a first period of time based only on the first input data (Soliman: [56, 59]: input data to the behavior learning algorithm; [59]: “applied during piloted or autonomous driving”; [29]: machine learning algorithm using neural network; [22, 49, 51, 56-57]: output for modifying one or more parameters of a propulsion system during an autonomous driving mode to adjust speed and follow a path
in real time); driving, by a human driver, the vehicle on a test track according to a second driving characteristic, wherein the human driver turns the vehicle at a second turning rate less sharp than the first turning rate, accelerates at a second acceleration rate slower than the first acceleration rate, and decelerates at a second deceleration rate slower than the first deceleration rate (Soliman: Abstract, [4]: receive changes in the data from vehicle sensors as it is being driven); determining, using the plurality of sensors and one or more modules or computing devices, a second current state of the vehicle at various points of time using at least one of yaw, velocity, lateral acceleration, longitudinal acceleration, yaw rate, speed, steering wheel angle, or steering angle target (Soliman: Abstract, [4, 32-33]: receive data from vehicle sensors as it is being driven to determine vehicle state); and making a second determination comprising determining a second path geometry ahead of the vehicle including a second X direction and a second Y direction, a second lateral deviation of the vehicle from an intended path, a second heading deviation of the vehicle from a second intended path, a second curvature of a second future trajectory, and a second target velocity (Soliman: [30, 35, 49-52]: received data is used to determine vehicle speed, vehicle path and user driving state (deviation) that may be used to determine an action; [61-62]: deviation from path); and producing a second input data from the second determination, and communicating the second input data to the autonomous driving vehicle module, on board the vehicle, constructed and arranged to drive a vehicle without human input for at least a second period of time based only on the second input data (Soliman: [46, 56, 59]: input data to the behavior learning model; [29]: machine learning algorithm using neural network).

Although Soliman does not specifically recite … a … lateral deviation of the vehicle from intended path, a … heading deviation of the vehicle from a first intended path, a … curvature of a … future trajectory, Soliman in [61] teaches determining a deviation from the path (future trajectory) from “steering input/angle deviations, and time gap between accelerator/brake pedal application, driver eyes monitoring (eyes on the road), average deviation from speed limit, and vehicle following distance (for example, at different vehicle speeds)”. It is obvious that this data provides the deviation information claimed in the limitation. Nonetheless, Shalev-Shwartz teaches a determination comprising determining a path geometry ahead of the vehicle including a X direction and a Y direction, a lateral deviation of the vehicle from an intended path, a heading deviation of the vehicle from an intended path, a curvature of a future trajectory, and a target velocity (Shalev-Shwartz: [99, 413, 626]: determine deviations; [168, 179]: curvature of road; [195, 198]: target speed; [351-352]: assess level of aggression, i.e., speed and trajectory change; [311-312]: data used for autonomous driving). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Soliman and Shalev-Shwartz because the systems are in the field of autonomous driving with navigation and are analogous art from the same field of endeavor. One of ordinary skill in the art would have been motivated to combine the teachings because the combination would use deviation information to improve the system by providing an interpretable, mathematical model for safety assurance and a system design that adheres to safety assurance requirements while being scalable to millions of cars (see Shalev-Shwartz [4]).

Response to Arguments

Applicant's response explaining the structure to be used for the 112(f) interpretation was acceptable, as indicated in the prior Office action.
The 112(f) interpretation is according to the support provided in applicant's response.

Applicant's arguments on the 35 U.S.C. § 101 rejection have been fully considered. The rejection has been updated based on applicant's comments to clarify the rejection and is maintained as explained above. Applicant argues that the claims are not abstract because they include a vehicle with a control module. The examiner respectfully disagrees. The limitations merely claim collecting data to use a model to control a vehicle. The limitations do not include any specifics on the training of the model other than using collected data. The model and control module are being used as tools to drive the vehicle. Since the model as claimed is no more than a tool being used, the limitations have been found to be abstract. The § 101 abstract idea rejections are maintained as explained above.

Applicant's prior art arguments have been fully considered but are not persuasive. Applicant argues that the cited references teach the functional aspect of collecting driving data and using it for autonomous driving but do not read on the limitations as claimed because “neither Soliman nor Shalev-Shwartz et al. suggest multi-personality driving characteristics (aggressive, moderate, conservative)”. The examiner respectfully disagrees. The references teach collecting data as the driver is driving, which is used to train the model to control a vehicle. The collected data inherently includes different driving personalities, and it is obvious that training data includes different driving characteristics representing different driver personalities. The limitations merely claim providing input to and getting output from a model, and the model being used in an autonomous mode. Soliman teaches using a neural network to model human driving behavior for autonomous driving. Shalev-Shwartz further teaches that a vehicle may be controlled based on analysis of human behavior, which may be done using a neural network [311-312].
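To make concrete what "different driving characteristics representing different driver personalities" could look like in logged sensor data, here is a toy bucketing sketch. The thresholds, units, and labels are illustrative assumptions only and do not come from Soliman, Shalev-Shwartz, or the claims:

```python
def driving_characteristic(turn_rate, accel, decel):
    """Toy classifier bucketing a logged driving sample into one of the
    three argued 'personalities'. Thresholds are hypothetical examples."""
    score = 0
    score += turn_rate > 0.5    # rad/s: sharp steering (assumed cutoff)
    score += accel > 3.0        # m/s^2: hard acceleration (assumed cutoff)
    score += abs(decel) > 3.0   # m/s^2: hard braking (assumed cutoff)
    if score >= 2:
        return "aggressive"     # first driving characteristic
    if score == 1:
        return "moderate"       # second driving characteristic
    return "conservative"       # third driving characteristic
```

Under such a scheme, a training set collected from ordinary driving would naturally contain samples in each bucket, which is the examiner's point about the collected data inherently covering multiple personalities.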
Applicant argues that the limitations do not recite the different periods of time as claimed. The examiner finds that the teachings in the prior art to adjust vehicle controls based on input read on the limitations as claimed because, in each period of time, the model is performing the same function of controlling the vehicle based on the input, which may be a first, second or third input in a first, second or third period of time. Thus it is concluded that the combined teachings of the cited prior art read on the limitations as claimed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANDRITA BRAHMACHARI, whose telephone number is (571) 272-9735. The examiner can normally be reached Monday to Friday, 11 am to 8 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Mandrita Brahmachari/Primary Examiner, Art Unit 2144

Prosecution Timeline

Mar 01, 2021
Application Filed
May 16, 2024
Examiner Interview (Telephonic)
May 16, 2024
Non-Final Rejection — §101, §103
Oct 15, 2024
Response Filed
Dec 16, 2024
Final Rejection — §101, §103
Mar 19, 2025
Examiner Interview Summary
Mar 19, 2025
Applicant Interview (Telephonic)
Mar 24, 2025
Request for Continued Examination
Mar 30, 2025
Response after Non-Final Action
Jul 02, 2025
Non-Final Rejection — §101, §103
Sep 17, 2025
Response Filed
Nov 06, 2025
Final Rejection — §101, §103
Feb 10, 2026
Request for Continued Examination
Feb 23, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596746
AUDIO PREVIEWING METHOD, APPARATUS AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12596469
COMBINED DATA DISPLAY WITH HISTORIC DATA ANALYSIS
2y 5m to grant Granted Apr 07, 2026
Patent 12591358
DAMAGE DETECTION PORTAL
2y 5m to grant Granted Mar 31, 2026
Patent 12585979
MANAGING DATA DRIFT AND OUTLIERS FOR MACHINE LEARNING MODELS TRAINED FOR IMAGE CLASSIFICATION
2y 5m to grant Granted Mar 24, 2026
Patent 12585992
MACHINE LEARNING WITH ATTRIBUTE FEEDBACK BASED ON EXPRESS INDICATORS
2y 5m to grant Granted Mar 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+29.8%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 407 resolved cases by this examiner. Grant probability derived from career allow rate.
