Prosecution Insights
Last updated: April 19, 2026
Application No. 18/221,832

METHODS AND SYSTEMS FOR LEARNING SAFE DRIVING PATHS

Final Rejection: §101, §102, §103, §DP
Filed: Jul 13, 2023
Examiner: AYAD, MARIA S
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Torc Robotics, Inc.
OA Round: 2 (Final)
Grant Probability: 33% (At Risk)
OA Rounds: 3-4
To Grant: 3y 10m
With Interview: 50%

Examiner Intelligence

Grants only 33% of cases.
Career Allow Rate: 33% (53 granted / 159 resolved; -21.7% vs TC avg)
Interview Lift: +17.1% (resolved cases with interview vs. without)
Avg Prosecution (typical timeline): 3y 10m
Total Applications (career history): 195 across all art units, 36 currently pending
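The headline figures above are internally consistent and can be reproduced from the raw counts. A minimal sketch (the helper names are illustrative, not part of any real analytics API):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Estimated grant probability after adding the interview lift."""
    return base_rate + lift

base = allow_rate(53, 159)            # 53 granted of 159 resolved cases
boosted = with_interview(base, 17.1)  # +17.1% interview lift

print(f"Career allow rate: {base:.0f}%")    # -> 33%
print(f"With interview:    {boosted:.0f}%")  # -> 50%
```

Rounded, these match the 33% career allow rate and the 50% "with interview" estimate shown above.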

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 159 resolved cases
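The per-statute deltas imply a baseline that can be back-computed from the figures above (a quick consistency check, not an official USPTO statistic):

```python
# Examiner allowance rate (%) and delta vs. Tech Center average,
# per statute, as listed in the panel above.
per_statute = {
    "101": (11.9, -28.1),
    "103": (54.2, +14.2),
    "102": (12.4, -27.6),
    "112": (14.1, -25.9),
}

# The implied TC-average baseline (the "black line") is rate minus delta.
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in per_statute.items()}
print(tc_avg)  # -> {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute back-computes to the same 40.0% baseline, which suggests the "vs TC avg" deltas are measured against a single Tech Center average estimate rather than per-statute averages.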

Office Action

§101 §102 §103 §DP
DETAILED ACTION

This action is responsive to the response filed on 6/27/2025. Claims 1-20 remain pending in this application. Claims 1-3, 5, 6, 8-11, 13, 14, and 16-19 are amended. Claims 1, 9, and 17 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/12/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 1 and 2 are objected to because of the following minor informalities: Claim 1, add a semicolon instead of the comma at the end of the second limitation. Claim 2, replace …model … with …models … in the last limitation. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1, 9, and 17 recite “predict … a trajectory to be followed by the vehicle through the one or more autonomous driving adversity conditions” and “… detect the one or more autonomous adversity conditions…” which can be recognized as one or more elements that fall into the “mental process” group of abstract ideas, but for the recitation of generic computer components, such as “at least one/one or more processors” (for claims 1, 9, and 17) and “a non-transitory computer-readable medium” (for claims 9 and 17). The above-indicated limitations recite one or more judgments based on certain observations and evaluations. 
Accordingly, each of these claims recites an abstract idea. This judicial exception is not integrated into a practical application because the above-indicated limitations are merely instructions to implement the abstract idea on a computer and require no more than a generic computer to perform generic computer functions. The recitation of generic computer components (“at least one/one or more processors” (for claims 1, 9, and 17) and “a non-transitory computer-readable medium” (for claims 9 and 17)) does not impose any meaningful limits on practicing the abstract idea. The additional elements of “receiving … sensor data of a vehicle that is traveling, the sensor data …” and “providing … control instructions to … to an autonomous driving system of the vehicle” amount to no more than adding insignificant extra-solution activity of mere data input/gathering or data output that does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Furthermore, the additional elements of “executing … (the) one or more trained machine learning models to …” are mere recitations of the use of a generic form of learning or processing such as any of those utilizing trained models with no specific practical application; it merely links the use of an exception to a technological environment and still does not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor, and/or (non-transitory computer-readable) storage medium to perform the steps described above amounts to no more than mere instructions to apply the exception using generic computer components. 
The “receiving” and “providing” steps are further considered well-understood, routine, and conventional in view of the Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicating that mere collection, receipt, or transmittal of data over a network is a well-understood, routine, conventional function when it is claimed in a merely generic manner. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Neither can insignificant extra-solution activity nor the link of the use of the abstract idea to a certain technological environment. Thus, none of the additional elements as generically claimed in the independent claims is sufficient to amount to significantly more than the judicial exception. Therefore, all of these additional limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, these independent claims are not patent eligible. The dependent claims respectively recite additional limitations of detect(ing) the one or more autonomous driving adversity conditions using the vehicle sensor data and predict(ing) the trajectory to be followed by the vehicle responsive to detecting the one or more autonomous driving adversity conditions (claims 2, 10, and 18) as well as localize the vehicle based on … (claim 6), which also constitute steps of judgments/evaluations based on observations and previous judgments/evaluations which again fall within the “mental process” groupings of abstract ideas. This judicial exception is not integrated into a practical application. 
Additional elements of “receiving … an indication that a second trajectory determined by the autonomous driving system … is associated with a potential collision” (claims 3, 11, and 19), “generating … a graphical representation of the trajectory projected on the image data” (claims 5 and 13) both amount to no more than adding insignificant extra-solution activity related to inputting and outputting data. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The additional element of “generating … control instructions to navigate the vehicle through the trajectory” (claim 14) amounts to mere instructions to apply an exception because the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished. Thus, it is an equivalent of “apply it”. See MPEP 2106.05(f)(1). Moreover, the additional elements of “executing … the trained one or more machine learning model responsive to receiving the indication that the second trajectory is associated with the potential collision” (claims 3, 11, and 19), “the image data includes at least one of one or more images captured by a camera of the vehicle or one or more images captured by a light detection and ranging (LIDAR) system of the vehicle” (claims 4, 12, and 20), “the autonomous driving adversity conditions include at least one of: a road work zone or a construction zone obscured or missing road markings; or one or more obstacles on or alongside a road segment traveled by the vehicle” (claims 7 and 15), and “the machine learning model includes at least one of: a neural network; a random forest; a statistical classifier; a Naive Bayes classifier; or a hierarchical clusterer” (claims 8 and 16) are all mere specifics related to the data used, the adversity conditions detected, or the machine learning model used and also do not serve to 
integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Furthermore, the additional element of “executing the one or more trained machine learning model …” in claims 2, 3, 6, 10, 11, 18, and 19 with no specific practical application or additional details merely links the use of an exception to a technological environment and still does not impose any meaningful limits on practicing the abstract idea. The dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional “receiving” and “generating … a graphical representation …” steps are further considered well-understood, routine, and conventional in view of the Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicating that mere collection, receipt, or transmittal of data over a network as well as mere presenting of information are all well-understood, routine, conventional functions when claimed in a merely generic manner. Mere insignificant extra-solution activity cannot provide an inventive concept. Neither can adding an equivalent of “apply it” nor can the link of the use of the abstract idea to a certain technological environment. All of these additional elements, as generically claimed, are thus considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, these dependent claims are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-4, 7-12, and 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tran, US Patent No. 10,928,830 B1 (hereinafter as Tran). Regarding independent claim 1, Tran discloses a method [see col. 1, e.g. line 37 and line 51] comprising: receiving, by a computer system including one or more processors [note the exemplary computer system 710 in fig. 2B including processor 712], sensor data of a vehicle that is traveling [note col. 6, lines 18-21 indicating the capturing of images from a plurality of cameras in the vehicle], the sensor data indicative of one or more autonomous driving adversity conditions and the sensor data including image data depicting surroundings of the vehicle [note e.g. in col. 11, lines 54-61 the description indicating that the captured images include depicted vehicle surroundings such as traffic signals and obstacles; note that obstacles constitute autonomous driving adversity conditions] and inertial measurement data of the vehicle [note e.g. in col. 
10, lines 38-40 the inertial measurement unit (IMU) 728; see also line 53 indicating inertial acceleration]; executing, by the computer system, one or more trained machine learning models, to detect one or more autonomous adversity conditions using the sensor data, wherein the one or more autonomous driving adversity conditions include conditions that increase a likelihood of failure for an autonomous driving system of the vehicle in determining a safe driving path trajectory, and the one or more machine learning models are trained to detect the one or more autonomous adversity conditions with a first subset of training data associated with the one or more autonomous driving adversity conditions and a second subset of the training data associated with autonomous driving non-adversity conditions [note e.g. in col. 5, lines 41-50 the trained neural network that makes safe/reasonable decisions for an autonomous car based on imminent accident versus normal traffic conditions (adversity versus non-adversity training data); see also col. 16, lines 34-44 indicating neural network learning that includes the reduction of dangerous incidents; see also col. 31, lines 12-13 and col. 67, lines 37-38 for training the neural network]; executing, by the computer system, the one or more trained machine learning models, to predict, using the sensor data, a trajectory to be followed by the vehicle through the one or more autonomous driving adversity conditions [see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions which may result from a detected obstacle (like a stalled vehicle or other road user)); note in col. 5, lines 12-13 the use of the trained neural network for navigation; see also col. 2, lines 40-43 and col. 
15, lines 60-62]; and providing, by the computer system, control instructions to navigate the vehicle according to the predicted trajectory to an autonomous driving system of the vehicle, wherein the autonomous driving system of the vehicle is configured to control the vehicle according to the control instructions to travel along the predicted trajectory through the one or more autonomous driving adversity conditions [note e.g. in col. 46, lines 4-8 indicating preparing a vehicle to respond based on a detected object and its likely behavior; see also in lines 33-46 of col. 46 the indication of providing recommendation/commands/driving options based on previously learned circumstances including driving adversity conditions and accompanying navigation predictions; see e.g. col. 5, lines 12-17 indicating using a trained neural network for vehicle navigation; see also col. 45, lines 1-19 and especially note on lines 10-18 indicating controlling the vehicle during navigation; note also the Vehicle Control Block 220 and the Control System 706 (including a navigation/pathing system 748) depicted in figs. 2A and 2B respectively; see also the description especially in col. 11, lines 22-42]. Regarding independent claim 9, Tran also discloses a system [note the exemplary computer system 710 in fig. 2B] comprising: at least one processor [see e.g. processor 712 in fig. 2B]; and a non-transitory computer readable medium storing computer instructions [see e.g. data storage 714 for instructions 716 also shown in fig. 2B as part of the system 710; note the exemplary non-transitory computer readable medium described in col. 1, lines 22-42], which when executed cause the system to: receive sensor data of a vehicle that is traveling [note col. 
6, lines 18-21 indicating the capturing of images from a plurality of cameras in the vehicle], the sensor data indicative of one or more autonomous driving adversity conditions and the sensor data including image data depicting surroundings of the vehicle [note e.g. in col. 11, lines 54-61 the description indicating that the captured images include depicted vehicle surroundings such as traffic signals and obstacles; note that obstacles constitute autonomous driving adversity conditions] and inertial measurement data of the vehicle [note e.g. in col. 10, lines 38-40 the inertial measurement unit (IMU) 728; see also line 53 indicating inertial acceleration]; execute one or more trained machine learning models to detect one or more autonomous adversity conditions using the sensor data, wherein the one or more autonomous driving adversity conditions include conditions that increase a likelihood of failure for an autonomous driving system of the vehicle in determining a safe driving path trajectory, and the one or more machine learning models are trained to detect the one or more autonomous adversity conditions with a first subset of training data associated with the one or more autonomous driving adversity conditions and a second subset of the training data associated with autonomous driving non-adversity conditions [note e.g. in col. 5, lines 41-50 the trained neural network that makes safe/reasonable decisions for an autonomous car based on imminent accident versus normal traffic conditions (adversity versus non-adversity training data); see also col. 16, lines 34-44 indicating neural network learning that includes the reduction of dangerous incidents; see also col. 31, lines 12-13 and col. 
67, lines 37-38 for training the neural network]; execute the one or more trained machine learning models, to predict, using the sensor data, a trajectory to be followed by the vehicle through the one or more autonomous driving adversity conditions [see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions which may result from a detected obstacle (like a stalled vehicle or other road user)); note in col. 5, lines 12-13 the use of the trained neural network for navigation; see also col. 2, lines 40-43 and col. 15, lines 60-62]; and provide control instructions to navigate the vehicle according to the predicted trajectory to an autonomous driving system of the vehicle, wherein the autonomous driving system of the vehicle is configured to control the vehicle according to the control instructions to travel along the predicted trajectory through the one or more autonomous driving adversity conditions [note e.g. in col. 46, lines 4-8 indicating preparing a vehicle to respond based on a detected object and its likely behavior; see also in lines 33-46 of col. 46 the indication of providing recommendation/commands/driving options based on previously learned circumstances including driving adversity conditions and accompanying navigation predictions; see e.g. col. 5, lines 12-17 indicating using a trained neural network for vehicle navigation; see also col. 45, lines 1-19 and especially note on lines 10-18 indicating controlling the vehicle during navigation; note also the Vehicle Control Block 220 and the Control System 706 (including a navigation/pathing system 748) depicted in figs. 2A and 2B respectively; see also the description especially in col. 11, lines 22-42]. 
Regarding independent claim 17, Tran teaches a non-transitory computer-readable medium comprising computer instructions [note the exemplary non-transitory computer readable medium described in col. 1, lines 22-42], the computer instructions when executed by one or more processors [see e.g. processor 712 in fig. 2B] cause the one or more processors to: receive sensor data of a vehicle that is traveling [note col. 6, lines 18-21 indicating the capturing of images from a plurality of cameras in the vehicle], the sensor data indicative of one or more autonomous driving adversity conditions and the sensor data including image data depicting surroundings of the vehicle [note e.g. in col. 11, lines 54-61 the description indicating that the captured images include depicted vehicle surroundings such as traffic signals and obstacles; note that obstacles constitute autonomous driving adversity conditions] and inertial measurement data of the vehicle [note e.g. in col. 10, lines 38-40 the inertial measurement unit (IMU) 728; see also line 53 indicating inertial acceleration]; execute one or more trained machine learning models, to detect one or more autonomous adversity conditions using the sensor data, wherein the one or more autonomous driving adversity conditions include conditions that increase a likelihood of failure for an autonomous driving system of the vehicle in determining a safe driving path trajectory, and the one or more machine learning models are trained to detect the one or more autonomous adversity conditions with a first subset of training data associated with the one or more autonomous driving adversity conditions and a second subset of the training data associated with autonomous driving non-adversity conditions [note e.g. in col. 5, lines 41-50 the trained neural network that makes safe/reasonable decisions for an autonomous car based on imminent accident versus normal traffic conditions (adversity versus non-adversity training data); see also col. 
16, lines 34-44 indicating neural network learning that includes the reduction of dangerous incidents; see also col. 31, lines 12-13 and col. 67, lines 37-38 for training the neural network]; execute the one or more trained machine learning models, to predict, using the sensor data, a trajectory to be followed by the vehicle through the one or more autonomous driving adversity conditions [see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions which may result from a detected obstacle (like a stalled vehicle or other road user)); note in col. 5, lines 12-13 the use of the trained neural network for navigation; see also col. 2, lines 40-43 and col. 15, lines 60-62]; and provide control instructions to navigate the vehicle according to the predicted trajectory to an autonomous driving system of the vehicle, wherein the autonomous driving system of the vehicle is configured to control the vehicle according to the control instructions to travel along the predicted trajectory through the one or more autonomous driving adversity conditions [note e.g. in col. 46, lines 4-8 indicating preparing a vehicle to respond based on a detected object and its likely behavior; see also in lines 33-46 of col. 46 the indication of providing recommendation/commands/driving options based on previously learned circumstances including driving adversity conditions and accompanying navigation predictions; see e.g. col. 5, lines 12-17 indicating using a trained neural network for vehicle navigation; see also col. 45, lines 1-19 and especially note on lines 10-18 indicating controlling the vehicle during navigation; note also the Vehicle Control Block 220 and the Control System 706 (including a navigation/pathing system 748) depicted in figs. 
2A and 2B respectively; see also the description especially in col. 11, lines 22-42]. Regarding claims 2 and 10, the rejection of independent claims 1 and 9 are respectively fully incorporated. Tran further discloses: executing, by the computer system, the trained one or more machine learning models to detect the one or more autonomous driving adversity conditions using the vehicle sensor data [note in col. 6, lines 18-21 indicating the use of neural networks to recognize objects in the captured images; note on the top of col. 45 the description of the obstacle detection; see also the obstacle avoidance system 750 shown in fig. 2B and described on lines 8-11 of col. 12; again, note e.g. in col. 11, lines 54-61 the description indicating that the captured images include depicted vehicle surroundings such as traffic signals and obstacles (which constitute autonomous driving adversity conditions)]; and executing, by the computer system, the one or more trained machine learning models to predict the trajectory to be followed by the vehicle responsive to detecting the one or more autonomous driving adversity conditions [see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions) which may result from detecting an obstacle (like a stalled vehicle or another road user); note e.g. in col. 15 the use of neural networks to navigate the vehicle around an obstacle to avoid collisions; see also col. 5, lines 12-17 indicating using a trained neural network for vehicle navigation]. Regarding claims 3 and 11, the rejection of independent claims 1 and 9 are respectively fully incorporated. 
Tran further discloses: receiving, by the computer system, an indication that a second trajectory determined by the autonomous driving system of the vehicle is associated with a potential collision [see e.g. in col. 10, lines 19-22 indicating a steering input to prevent a potential collision; note also in col. 12, lines 8-11 the obstacle avoidance system that identifies, evaluates, and avoids or negotiates an obstacle in the environment of the vehicle; see also col. 64, lines 62-64 describing an identified potential collision event]; and executing, by the computer system, the one or more trained machine learning models responsive to receiving the indication that the second trajectory is associated with the potential collision [note e.g. in col. 15 the use of neural networks to navigate the vehicle around an obstacle to avoid collisions; see also the portions cited in the previous limitation]. Regarding claims 4, 12, and 20, the rejection of independent claims 1, 9, and 17 are respectively fully incorporated. Tran further discloses that the image data includes at least one of: one or more images captured by a camera of the vehicle and one or more images captured by a light detection and ranging (LIDAR) system of the vehicle [see e.g. col. 6, lines 18-19 and 29-37 indicating images captured by the vehicle sensors including different kinds of cameras and a LIDAR of the vehicle; see also col. 5, lines 8-12]. Regarding claims 7 and 15, the rejection of independent claims 1 and 9 are respectively fully incorporated. Tran further discloses that the autonomous driving adversity conditions include at least one of: a road work zone or a construction zone obscured or missing road markings; and one or more obstacles on or alongside a road segment traveled by the vehicle [note on the top of col. 45 the description of the obstacle detection; see also the obstacle avoidance system 750 shown in fig. 2B and described on lines 8-11 of col. 12; again, note e.g. in col. 
11, lines 54-61 the description indicating that the captured images include depicted vehicle surroundings such as traffic signals and obstacles (which constitute autonomous driving adversity conditions)]. Regarding claims 8 and 16, the rejection of independent claims 1 and 9 are respectively fully incorporated. Tran further discloses that the one or more trained machine learning models includes at least one of: a neural network [again, see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions which may result from a detected obstacle (like a stalled vehicle or other road user)); note in col. 5, lines 12-13 the use of the trained neural network for navigation]; a random forest; a statistical classifier; a Naive Bayes classifier; or a hierarchical clusterer. Regarding claim 14, the rejection of independent claim 9 is fully incorporated. Tran further discloses that predicting the trajectory includes generating, by the one or more machine learning models, control instructions to navigate the vehicle through the trajectory [see e.g. col. 5, lines 12-17 indicating using a trained neural network for vehicle navigation; see also col. 45, lines 1-19 and especially note on lines 10-18 indicating controlling the vehicle during navigation; note also the Vehicle Control Block 220 and the Control System 706 (including a navigation/pathing system 748) depicted in figs. 2A and 2B respectively; see also the description especially in col. 11, lines 22-42].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Tran (as applied to claims 1 and 9 above, respectively) in view of Love et al., US Patent No. 11,874,127 B1 (hereinafter as Love). Regarding claims 5 and 13, the rejection of independent claims 1 and 9 are respectively fully incorporated. Tran does not explicitly teach that predicting the trajectory includes generating, by the machine learning model, a graphical representation of the trajectory projected on the image data. Love teaches predicting a trajectory that includes generating, by a machine learning model, a graphical representation of the trajectory projected on image data [see title; figs. 1A-B and figs. 5A-D; see also col. 2, lines 25-27; col. 9, lines 61-63 indicating generating graphical representations; especially note the visual representations of the vehicle routes described in col. 14, lines 39-55 and col. 23, lines 48-51; see also col. 21, lines 35-41 indicating dynamically updating the map display with vehicle routes (trajectories) and their specific turn-by-turn details]. 
It would have been obvious to one of ordinary skill in the art having the teachings of Tran and Love, before the effective filing date of the claimed invention, to modify the prediction of the trajectory taught by Tran by explicitly specifying that it includes generating, by the machine learning model, a graphical representation of the trajectory projected on the image data, as per the teachings of Love. The motivation for this obvious combination of teachings would be to enable providing a dynamically updated visual display that shows the predicted trajectories for the vehicle overlaid on preexisting image data, which would facilitate a clear and user-friendly depiction of the route, as suggested by Love [again, see col. 14, lines 39-55 and col. 21, lines 35-41] which would provide an overall improved user experience by combining the strength of machine learning predictions and already existing visualizations [see also col. 28, lines 33-36]. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Tran (as applied to claim 1 above) in view of Rohani et al., US Patent No. 10,796,204 B2 (hereinafter as Rohani). Regarding claim 6, the rejection of independent claim 1 is fully incorporated. Although Tran further teaches localizing the vehicle based upon the inertial measurement data and the image data [see col. 10, lines 51-53 and col. 13, lines 15-18], Tran does not explicitly teach executing the one or more trained machine learning models to localize the vehicle based upon the inertial measurement data and the image data. Rohani teaches executing one or more trained machine learning models to localize a vehicle based upon inertial measurement data and image data [note in col. 1, line 65-col. 2, line 1 and col. 2, lines 12-16, 50-59 the use of machine learning for a variety of functional tasks relating to vehicle control which includes locating the vehicle using a variety of sensor data including image and IMU data; especially see col. 
10, and note the motion planning functional layer that involves considering the position of the vehicle; see also col. 3, lines 6-9 and col. 11, lines 58-67; note from col. 14, lines 54-67 and col. 16, lines 36-42, as well as figs. 5-6, the use of neural networks in controlling and implementing each of the operable elements]. It would have been obvious to one of ordinary skill in the art having the teachings of Tran and Rohani, before the effective filing date of the claimed invention, to modify the localization of the vehicle based upon the inertial measurement data and the image data taught by Tran by explicitly specifying the execution of one or more trained machine learning models to perform it, as per the teachings of Rohani. The motivation for this obvious combination of teachings would be to enable accounting for a variety of situations by leveraging machine learning techniques, as suggested by Rohani [see col. 1, lines 19-53], which would provide overall improved robustness by combining the strengths of machine learning predictions and sensor data calculations.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir.
1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-4, 7, and 8 of the instant application are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/221,830 (hereinafter the Copending Application) in view of Tran. This is a provisional nonstatutory double patenting rejection.

Regarding independent claim 1 of the instant application, claim 1 of the Copending Application claims a method [see preamble of claim 1 of the Copending Application] comprising: receiving, by a computer system including one or more processors, sensor data of a vehicle that is traveling, the sensor data indicative of one or more autonomous driving adversity conditions and the sensor data including image data depicting surroundings of the vehicle and inertial measurement data of the vehicle [see the 1st limitation of claim 1 of the Copending Application]; training, by the computer system, one or more machine learning models to detect one or more autonomous adversity conditions using the sensor data, wherein the one or more autonomous driving adversity conditions include conditions that increase a likelihood of failure for an autonomous driving system of the vehicle in determining a safe driving path trajectory, and the one or more machine learning models are trained to detect the one or more autonomous adversity conditions with a first subset of training data associated with the one or more autonomous driving adversity conditions and a second subset of the training data associated with autonomous driving non-adversity conditions [see the first and last limitations of claim 1 of the Copending Application]; and training, by the computer system, one or more machine learning models to predict, using the sensor data, a
trajectory to be followed by the vehicle through the one or more autonomous driving adversity conditions [see the last limitation of claim 1 of the Copending Application].

Claim 1 of the Copending Application does not explicitly claim executing the trained machine learning model to detect … or predict …, using the sensor data, …. Nor does it explicitly claim providing, by the computer system, control instructions to navigate the vehicle according to the predicted trajectory to an autonomous driving system of the vehicle, wherein the autonomous driving system of the vehicle is configured to control the vehicle according to the control instructions to travel along the predicted trajectory through the one or more autonomous driving adversity conditions.

Tran teaches: executing, by the computer system, one or more trained machine learning models, to detect one or more autonomous adversity conditions using the sensor data, wherein the one or more autonomous driving adversity conditions include conditions that increase a likelihood of failure for an autonomous driving system of the vehicle in determining a safe driving path trajectory, and the one or more machine learning models are trained to detect the one or more autonomous adversity conditions with a first subset of training data associated with the one or more autonomous driving adversity conditions and a second subset of the training data associated with autonomous driving non-adversity conditions [note e.g. in col. 5, lines 41-50 the trained neural network that makes safe/reasonable decisions for an autonomous car based on imminent accident versus normal traffic conditions (adversity versus non-adversity training data); see also col. 16, lines 34-44 indicating neural network learning that includes the reduction of dangerous incidents; see also col. 31, lines 12-13 and col.
67, lines 37-38 for training the neural network]; executing, by the computer system, the one or more trained machine learning models, to predict, using the sensor data, a trajectory to be followed by the vehicle through the one or more autonomous driving adversity conditions [see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions, which may result from a detected obstacle (like a stalled vehicle or other road user)); note in col. 5, lines 12-13 the use of the trained neural network for navigation; see also col. 2, lines 40-43 and col. 15, lines 60-62]; and providing, by the computer system, control instructions to navigate the vehicle according to the predicted trajectory to an autonomous driving system of the vehicle, wherein the autonomous driving system of the vehicle is configured to control the vehicle according to the control instructions to travel along the predicted trajectory through the one or more autonomous driving adversity conditions [note e.g. in col. 46, lines 4-8 indicating preparing a vehicle to respond based on a detected object and its likely behavior; see also in lines 33-46 of col. 46 the indication of providing recommendations/commands/driving options based on previously learned circumstances including driving adversity conditions and accompanying navigation predictions; see e.g. col. 5, lines 12-17 indicating using a trained neural network for vehicle navigation; see also col. 45, lines 1-19, and especially note on lines 10-18 the indication of controlling the vehicle during navigation; note also the Vehicle Control Block 220 and the Control System 706 (including a navigation/pathing system 748) depicted in figs. 2A and 2B, respectively; see also the description especially in col. 11, lines 22-42].
It would have been obvious to one of ordinary skill in the art having the claims of the Copending Application and the teachings of Tran before the effective filing date of the claimed invention to modify the framework recited by claim 1 of the Copending Application by specifying executing the machine learning model to detect the adversity conditions, predict the trajectory to be followed by the vehicle through the one or more autonomous driving adversity conditions, and provide control instructions to navigate the vehicle according to the predicted trajectory to an autonomous driving system of the vehicle, as per the teachings of Tran. The motivation for this obvious combination of teachings would be to enable the autonomous driving system to be immune to runtime occlusions from dynamic objects by adapting autonomously during the drive, as suggested by Tran [see e.g. col. 7, lines 27-47].

Regarding claim 2 of the instant application, the nonstatutory double patenting rejection of claim 1 is incorporated. Claim 1 of the Copending Application does not explicitly claim: executing, by the computer system, the trained one or more machine learning models to detect the one or more autonomous driving adversity conditions using the vehicle sensor data; and executing, by the computer system, the trained one or more machine learning models to predict the trajectory to be followed by the vehicle responsive to detecting the one or more autonomous driving adversity conditions. Tran further teaches: executing, by the computer system, the trained one or more machine learning models to detect the one or more autonomous driving adversity conditions using the vehicle sensor data [note in col. 6, lines 18-21 indicating the use of neural networks to recognize objects in the captured images; note at the top of col. 45 the description of the obstacle detection; see also the obstacle avoidance system 750 shown in fig. 2B and described on lines 8-11 of col. 12; again, note e.g. in col.
11, lines 54-61 the description indicating that the captured images include depicted vehicle surroundings such as traffic signals and obstacles (which constitute autonomous driving adversity conditions)]; and executing, by the computer system, the trained one or more machine learning models to predict the trajectory to be followed by the vehicle responsive to detecting the one or more autonomous driving adversity conditions [see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions) which may result from detecting an obstacle (like a stalled vehicle or another road user); note e.g. in col. 15 the use of neural networks to navigate the vehicle around an obstacle to avoid collisions; see also col. 5, lines 12-17 indicating using a trained neural network for vehicle navigation]. Refer to the nonstatutory double patenting rejection of the independent claim for motivations to combine the claim recitations and the teachings of Tran.

Regarding claim 3 of the instant application, the nonstatutory double patenting rejection of claim 1 is incorporated. Claim 1 of the Copending Application does not explicitly claim: receiving, by the computer system, an indication that a second trajectory determined by the autonomous driving system of the vehicle is associated with a potential collision; and executing, by the computer system, the trained one or more machine learning models responsive to receiving the indication that the second trajectory is associated with the potential collision. Tran further teaches: receiving, by the computer system, an indication that a second trajectory determined by the autonomous driving system of the vehicle is associated with a potential collision [see e.g. in col.
10, lines 19-22 indicating a steering input to prevent a potential collision; note also in col. 12, lines 8-11 the obstacle avoidance system that identifies, evaluates, and avoids or negotiates an obstacle in the environment of the vehicle; see also col. 64, lines 62-64 describing an identified potential collision event]; and executing, by the computer system, the trained one or more machine learning models responsive to receiving the indication that the second trajectory is associated with the potential collision [note e.g. in col. 15 the use of neural networks to navigate the vehicle around an obstacle to avoid collisions; see also the portions cited in the previous limitation]. Refer to the nonstatutory double patenting rejection of the independent claim for motivations to combine the claim recitations and the teachings of Tran.

Regarding claim 4 of the instant application, the nonstatutory double patenting rejection of claim 1 is incorporated. Claim 1 of the Copending Application does not explicitly claim that the image data includes at least one of: one or more images captured by a camera of the vehicle and one or more images captured by a light detection and ranging (LIDAR) system of the vehicle. Tran further teaches that the image data includes at least one of: one or more images captured by a camera of the vehicle and one or more images captured by a light detection and ranging (LIDAR) system of the vehicle [see e.g. col. 6, lines 18-19 and 29-37 indicating images captured by the vehicle sensors including different kinds of cameras and a LIDAR of the vehicle; see also col. 5, lines 8-12]. Refer to the nonstatutory double patenting rejection of the independent claim for motivations to combine the claim recitations and the teachings of Tran.

Regarding claim 7 of the instant application, the nonstatutory double patenting rejection of claim 1 is incorporated.
Claim 1 of the Copending Application does not explicitly claim that the autonomous driving adversity conditions include at least one of: a road work zone or a construction zone; obscured or missing road markings; and one or more obstacles on or alongside a road segment traveled by the vehicle. Tran further teaches that the autonomous driving adversity conditions include at least one of: a road work zone or a construction zone; obscured or missing road markings; and one or more obstacles on or alongside a road segment traveled by the vehicle [note at the top of col. 45 the description of the obstacle detection; see also the obstacle avoidance system 750 shown in fig. 2B and described on lines 8-11 of col. 12; again, note e.g. in col. 11, lines 54-61 the description indicating that the captured images include depicted vehicle surroundings such as traffic signals and obstacles (which constitute autonomous driving adversity conditions)]. Refer to the nonstatutory double patenting rejection of the independent claim for motivations to combine the claim recitations and the teachings of Tran.

Regarding claim 8 of the instant application, the nonstatutory double patenting rejection of claim 1 is incorporated. Claim 1 of the Copending Application does not explicitly claim that the machine learning model includes at least one of: a neural network. Tran further teaches that the machine learning model includes at least one of: a neural network [again, see e.g. in col. 5, lines 41-50 the use of a trained neural network to make a driving decision including a lane change (which is a prediction of a trajectory to be followed) responsive to the recognition of an imminent accident (which is an example of one or more autonomous driving adversity conditions, which may result from a detected obstacle (like a stalled vehicle or other road user)); note in col.
5, lines 12-13 the use of the trained neural network for navigation]; a random forest; a statistical classifier; a Naive Bayes classifier; or a hierarchical clusterer. Refer to the nonstatutory double patenting rejection of the independent claim for motivations to combine the claim recitations and the teachings of Tran.

Claims 9-12 and 14-20 of the instant application are provisionally rejected (analogous to the rejections of claims 1-4 (for claims 9-12 and 17-20) and the rejections of claims 6-8 (for claims 14-16), above) on the ground of nonstatutory double patenting as being unpatentable over claim 11 of the Copending Application in view of Tran. This is a provisional nonstatutory double patenting rejection.

Regarding independent claims 9 and 17 of the instant application, they are provisionally rejected on the ground of nonstatutory double patenting (as being unpatentable over claim 11 of the Copending Application in view of Tran) analogous to the nonstatutory double patenting rejection of independent claim 1. Please refe

Prosecution Timeline

Jul 13, 2023 - Application Filed
Mar 21, 2025 - Non-Final Rejection (§101, §102, §103)
Jun 03, 2025 - Interview Requested
Jun 17, 2025 - Examiner Interview Summary
Jun 27, 2025 - Response Filed
Oct 03, 2025 - Final Rejection (§101, §102, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554263 - DRONE-ASSISTED VEHICLE EMERGENCY RESPONSE SYSTEM (granted Feb 17, 2026; 2y 5m to grant)
Patent 12549436 - INTERNET OF THINGS CONFIGURATION USING EYE-BASED CONTROLS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12474181 - METHOD FOR GENERATING DIAGRAMMATIC REPRESENTATION OF AREA AND ELECTRONIC DEVICE THEREOF (granted Nov 18, 2025; 2y 5m to grant)
Patent 12443856 - DECISION INTELLIGENCE SYSTEM AND METHOD (granted Oct 14, 2025; 2y 5m to grant)
Patent 12443272 - Proactive Actions Based on Audio and Body Movement (granted Oct 14, 2025; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 33%
With Interview: 50% (+17.1% lift)
Median Time to Grant: 3y 10m
PTA Risk: Moderate

Based on 159 resolved cases by this examiner. Grant probability derived from career allow rate.
