Prosecution Insights
Last updated: April 19, 2026
Application No. 18/425,175

SYSTEMS AND METHODS FOR AUTOMATICALLY DETECTING ANOMALOUS DRIVING PATTERNS IN VEHICLES

Final Rejection §103
Filed: Jan 29, 2024
Examiner: SIENKO, TANYA CHRISTINE
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Verizon Patent and Licensing Inc.
OA Round: 2 (Final)
Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (above average; 167 granted / 195 resolved; +33.6% vs TC avg)
Interview Lift: +15.7% (strong; resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline); 14 currently pending
Total Applications: 209 (career history; across all art units)
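The headline figures above are simple ratios over the examiner's resolved cases. A quick sketch; the with/without-interview split below is hypothetical, since only the aggregate +15.7% lift is reported:

```python
# Career allow rate from the reported counts: 167 granted of 195 resolved.
granted, resolved = 167, 195
print(f"allow rate: {granted / resolved:.0%}")  # -> 86%

# Interview lift = allow rate in cases with an interview minus the rate
# in cases without one. The counts below are hypothetical placeholders;
# the page reports only the aggregate lift.
with_iv = (47, 48)       # (granted, resolved) with interview -- hypothetical
without_iv = (120, 147)  # (granted, resolved) without        -- hypothetical
lift = with_iv[0] / with_iv[1] - without_iv[0] / without_iv[1]
print(f"interview lift: {lift:+.1%}")
```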

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 46.1% (+6.1% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 26.5% (-13.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 195 resolved cases
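Each statute's Tech Center baseline can be backed out from the examiner's rate and its delta; a quick sketch (the page itself reports only the deltas, so the baselines are derived, not quoted):

```python
# Implied TC-average baseline for each statute:
# baseline = examiner's overcome rate - (delta vs TC avg)
rates = {"§101": (10.7, -29.3), "§103": (46.1, 6.1),
         "§102": (15.2, -24.8), "§112": (26.5, -13.5)}
for statute, (rate, delta) in rates.items():
    baseline = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{baseline:.1f}%")
```

Note that all four deltas are measured against the same implied ~40% baseline, consistent with a single Tech Center average estimate.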

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims

Claims 1-20 are pending in the application.

Claim Rejections - 35 USC § 112

The rejection of claim 6 under 35 USC § 112 has been addressed in the amendment and is removed.

Response to Arguments

Applicant's arguments, see Remarks, filed 12/30/2025, with respect to the rejection(s) of independent claims 1, 8, and 15 under 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Sindhwani and in view of Klaus. See below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5, 8, 10, 12-13, 15-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0082292 (Sindhwani et al., henceforth Sindhwani) and in light of "Anomaly Detection in the Latent Space of VAEs", attached as NPL-Klaus.pdf, henceforth "Klaus."

As for claim 1, Sindhwani teaches a method (Sindhwani: Figs. 8-9) comprising: receiving, by a device, historical input data associated with trips traversed by a plurality of vehicles with vehicle tracking units (VTUs) (Sindhwani: see steps 802-806, where telemetry during a trip of multiple autonomous vehicles is collected and sent to a "telemetry collection engine" of an anomaly detection system. "From a start block, the method 800 proceeds to block 802, where, for a plurality of autonomous vehicles, a telemetry collection engine 316 of each autonomous vehicle 300 receives telemetry information during a trip from one or more vehicle state sensor device(s) 304 of the autonomous vehicle 300. A trip (sometimes referred to as a "flight" or "mission") describes an action taken by the autonomous vehicle 300..." [0052]. Also see [0053]-[0055] for more details as to how the data gets sent to a telemetry data store for further use in model training.); processing, by the device, the historical input data to generate training data (Sindhwani: "By storing the time series data records in the telemetry data store 716, a large set of time series data records may be collected for model training purposes.
It is worth noting that the time series data records need not be tagged as normal or anomalous, as the remainder of the method 800 will automatically detect the anomalous time series data records in the training data and treat them appropriately." [0055]); training, by the device, a neural network model, with the training [data], (Sindhwani: Fig. 9, blocks 902-904. "At block 810, a model training engine 708 of the anomaly detection system 702 initializes a machine learning model having a set of fitting weights and a set of time series data record weights. The anomaly detection system 702 may be configured to train and use any suitable type of machine learning model for detecting anomalies." [0056]); receiving, by the device, input data associated with a trip traversed by a vehicle of the plurality of vehicles with the VTUs (Sindhwani: "At block 912, the telemetry collection engine 706 of the anomaly detection system 702 receives a new time series data record containing telemetry information from an autonomous vehicle 300. In some embodiments, the new time series data record may represent an entire trip, or may represent a portion of a trip. In some embodiments, the new time series data record may be transmitted during the trip by the autonomous vehicle 300 so that anomalies can be detected in real-time." [0094]); processing, by the device, the input data to generate time series data (Sindhwani: the transmission of time series data record (block 806 in Fig. 8) per trip from a telemetry collection engine (block 802) implies some sort of preparation of the latter to form the former); comparing, by the device, [to] determine whether the trip is anomalous or not anomalous (Sindhwani: "At block 914, the anomaly detection engine 710 processes the new time series data record using the machine learning model.
In some embodiments, the machine learning model takes the new time series data record as input and outputs an anomaly score, which is compared to the anomaly threshold value." [0095]. (For the comparison and determination as to whether an anomaly has been detected, see [0096]-[0097].)); and performing, by the device, one or more actions based on the determination of whether the trip is anomalous or not anomalous. (Sindhwani: "If an anomaly has been detected, then the result of decision block 916 is YES, and the method 800 advances to block 918, where the anomaly detection engine 710 transmits a command to address the anomaly to the autonomous vehicle 300. Any suitable command may be transmitted. For example, in some embodiments, the anomaly detection engine 710 may determine an action to be taken in response to the anomaly, and may transmit a command to the autonomous vehicle 300 to cause the autonomous vehicle 300 to perform the action to address the anomaly. In some embodiments, the action may be at least one of rescheduling a future trip, navigating to an emergency repair location, and immediately performing a landing procedure. In some embodiments, the action may include accepting remote control from a human operator to address the anomaly." [0097])

Sindhwani does not specifically mention training, by the device, a neural network model, with the training data, to generate a trained neural network model that provides a latent space representation of vectors; or clustering, by the device, the latent space representation of vectors to generate clusters, each of the clusters representing a type or a class. However, Klaus teaches training, by the device, a neural network model, with the training data, to generate a trained neural network model that provides a latent space representation of vectors (Klaus: Fig. 4.6 shows the overall architecture. The latent space is the section in the middle. See pgs. 26-27. Note the construction of the pinch-point "bottleneck" in the center, which is latent space. Also note that this system is a VAE, which is a variant of an auto-encoder (AE), which is a neural network. See beginning of Background and 2.0.1 "Autoencoder"; plus "RoadAnomaly21" was also included as anomaly data for training. (pg. 32) Each complete dataset was split into one set for training, one set for testing, and one set for evaluation.) Klaus also teaches clustering, by the device, the latent space representation of vectors to generate clusters, each of the clusters representing a type or a class. (Klaus: "This form of the KL-divergence is used in the CL-VAE to condition the latent space to form multiple clusters around the assigned Gaussians. As shown in [61], the CL-VAE is capable of producing a latent space where the data forms multiple clusters, depending on the class label." (pg. 25). Also see Figure 5.8, where the image on the right shows clustering in the latent space, with one cluster being normal data, while the other cluster shows abnormal data. (on pg. 38))

It would have been obvious to one of ordinary skill in the art at the time of the invention to use the neural-network/autoencoder of Klaus in the system of Sindhwani with a reasonable expectation of success. Both inventions involve using recorded data to train a neural network to detect anomalies, albeit Klaus explains in much greater detail the use of a Variational Autoencoder and the different additional activities to encourage data mapped into latent space to form into clusters for later anomaly identification.

As for claim 2, Sindhwani, modified by Klaus, teaches comparing the determination of whether the trip is anomalous or not anomalous with historical determinations included in a data structure. (Sindhwani: "By storing the time series data records in the telemetry data store 716, a large set of time series data records may be collected for model training purposes.
It is worth noting that the time series data records need not be tagged as normal or anomalous, as the remainder of the method 800 will automatically detect the anomalous time series data records in the training data and treat them appropriately." [0055])

As for claim 5, Sindhwani, modified by Klaus, teaches including anomalous trips in the training data to cause the trained neural network model to generate an anomaly cluster. (Klaus: See beginning of Background and 2.0.1 "Autoencoder"; plus "RoadAnomaly21" was also included as anomaly data for training. (pg. 32) Each complete dataset was split into one set for training, one set for testing, and one set for evaluation.)

As for claim 8, Sindhwani teaches a device, comprising: one or more processors configured to: (Sindhwani: "In some embodiments, a system is provided. The system comprises at least one computing device that includes at least one processor and a non-transitory computer-readable medium. The computer-readable medium has logic stored thereon that, in response to execution by the at least one processor, causes the system to perform actions comprising: receiving a time series data record from a monitored system; processing the time series data record using a machine learning model to generate an anomaly score, wherein the machine learning model was trained on a plurality of previous time series data records..." [0006]) receive historical input data associated with trips traversed by a plurality of vehicles with vehicle tracking units (VTUs) (Sindhwani: see steps 802-806, where telemetry during a trip of multiple autonomous vehicles is collected and sent to a "telemetry collection engine" of an anomaly detection system. "From a start block, the method 800 proceeds to block 802, where, for a plurality of autonomous vehicles, a telemetry collection engine 316 of each autonomous vehicle 300 receives telemetry information during a trip from one or more vehicle state sensor device(s) 304 of the autonomous vehicle 300. A trip (sometimes referred to as a "flight" or "mission") describes an action taken by the autonomous vehicle 300..." [0052]. Also see [0053]-[0055] for more details as to how the data gets sent to a telemetry data store for further use in model training.); process the historical input data to generate training data (Sindhwani: "By storing the time series data records in the telemetry data store 716, a large set of time series data records may be collected for model training purposes. It is worth noting that the time series data records need not be tagged as normal or anomalous, as the remainder of the method 800 will automatically detect the anomalous time series data records in the training data and treat them appropriately." [0055]); train a neural network model, with the training data, to generate a trained [neural network model] (Sindhwani: Fig. 9, blocks 902-904. "At block 810, a model training engine 708 of the anomaly detection system 702 initializes a machine learning model having a set of fitting weights and a set of time series data record weights. The anomaly detection system 702 may be configured to train and use any suitable type of machine learning model for detecting anomalies." [0056].); receive input data associated with a trip traversed by a vehicle of the plurality of vehicles with the VTUs (Sindhwani: "At block 912, the telemetry collection engine 706 of the anomaly detection system 702 receives a new time series data record containing telemetry information from an autonomous vehicle 300. In some embodiments, the new time series data record may represent an entire trip, or may represent a portion of a trip. In some embodiments, the new time series data record may be transmitted during the trip by the autonomous vehicle 300 so that anomalies can be detected in real-time." [0094]); process the input data to generate time series data (Sindhwani: the transmission of time series data record (block 806 in Fig.
8) per trip from a telemetry collection engine (block 802) implies some sort of preparation of the latter to form the former); compare the time series data [to] determine whether the trip is anomalous or not anomalous (Sindhwani: "At block 914, the anomaly detection engine 710 processes the new time series data record using the machine learning model. In some embodiments, the machine learning model takes the new time series data record as input and outputs an anomaly score, which is compared to the anomaly threshold value." [0095].) and perform one or more actions based on the determination of whether the trip is anomalous or not anomalous. (Sindhwani: "If an anomaly has been detected, then the result of decision block 916 is YES, and the method 800 advances to block 918, where the anomaly detection engine 710 transmits a command to address the anomaly to the autonomous vehicle 300. Any suitable command may be transmitted. For example, in some embodiments, the anomaly detection engine 710 may determine an action to be taken in response to the anomaly, and may transmit a command to the autonomous vehicle 300 to cause the autonomous vehicle 300 to perform the action to address the anomaly. In some embodiments, the action may be at least one of rescheduling a future trip, navigating to an emergency repair location, and immediately performing a landing procedure. In some embodiments, the action may include accepting remote control from a human operator to address the anomaly." [0097]) Sindhwani does not specifically mention train a neural network model, with the training data, to generate a trained neural network model that provides a latent space representation of vectors; or clustering the latent space representation of vectors to generate clusters. However, Klaus teaches [to] train a neural network model, with the training data, to generate a trained neural network model that provides a latent space representation of vectors (Klaus: Fig. 
4.6 shows the overall architecture. The latent space is the section in the middle. See pgs. 26-27. Note the construction of the pinch-point "bottleneck" in the center, which is latent space. Also note that this system is a VAE, which is a variant of an auto-encoder (AE), which is a neural network. See beginning of Background and 2.0.1 "Autoencoder"; plus "RoadAnomaly21" was also included as anomaly data for training. (pg. 32) Each complete dataset was split into one set for training, one set for testing, and one set for evaluation.) Klaus also teaches clustering the latent space representation of vectors to generate clusters, each of the clusters representing a type or a class (Klaus: "This form of the KL-divergence is used in the CL-VAE to condition the latent space to form multiple clusters around the assigned Gaussians. As shown in [61], the CL-VAE is capable of producing a latent space where the data forms multiple clusters, depending on the class label." (pg. 25). Also see Figure 5.8, where the image on the right shows clustering in the latent space, with one cluster being normal data, while the other cluster shows abnormal data. (on pg. 38))

It would have been obvious to one of ordinary skill in the art at the time of the invention to use the neural-network/autoencoder of Klaus in the system of Sindhwani with a reasonable expectation of success. Both inventions involve using recorded data to train a neural network to detect anomalies, albeit Klaus explains in much greater detail the use of a Variational Autoencoder and the different additional activities to encourage data mapped into latent space to form into clusters for later anomaly identification.
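The combination the examiner maps here (encode each trip into a latent vector, cluster the latent vectors of historical trips, flag a new trip whose latent vector is far from every cluster) can be sketched apart from the VAE itself. A minimal sketch, with a fixed linear projection standing in for a trained bottleneck; the threshold and all names are illustrative, not drawn from Sindhwani or Klaus:

```python
import math
import random

random.seed(0)

# Stand-in "encoder": in a CL-VAE this would be the learned bottleneck
# mapping a telemetry feature vector to a low-dimensional latent vector.
# Here it is a fixed linear projection, purely for illustration.
W = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -1.0),
     (0.5, 0.5), (-1.0, 0.0), (0.0, -1.0), (1.0, 0.5)]

def encode(trip):  # 8 telemetry features -> 2-D latent vector
    return tuple(sum(f * w[d] for f, w in zip(trip, W)) for d in range(2))

# "Training" data: latent vectors of historical (normal) trips.
normal_trips = [[random.gauss(0, 1) for _ in range(8)] for _ in range(200)]
latent = [encode(t) for t in normal_trips]

# Cluster the latent vectors (plain k-means, k=2).
def kmeans(points, k=2, iters=20):
    centers = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: math.dist(p, centers[j]))].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

centers = kmeans(latent)

# Inference: flag a trip as anomalous if its latent vector is far from
# every cluster of normal trips (the threshold 6.0 is illustrative).
def is_anomalous(trip, threshold=6.0):
    z = encode(trip)
    return min(math.dist(z, c) for c in centers) > threshold

print(is_anomalous([0.0] * 8))   # near the normal clusters
print(is_anomalous([10.0] * 8))  # extreme telemetry, far out in latent space
```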
As for claim 10, Sindhwani, as modified by Klaus, teaches wherein the historical input data includes data identifying one or more of: geographical positions of the plurality of vehicles over a time period, speeds of the plurality of vehicles over the time period, accelerations of the plurality of vehicles over the time period, headings of the plurality of vehicles over the time period, proximities of the plurality of vehicles to intersections over the time period, or types of roads traversed by the plurality of vehicles over the time period. (Sindhwani: "In some embodiments, the autonomous vehicle 300 is configured to collect telemetry data and transmit the collected telemetry data to an anomaly detection system. In some embodiments, the autonomous vehicle 300 is configured to receive commands from the anomaly detection system in the event of an anomaly being detected, and to take appropriate action to address the anomaly. In some embodiments, the autonomous vehicle 300 is an aircraft. In other embodiments, any other type of autonomous vehicle 300 capable of navigating along a route, such as a wheeled vehicle, may be used." [0032])

As for claim 12, Sindhwani, in light of Klaus, teaches wherein the neural network model is a variational autoencoder model. (Klaus: uses both anomalous and non-anomalous training data in a Variational Autoencoder to detect anomalies in an image; see Section 2.0.2 and Section 4.)

As for claim 13, Sindhwani, as modified by Klaus, teaches wherein the clusters identify different types of trips traversed by the plurality of vehicles. (Klaus: see Figure 5.8, where the image on the right shows clustering in the latent space, with one cluster being normal data, while the other cluster shows abnormal data. (on pg. 38).
These would correspond to a normal type of a trip as opposed to an abnormal trip, given that the training data combines cityscapes with normal images (which would correspond to a normal trip) and abnormal images (which would correspond to an abnormal trip).)

As for claim 15, Sindhwani, as modified by Klaus, teaches a non-transitory computer-readable medium storing a set of instructions, (Sindhwani: "In some embodiments, a non-transitory computer readable medium is provided. The computer-readable medium has logic stored thereon that, in response to execution by one or more processors of a computing system, cause the computing system to perform actions for detecting anomalies in time series data records." [0004]) the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: receive input data associated with a trip traversed by a vehicle (Sindhwani: see steps 802-806, where telemetry during a trip of multiple autonomous vehicles is collected and sent to a "telemetry collection engine" of an anomaly detection system. "From a start block, the method 800 proceeds to block 802, where, for a plurality of autonomous vehicles, a telemetry collection engine 316 of each autonomous vehicle 300 receives telemetry information during a trip from one or more vehicle state sensor device(s) 304 of the autonomous vehicle 300. A trip (sometimes referred to as a "flight" or "mission") describes an action taken by the autonomous vehicle 300..." [0052]. Also see [0053]-[0055] for more details as to how the data gets sent to a telemetry data store for further use in model training); process, by the device, the input data to generate time series data (Sindhwani: the transmission of time series data record (block 806 in Fig.
8) per trip from a telemetry collection engine (block 802) implies some sort of preparation of the latter to form the former); compare, via a neural network model that is trained with historical input data associated with trips traversed by a plurality of vehicles with vehicle tracking units (VTUs), [to] determine whether the trip is anomalous or not anomalous (Sindhwani: "At block 914, the anomaly detection engine 710 processes the new time series data record using the machine learning model. In some embodiments, the machine learning model takes the new time series data record as input and outputs an anomaly score, which is compared to the anomaly threshold value." [0095]. (For the comparison and determination as to whether an anomaly has been detected, see [0096]-[0097].)); and perform one or more actions based on the determination of whether the trip is anomalous or not anomalous. (Sindhwani: "If an anomaly has been detected, then the result of decision block 916 is YES, and the method 800 advances to block 918, where the anomaly detection engine 710 transmits a command to address the anomaly to the autonomous vehicle 300. Any suitable command may be transmitted. For example, in some embodiments, the anomaly detection engine 710 may determine an action to be taken in response to the anomaly, and may transmit a command to the autonomous vehicle 300 to cause the autonomous vehicle 300 to perform the action to address the anomaly. In some embodiments, the action may be at least one of rescheduling a future trip, navigating to an emergency repair location, and immediately performing a landing procedure. In some embodiments, the action may include accepting remote control from a human operator to address the anomaly." [0097])

Sindhwani does not specifically mention compare the time series data and clusters to determine whether the trip is anomalous or not anomalous, each of the clusters representing a type or a class, wherein the clusters are generated via a neural network model. However, Klaus teaches compare the time series data and clusters to determine whether the trip is anomalous or not anomalous, each of the clusters representing a type or a class, wherein the clusters are generated via a neural network model (Klaus: Fig. 4.6 shows the overall architecture. The latent space is the section in the middle. See pgs. 26-27. Note the construction of the pinch-point "bottleneck" in the center, which is latent space. Also note that this system is a VAE, which is a variant of an auto-encoder (AE), which is a neural network. See beginning of Background and 2.0.1 "Autoencoder"; plus "RoadAnomaly21" was also included as anomaly data for training. (pg. 32) Each complete dataset was split into one set for training, one set for testing, and one set for evaluation; "This form of the KL-divergence is used in the CL-VAE to condition the latent space to form multiple clusters around the assigned Gaussians. As shown in [61], the CL-VAE is capable of producing a latent space where the data forms multiple clusters, depending on the class label." (pg. 25). Also see Figure 5.8, where the image on the right shows clustering in the latent space, with one cluster being normal data, while the other cluster shows abnormal data. (on pg. 38))

It would have been obvious to one of ordinary skill in the art at the time of the invention to use the neural-network/autoencoder of Klaus in the system of Sindhwani with a reasonable expectation of success.
Both inventions involve using recorded data to train a neural network to detect anomalies, albeit Klaus explains in much greater detail the use of a Variational Autoencoder and the different additional activities to encourage data mapped into latent space to form into clusters for later anomaly identification.

As for claim 16, Sindhwani, as modified by Klaus, teaches wherein the one or more instructions further cause the device to compare the determination of whether the trip is anomalous or not anomalous with historical determinations included in a data structure. (Sindhwani: If the data is stored on a database for reuse then it will have been stored using a particular data structure; "FIG. 4-FIG. 6 include several charts that illustrate non-limiting example embodiments of telemetry information collected by an autonomous vehicle according to various aspects of the present disclosure. The charts in FIG. 4-FIG. 6 illustrate telemetry information collected by vehicle state sensor device(s) 304 of an autonomous vehicle 300. For each line in each chart, the telemetry information may be provided as a time series of values generated for the given characteristic. In some embodiments, a group of multiple time series such as the illustrated time series may be collected to create a time series data record for a given time period." [0040])

As for claim 19, Sindhwani, as modified by Klaus, teaches wherein the one or more instructions further cause the device to: include anomalous trips in the historical input data to cause the neural network model to generate an anomaly cluster. (Klaus: See 4.5 for training explanations of using both regular datasets and anomaly-containing datasets. Also see Fig. 5.8, which shows how the anomalous data gets clustered out into its own cluster in latent space. (far-right diagram))

Claims 3-4 are rejected under 35 U.S.C.
103 as being unpatentable over Sindhwani, in light of Klaus, as applied to claim 1 above, and further in view of “A Deep-Convolutional-Neural-Network-Based Semi-Supervised Learning Method for Anomaly Crack Detection”, see attached NPL-Gao.pdf, henceforth “Gao”. As for claim 3, Sindhwani, as modified by Klaus, does not specifically teach expanding the clusters based on feedback and to address falsely detected not anomalous trips. (Klaus mentions the number of false positives but does not go into detail as to how to use the false positives in feedback to improve detection.) However, Gao teaches expanding the clusters based on feedback and to address falsely detected not anomalous [data] (Gao: "Semi-supervised learning combines supervised learning with unsupervised learning together, and it can train classifiers with few labeled samples. Typical semi-supervised learning algorithms, for example, self-training [21], hybrid models [22], graph-based ones [23] and SVM-based [24] ones can be applied to wall crack detection [25], pavement crack detection [26] and steel structure surface defect detection [27]. During the training, classifiers are trained by a small amount of labeled data and then are employed to classify a great amount of unlabeled data. Subsequently, the mislabeled samples are picked out and corrected manually and are reused as the training data in the next round." (pg. 2). The result of this reuse would change what are considered the boundaries of the clusters identified as anomalous/non-anomalous since the label attached to any one misclassified data point would have changed. Therefore, a false positive would have been identified by the system as being in a non-normal area when it was, in fact, normal. After reclassification, the location in latent space at which the false positive data exists will have shifted from being identified as non-normal to normal, thus expanding the normal cluster.) 
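Gao's relabel-and-retrain loop, as the examiner reads it (classify unlabeled data, manually correct the mislabeled samples, reuse them as training data in the next round), reduces to a short iteration. A schematic sketch; every function and value here is an illustrative stand-in, not code or data from Gao:

```python
# Schematic of the semi-supervised loop cited from Gao: train on a few
# labels, classify unlabeled data, manually correct mislabels, and fold
# the corrections back into the next training round. All names are
# illustrative placeholders.

def train(labeled):
    # Stand-in "classifier": remember the labeled samples (1-NN style).
    return list(labeled)

def classify(model, x):
    # Label x by its nearest labeled sample.
    return min(model, key=lambda s: abs(s[0] - x))[1]

def manual_review(x):
    # Stand-in for human correction; "ground truth" is anomalous above 5.
    return "anomalous" if x > 5 else "normal"

labeled = [(0.0, "normal"), (9.0, "anomalous")]
unlabeled = [1.0, 4.8, 5.2, 8.0]

for _ in range(2):  # two relabel-and-retrain rounds
    model = train(labeled)
    for x in unlabeled:
        pred = classify(model, x)
        if manual_review(x) != pred:
            # A corrected sample joins the training data, shifting the
            # effective cluster boundary (claims 3-4's expand/reduce).
            labeled.append((x, manual_review(x)))

print(sorted(labeled))
# -> [(0.0, 'normal'), (4.8, 'normal'), (9.0, 'anomalous')]
```

The point 4.8 starts out misclassified as anomalous (it is nearer to 9.0 than to 0.0) and ends up labeled normal, which is the "expanding the normal cluster" effect the rejection describes.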
It would have been obvious to one of ordinary skill in the art at the time of the application to use the retraining techniques of Gao in the system of Sindhwani, as modified by Klaus. Gao's algorithms are being used to identify cracks (which are considered anomalies) in images of buildings, but Gao points out that the technique of semi-supervised anomaly detection (using neural networks) has already been used in such differing areas as cancer detection, ultrasound detection, and disease detection of industrial products (pg. 2). The motivation would be to improve the anomaly detection ability and reduce the instances of false positives.

As for claim 4, Sindhwani, as modified by Klaus and Gao, teaches reducing the clusters based on feedback and to address falsely detected anomalous trips. (Gao: "Semi-supervised learning combines supervised learning with unsupervised learning together, and it can train classifiers with few labeled samples. Typical semi-supervised learning algorithms, for example, self-training [21], hybrid models [22], graph-based ones [23] and SVM-based [24] ones can be applied to wall crack detection [25], pavement crack detection [26] and steel structure surface defect detection [27]. During the training, classifiers are trained by a small amount of labeled data and then are employed to classify a great amount of unlabeled data. Subsequently, the mislabeled samples are picked out and corrected manually and are reused as the training data in the next round." (pg. 2). The result of this reuse would change what are considered the boundaries of the clusters identified as anomalous/non-anomalous since the label attached to any one misclassified data point would have changed. Therefore, a false negative would have been identified by the system as being in a normal area when it was, in fact, anomalous.
After reclassification, the location in latent space at which the false negative data exists will have shifted from being identified as normal to anomalous, thus shrinking the normal cluster.)

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sindhwani, in light of Klaus, as applied to claim 1 above, and further in view of US Pat. 11,840,244 (Levy et al., henceforth Levy).

As for claim 7, Sindhwani, as modified by Klaus, does not specifically teach wherein performing the one or more actions comprises one or more of: generating an alert for a driver of the vehicle based on the determination that the trip is anomalous; or generating an alert for a fleet manager of the vehicle based on the determination that the trip is anomalous. However, Levy teaches wherein performing the one or more actions comprises one or more of: generating an alert for a driver of the vehicle based on the determination that the trip is anomalous; or generating an alert for a fleet manager of the vehicle based on the determination that the trip is anomalous. (Levy: "... the fleet anomaly detector 130 may be configured to cause, in real-time, implementation of at least one mitigation action for mitigating the cyber threat. The fleet anomaly detector 130 may be configured to send instructions for implementing the mitigation actions to the fleet manager 160, to any of the vehicle control systems 170, to a server used for providing connected vehicle services (e.g., a server of a traffic control service), among the data sources 140, and the like." Col. 7, lines 16-24.)

It would have been obvious to one of ordinary skill in the art at the time of the application to add on the alert for the fleet manager, as outlined by Levy, in the system of Sindhwani, as modified by Klaus. The motivation would be to expand the warnings within the system.

Claim 9 is rejected under 35 U.S.C.
103 as being unpatentable over Sindhwani in light of Klaus as applied to claim 8 above, and further in view of Gao.

As for claim 9, neither Sindhwani nor Klaus teaches wherein the one or more processors, to perform the one or more actions, are configured to one or more of: cause video for the vehicle to be recorded based on the determination that the trip is anomalous. Nor do they specifically teach retrain[ing] the neural network model based on the determination of whether the trip is anomalous or not anomalous. However, Gao teaches retrain[ing] the neural network model based on the determination of whether the trip is anomalous or not anomalous. (Gao: This would be retraining, as per Gao, due to a false positive case or a false negative case. See the explanations above for retraining in the false positive case (claim 3) or the false negative case (claim 4).)

It would have been obvious to one of ordinary skill in the art at the time of the application to use the retraining techniques of Gao in the system of Sindhwani, as modified by Klaus. Gao's algorithms are used to identify cracks (which are considered anomalies) in images of buildings, but Gao points out that the technique of semi-supervised anomaly detection (using neural networks) has already been used in areas as different as cancer detection, ultrasound detection, and disease detection of industrial products (pg. 2). The motivation would be to improve the anomaly detection ability and reduce the instances of false positives/negatives.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Sindhwani in light of Klaus as applied to claim 8 above, and further in view of Bakheet.

As for claim 11, Sindhwani, as modified by Klaus, teaches wherein the one or more processors, to process the historical input data to generate the training data, are configured to: generate temporally ordered sets of feature vectors based on the historical input data (Sindhwani: "FIG. 4-FIG.
6 include several charts that illustrate non-limiting example embodiments of telemetry information collected by an autonomous vehicle according to various aspects of the present disclosure. The charts in FIG. 4-FIG. 6 illustrate telemetry information collected by vehicle state sensor device(s) 304 of an autonomous vehicle 300. For each line in each chart, the telemetry information may be provided as a time series of values generated for the given characteristic. In some embodiments, a group of multiple time series such as the illustrated time series may be collected to create a time series data record for a given time period." [0040]).

Sindhwani does not specifically teach normaliz[ing] the temporally ordered sets of feature vectors to generate the training data. However, this is known in the art, as is shown by Bakheet. (Bakheet: "Since the video sequences are composed of different number of frames, the number of history levels in MHIs might still differ from one sequence to another. To appropriately compare the video sequences, it is essential that the multi-level MHI (MMHI) approach allows all MHIs to be constructed with a fixed number 𝓁 of history levels." Section 4.1, Temporal template formation, at the bottom of the page.)

It would have been obvious to one of ordinary skill in the art at the time of the application to use normalization of the data, as shown in Bakheet, in the system of Sindhwani as modified by Klaus. Normalization is carried out so that data from different data sets can be used together, expanding the range of usable data.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Sindhwani in light of Klaus as applied to claim 8 above, and further in view of CA 3232488/WO 2023/041907 (Horry et al., hereinafter Horry).
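For illustration only, the kind of normalization at issue for claim 11 (fixing the number of temporal levels so that sequences of different lengths can be compared, as in Bakheet, then scaling the feature values) can be sketched in a few lines. Neither cited reference supplies code; the function name, the target length of 50 steps, and the z-scoring step are illustrative assumptions, not anything taken from the cited art.

```python
import numpy as np

def normalize_trip(feature_vectors: np.ndarray, target_len: int = 50) -> np.ndarray:
    """Resample a temporally ordered set of feature vectors onto a fixed
    number of time steps, then z-score each feature dimension.

    feature_vectors: shape (n_steps, n_features), one row per time step.
    """
    n_steps, n_features = feature_vectors.shape
    # Resample each feature's time series onto a fixed-length grid,
    # analogous to fixing the number of history levels in Bakheet.
    src = np.linspace(0.0, 1.0, n_steps)
    dst = np.linspace(0.0, 1.0, target_len)
    resampled = np.stack(
        [np.interp(dst, src, feature_vectors[:, j]) for j in range(n_features)],
        axis=1,
    )
    # Z-score each column so features from different sensors are comparable.
    mean = resampled.mean(axis=0)
    std = resampled.std(axis=0) + 1e-8  # avoid division by zero
    return (resampled - mean) / std

# Two trips of different lengths become directly comparable:
trip_a = normalize_trip(np.random.rand(120, 3))
trip_b = normalize_trip(np.random.rand(80, 3))
```

After normalization, both trips have the same shape, so a distance or cluster comparison between them is well defined regardless of how long each trip originally was.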
As for claim 14, Sindhwani, as modified by Klaus, teaches wherein the one or more processors, to compare the time series data and the clusters to determine whether the trip is anomalous or not anomalous, are configured to: determine a proximity of the trip with the clusters (Klaus: using distances to determine anomalies is known in the art: "Anomalies are detected by calculating the softmax score and compare it to a certain threshold which indicates if an image is a outlier or not." (pg. 11)). Neither Sindhwani nor Klaus specifically mentions a threshold distance, but such a comparison is known in the art, as is shown by Horry: determine that the trip is anomalous based on determining that the proximity of the trip with the clusters fails to satisfy a threshold distance (Horry: "The method may further comprise setting at least one anomaly threshold; wherein an anomaly score above the at least one anomaly threshold is indicative of anomalous behaviour whereby the behaviour of the engineering asset may be classified as anomalous." (pg. 6, lines 26-29); note that the anomaly score is calculated from a Mahalanobis distance (pg. 4, lines 5-10)); or determin[ing] that the trip is not anomalous based on determining that the proximity of the trip with the clusters satisfies the threshold distance (Horry: "An anomaly score below the anomaly threshold may be indicative of behaviour which is not anomalous." (pg. 6, line 29 - pg. 7, line 1)).

It would have been obvious to one of ordinary skill in the art at the time of the application to use the comparison of threshold distances, as outlined by Horry, in the system of Sindhwani, as modified by Klaus. The motivation would be to use standard sorting techniques.

Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sindhwani, in light of Klaus, as applied to claim 15 above, and further in view of Gao.
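The threshold comparison mapped to claim 14 reduces to a very small amount of logic. As a sketch only: the code below uses a plain Euclidean distance to the nearest cluster centroid rather than the Mahalanobis-based anomaly score Horry actually describes, and every name and value in it is hypothetical.

```python
import numpy as np

def classify_trip(trip_embedding, cluster_centroids, threshold):
    """Label a trip anomalous when its proximity (distance to the
    nearest cluster centroid) fails to satisfy the threshold."""
    proximity = min(np.linalg.norm(trip_embedding - c) for c in cluster_centroids)
    return "anomalous" if proximity > threshold else "not anomalous"

# Hypothetical centroids of "normal trip" clusters in a 2-D latent space.
centroids = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]

near = classify_trip(np.array([0.2, -0.1]), centroids, threshold=1.0)  # "not anomalous"
far = classify_trip(np.array([9.0, -4.0]), centroids, threshold=1.0)   # "anomalous"
```

Swapping the Euclidean norm for a Mahalanobis distance (as in Horry) changes only the proximity computation; the threshold comparison that the claim recites is the same either way.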
As for claim 17, Sindhwani, as modified by Klaus, does not specifically teach wherein the one or more instructions further cause the device to: expand the clusters based on feedback and to address falsely detected not anomalous trips. However, Gao teaches wherein the one or more instructions further cause the device to: expand the clusters based on feedback and to address falsely detected not anomalous [data]. (Gao: "Semi-supervised learning combines supervised learning with unsupervised learning together, and it can train classifiers with few labeled samples. Typical semi-supervised learning algorithms, for example, self-training [21], hybrid models [22], graph-based ones [23] and SVM-based [24] ones can be applied to wall crack detection [25], pavement crack detection [26] and steel structure surface defect detection [27]. During the training, classifiers are trained by a small amount of labeled data and then are employed to classify a great amount of unlabeled data. Subsequently, the mislabeled samples are picked out and corrected manually and are reused as the training data in the next round." (pg. 2). The result of this reuse would change what are considered the boundaries of the clusters identified as anomalous/non-anomalous since the label attached to any one misclassified data point would have changed. Therefore, a false positive would have been identified by the system as being in a non-normal area when it was, in fact, normal. After reclassification, the location in latent space at which the false positive data exists will have shifted from being identified as non-normal to normal, thus expanding the normal cluster.) It would have been obvious to one of ordinary skill in the art at the time of the application to use the retraining techniques of Gao in the system of Sindhwani, as modified by Klaus. 
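The cluster-expansion argument for claim 17 can be made concrete with a toy example. The sketch below is not drawn from any cited reference and all numbers are invented: it uses a one-dimensional nearest-centroid classifier to show that correcting a false positive and reusing it as normal training data (Gao's self-training step) moves the decision boundary toward the anomalous cluster, expanding the region treated as normal.

```python
import numpy as np

def centroid_boundary(normal, anomalous):
    """Decision boundary of a 1-D nearest-centroid classifier:
    the midpoint between the two cluster means."""
    return (np.mean(normal) + np.mean(anomalous)) / 2.0

# Round 1: train on a few labeled samples (all values invented).
normal = [1.0, 1.2, 0.9]
anomalous = [5.0, 5.5]
b1 = centroid_boundary(normal, anomalous)  # ~3.14

# An unlabeled sample at 3.3 sits above b1, so the classifier
# pseudo-labels it anomalous.  A reviewer flags it as a false
# positive, corrects the label, and the sample is reused as
# normal training data in the next round.
normal.append(3.3)
b2 = centroid_boundary(normal, anomalous)  # ~3.43

# The corrected label pushes the boundary toward the anomalous
# cluster: the normal region has expanded, and 3.3 now falls inside it.
assert b1 < 3.3 < b2
```

The symmetric case, correcting a false negative and reusing it as anomalous training data, moves the boundary the other way and shrinks the normal region, which is the mechanism relied on for claims 4 and 18.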
Gao's algorithms are used to identify cracks (which are considered anomalies) in images of buildings, but Gao points out that the technique of semi-supervised anomaly detection (using neural networks) has already been used in areas as different as cancer detection, ultrasound detection, and disease detection of industrial products (pg. 2). The motivation would be to improve the anomaly detection ability and reduce the instances of false positives.

As for claim 18, Sindhwani, as modified by Klaus and Gao, teaches wherein the one or more instructions further cause the device to: reduce the clusters based on feedback and to address falsely detected anomalous trips. (Gao: "Semi-supervised learning combines supervised learning with unsupervised learning together, and it can train classifiers with few labeled samples. Typical semi-supervised learning algorithms, for example, self-training [21], hybrid models [22], graph-based ones [23] and SVM-based [24] ones can be applied to wall crack detection [25], pavement crack detection [26] and steel structure surface defect detection [27]. During the training, classifiers are trained by a small amount of labeled data and then are employed to classify a great amount of unlabeled data. Subsequently, the mislabeled samples are picked out and corrected manually and are reused as the training data in the next round." (pg. 2). The result of this reuse would change the boundaries of the clusters identified as anomalous/non-anomalous, since the label attached to any misclassified data point would have changed. Therefore, a false negative would have been identified by the system as being in a normal area when it was, in fact, anomalous. After reclassification, the location in latent space at which the false negative data exists will have shifted from being identified as normal to anomalous, thus shrinking the normal cluster.)

Claim 20 is rejected under 35 U.S.C.
103 as being unpatentable over Sindhwani, in light of Klaus, as applied to claim 15 above, and further in view of Levy.

As for claim 20, Sindhwani, as modified by Klaus, does not specifically teach wherein the one or more instructions, that cause the device to perform the one or more actions, cause the device to one or more of: schedule a driver of the vehicle for training based on the determination that the trip is anomalous; cause emergency services to be dispatched for the vehicle based on the determination that the trip is anomalous; generate an alert for a driver of the vehicle based on the determination that the trip is anomalous; generate an alert for a fleet manager of the vehicle based on the determination that the trip is anomalous; cause video for the vehicle to be recorded based on the determination that the trip is anomalous; or retrain the neural network model based on the determination of whether the trip is anomalous or not anomalous.

However, Levy teaches wherein the one or more instructions, that cause the device to perform the one or more actions (Levy: "The memory contains instructions that can be executed by the processing circuitry. The instructions, when executed by the processing circuitry, configure the fleet anomaly detector 130 to secure fleets of connected vehicles against cyber-attacks by detecting anomalous fleet behavior and causing mitigation actions as described herein." (Col. 5, lines 60-65)), cause the device to one or more of: generate an alert for a driver of the vehicle based on the determination that the trip is anomalous; or generate an alert for a fleet manager of the vehicle based on the determination that the trip is anomalous. (Levy: "... the fleet anomaly detector 130 may be configured to cause, in real-time, implementation of at least one mitigation action for mitigating the cyber threat.
The fleet anomaly detector 130 may be configured to send instructions for implementing the mitigation actions to the fleet manager 160, to any of the vehicle control systems 170, to a server used for providing connected vehicle services (e.g., a server of a traffic control service), among the data sources 140, and the like." Col. 7, lines 16-24.)

It would have been obvious to one of ordinary skill in the art at the time of the application to add the alert for the fleet manager, as outlined by Levy, in the system of Sindhwani, as modified by Klaus. The motivation would be to expand the warnings within the system.

Allowable Subject Matter

Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TANYA CHRISTINE SIENKO whose telephone number is (571)272-5816.
The examiner can normally be reached Mon-Fri, 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kito Robinson, can be reached at 571-270-3912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TANYA C SIENKO/
Examiner, Art Unit 3664

/KITO R ROBINSON/
Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

Jan 29, 2024
Application Filed
Sep 23, 2025
Non-Final Rejection — §103
Dec 05, 2025
Interview Requested
Dec 16, 2025
Examiner Interview Summary
Dec 16, 2025
Applicant Interview (Telephonic)
Dec 30, 2025
Response Filed
Mar 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594946
DRIVER ASSISTANCE APPARATUS AND DRIVER ASSISTANCE METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12576820
VEHICLE
2y 5m to grant Granted Mar 17, 2026
Patent 12552289
SYSTEMS AND METHODS FOR PRE-CONDITIONING A VEHICLE
2y 5m to grant Granted Feb 17, 2026
Patent 12539853
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 03, 2026
Patent 12528506
TELEOPERATION OF A VEHICLE
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+15.7%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 195 resolved cases by this examiner. Grant probability derived from career allow rate.
