Prosecution Insights
Last updated: April 19, 2026
Application No. 18/321,580

SYSTEMS AND METHODS FOR PROVIDING CUSTOMIZED DRIVING EVENT PREDICTIONS USING A MODEL BASED ON GENERAL AND USER FEEDBACK LABELS

Status: Non-Final OA (§103)
Filed: May 22, 2023
Examiner: STANDKE, ADAM C
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Verizon Patent and Licensing Inc.
OA Round: 1 (Non-Final)

Grant Probability: 50% (Moderate)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 4y 3m
Grant Probability with Interview: 74%

Examiner Intelligence

Career Allow Rate: 50% (61 granted / 123 resolved; -5.4% vs TC avg)
Interview Lift: +24.8% on resolved cases with an interview
Avg Prosecution: 4y 3m (typical timeline)
Total Applications: 162 across all art units (39 currently pending)

Statute-Specific Performance

§101: 18.9% (-21.1% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 123 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Campos et al., US 12,468,269 B2 (“Campos”) in view of Muron et al., US 2024/0161508 A1 (“Muron”) and in view of Liang et al., "Effective adaptation in multi-task co-training for unified autonomous driving."
Advances in Neural Information Processing Systems 35 (2022) (“Liang”).

Regarding claim 1, Campos teaches a method, comprising: receiving, by a device, a customer identifier and video data identifying videos associated with driving events of vehicles associated with a customer (Campos, cols. 21-22, see also fig. 7, “[A] DRIVER!™ system may continuously record video and other sensor data while a vehicle is running...[F]IG. 7 illustrates a traffic event report that was generated in response to a reported accident. In the incident that is the subject of the report, a truck came into contact with a car that was idling ahead of it at a traffic-light. A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [receiving, by a device, a customer identifier and video data identifying videos associated with driving events of vehicles associated with a customer].”);

processing, by the device, the video data, with a feature extraction model, to generate features of the videos (Campos, col. 19, “[A] video caption generation system may be trained on a series of frames. The video capture generation system may be based on a Recurrent Neural Network (RNN) structure, which may use Long Short-Term Memory (LSTM) modules to capture temporal aspects of a traffic event [processing, by the device, the video data, with a feature extraction model, to generate features of the videos]”);

processing, by the device, the customer identifier, [with an embedding layer], to transform the customer identifier to an input (Campos, cols. 21-22, see also fig. 7, “A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [processing, by the device, the customer identifier to transform the customer identifier to an input].”);

optimizing, by the device, model weights for a classifier machine learning model and a customizer machine learning model to generate optimized model weights; processing, by the device, the features, with the classifier machine learning model, to generate general predictions for the videos; processing, by the device, the features, the input, and the general predictions, with the customizer machine learning model, to generate customer specific predictions (Campos, col. 12, “[D]etecting a driving action that mitigates risk... may be based on the output of a neural network trained on labeled data. For example, the output of a neural network may be used to identify other cars in the vicinity. FIGS. 3A-D and FIGS. 4A-D illustrate examples of systems and methods of detecting driving actions that mitigate risk [processing, by the device, the features, with the classifier machine learning model, to generate general predictions for the videos]...[d]eterminations of cause of traffic events based on... neural networks may also be used to train a second neural network to detect and/or characterize traffic events and/or determine cause of a traffic event [processing, by the device, the features, the input, and the general predictions, with the customizer machine learning model, to generate customer specific predictions]”);

training, by the device, the classifier machine learning model and the feature extraction model, [based on the first errors and the optimized model weights, to generate a trained classifier machine learning model and a trained feature extraction model] (Campos, col. 12, “Detecting a driving action that mitigates risk... may be based on the output of a neural network trained on labeled data. For example, the output of a neural network may be used to identify other cars in the vicinity. FIGS. 3A-D and FIGS. 4A-D illustrate examples of systems and methods of detecting driving actions that mitigate risk [training, by the device, the classifier machine learning model].” & Campos, col. 19, “[A] video caption generation system may be trained on a series of frames. The video capture generation system may be based on a Recurrent Neural Network (RNN) structure, which may use Long Short-Term Memory (LSTM) modules to capture temporal aspects of a traffic event [and the feature extraction model].”);

training, by the device, the customizer machine learning model [and the embedding layer, based on the second errors and the optimized model weights, to generate a trained customizer machine learning model and a trained embedding layer] (Campos, col. 12, “Determinations of cause of traffic events based on... neural networks may also be used to train a second neural network to detect and/or characterize traffic events and/or determine cause of a traffic event [training, by the device, the customizer machine learning model]”);

and implementing, by the device, the trained classifier machine learning model, the trained feature extraction model, the trained customizer machine learning model, [and the trained embedding layer] (Campos, col. 12, “[D]etecting a driving action that mitigates risk... may be based on the output of a neural network trained on labeled data. For example, the output of a neural network may be used to identify other cars in the vicinity. FIGS. 3A-D and FIGS. 4A-D illustrate examples of systems and methods of detecting driving actions that mitigate risk [and implementing, by the device, the trained classifier machine learning model]...[d]eterminations of cause of traffic events based on... neural networks may also be used to train a second neural network to detect and/or characterize traffic events and/or determine cause of a traffic event [the trained customizer machine learning model]” & Campos, col. 19, “[A] video caption generation system may be trained on a series of frames. The video capture generation system may be based on a Recurrent Neural Network (RNN) structure, which may use Long Short-Term Memory (LSTM) modules to capture temporal aspects of a traffic event [the trained feature extraction model].”).

While Campos does teach training, by the device, the classifier machine learning model and the feature extraction model; training, by the device, the customizer machine learning model; and implementing, by the device, the trained classifier machine learning model, the trained feature extraction model, and the trained customizer machine learning model, Campos does not teach: receiving, by the device, reviewer labels and user labels for the video data; calculating, by the device, first errors for the general predictions based on the reviewer labels; calculating, by the device, second errors for the customer specific predictions based on the user labels; based on the first errors and the optimized model weights, to generate a trained classifier machine learning model and a trained feature extraction model; based on the second errors and the optimized model weights, to generate a trained customizer machine learning model.

However, Muron teaches: receiving, by the device, reviewer labels and user labels for the video data (Muron, para. [0422], see also fig. 15, “The evidence validation module 318 can evaluate the final score 1502 against one or more predetermined thresholds 1506 to determine whether the evidence package 136 is automatically approved, is automatically rejected, or requires further review...by a human reviewer or a further round of automatic review by the server 104 [receiving, by the device, reviewer labels and user labels for the video data]....”);

calculating, by the device, first errors for the general predictions based on the reviewer labels (Muron, paras. [0219-0221], see also figs. 3 and 15A-15C, “The evidence validation module 318 can calculate a final score 1502 based on the contributing scores 1500 and evaluate the final score 1502 against one or more predetermined thresholds 1506 to determine whether the evidence package 136 is automatically approved, is automatically rejected, or requires further review...[by] a further round of automatic review by the server 104 [calculating, by the device, first errors for the general predictions based on the reviewer labels]....”);

calculating, by the device, second errors for the customer specific predictions based on the user labels (Muron, paras. [0228-0230], see also figs. 3 and 15A-15C, “[A]t least one of the GUIs 332 can provide a live event feed of all flagged events or potential traffic violations and the validation status of such potential traffic violations... the client device 138 can be used by a human reviewer to review the evidence packages... [t]he human reviewer can input their review decision via an interactive feature (e.g., by applying a user input to an "Approve" or "Reject" button or icon) displayed as part of at least one of the GUIs 332 of the web portal or mobile application 330 [calculating, by the device, second errors for the customer specific predictions based on the user labels].”);

[training, by the device, the classifier machine learning model and the feature extraction model], based on the first errors and the optimized model weights, to generate a trained classifier machine learning model and a trained feature extraction model (Muron, paras. [0370-0385], see also fig. 12, “As shown in FIG. 12, the weather and road condition classifier 313 can be or comprise a multi-headed neural network having a shared or single feature extractor and a plurality of prediction heads or decoders 1206 [to generate a trained classifier machine learning model and a trained feature extraction model]... the training data 1226 can comprise event video frames captured by the edge devices 102 or event video frames stored in an events database 316. The event video frames retrieved from the events database 316 can be event video frames where the evidence packages 136 containing such video frames were previously validated by the server 104 [based on the first errors and the optimized model weights]....”);

[training, by the device, the customizer machine learning model and the embedding layer], based on the second errors and the optimized model weights, to generate a trained customizer machine learning model [and a trained embedding layer] (Muron, paras. [0432-0433], see also figs. 15A-15C, “[T]he decision tree algorithm 328 can be a version of the XGBoost decision tree algorithm....[t]he decision tree algorithm 328 can be trained using context features 129 and classification results 127 obtained from past event video frames 124 and past license plate video frames 126 capturing past traffic violation events or past non-events/false-positive events that have been confirmed by a human reviewer [based on the second errors and the optimized model weights, to generate a trained customizer machine learning model].”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos with the teachings of Muron; the motivation to do so would be to implement a traffic violation system that takes into account evidentiary decisions (Muron, paras. [0005-0006], “[A]n improved computer-based traffic violation detection system is needed that can undertake certain evidentiary reviews automatically without relying on human reviewers and can take into account certain automatically detected contextual factors or features that may aid the system in determining whether a traffic violation has indeed occurred.”).

While Campos in view of Muron do teach processing, by the device, the customer identifier to transform the customer identifier to an input; training, by the device, the customizer machine learning model based on the second errors and the optimized model weights, to generate a trained customizer machine learning model; and implementing, by the device, the trained classifier machine learning model, the trained feature extraction model, and the trained customizer machine learning model, Campos in view of Muron do not teach: with an embedding layer; and the embedding layer; and a trained embedding layer; and the trained embedding layer.
However, Liang teaches: [processing, by the device, the customer identifier,] with an embedding layer [, to transform the customer identifier to an input] (Liang, pg. 7, see also figs. 2 and 3, “To extract textual features of each class, class-specific prompts are constructed via a generator function and are fed into the text encoder... T́_e ∈ R^(N×C)...TE is the text encoder, and {n_i}_{i=1}^N is the class name embeddings [with an embedding layer].”);

[training, by the device, the customizer machine learning model] and the embedding layer [, based on the second errors and the optimized model weights, to generate a trained customizer machine learning model] and a trained embedding layer (Liang, pgs. 7-9, see also figs. 2 and 3, “To extract textual features of each class, class-specific prompts are constructed via a generator function and are fed into the text encoder... T́_e ∈ R^(N×C)...TE is the text encoder, and {n_i}_{i=1}^N is the class name embeddings [and the embedding layer]... [f]or LV-Adapter, the learnable prompts are prepended to the class, and the length of prompts is 16 [and a trained embedding layer].”);

[and implementing, by the device, the trained classifier machine learning model, the trained feature extraction model, the trained customizer machine learning model,] and the trained embedding layer (Liang, pgs. 7-9, see also figs. 2 and 3, “We pursue to underpin the compatibility between the semantic concepts of each task and the image features and generate semantically stronger contexts for downstream tasks. The resulting model, named LV-Adapter, is outlined in Figure 3... [t]o extract textual features of each class, class-specific prompts are constructed via a generator function and are fed into the text encoder...[f]or LV-Adapter, the learnable prompts are prepended to the class, and the length of prompts is 16 [and the trained embedding layer]”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos in view of Muron with the teachings of Liang; the motivation to do so would be to combine textual information with visual information for better feature extraction (Liang, abstract, “Aiming towards a holistic understanding of multiple downstream tasks simultaneously, there is a need for extracting features with better transferability... and propose a novel adapter named LV-Adapter, which incorporates language priors in the multi-task model via task-specific prompting and alignment between visual and textual features.”).

Regarding claim 2, Campos in view of Muron and Liang teaches the method of claim 1, wherein the customer identifier is associated with an industry type or account information (Campos, cols. 21-22, see also fig. 7, “FIG. 7 illustrates a traffic event [industry type] report that was generated in response to a reported accident. In the incident that is the subject of the report, a truck came into contact with a car that was idling ahead of it at a traffic-light. A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [wherein the customer identifier is associated with an industry type].”).

Regarding claim 3, Campos in view of Muron and Liang teaches the method of claim 1, wherein processing the customer identifier, with the embedding layer, to transform the customer identifier to the input comprises: processing the customer identifier (Campos, cols. 21-22, see also fig.
7, “A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [processing the customer identifier].”), with the embedding layer (Liang, pgs. 7-9, see also figs. 2 and 3, “To extract textual features of each class, class-specific prompts are constructed via a generator function and are fed into the text encoder...TE is the text encoder [with the embedding layer]....”), to transform the customer identifier (Campos, cols. 21-22, see also fig. 7, “A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [to transform the customer identifier].”) to continuous vectors (Liang, pgs. 7-9, see also figs. 2 and 3, “We denote the normalized output features for N classes as T̂_e ∈ R^(N×C) [to continuous vectors]....”).

Regarding claim 4, Campos in view of Muron and Liang teaches the method of claim 1, wherein processing the customer identifier, with the embedding layer, to transform the customer identifier to the input comprises: processing the customer identifier (Campos, cols. 21-22, see also fig. 7, “A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [processing the customer identifier].”), with the embedding layer (Liang, pgs. 7-9, see also figs. 2 and 3, “To extract textual features of each class, class-specific prompts are constructed via a generator function and are fed into the text encoder...TE is the text encoder [with the embedding layer]....”), to transform the customer identifier (Campos, cols. 21-22, see also fig. 7, “A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [to transform the customer identifier].”) to an embedding (Liang, pgs. 7-9, see also figs. 2 and 3, “We denote the normalized output features for N classes as T̂_e ∈ R^(N×C) [to an embedding]....”).

Regarding claim 5, Campos in view of Muron and Liang teaches the method of claim 1, wherein optimizing the model weights for the classifier machine learning model and the customizer machine learning model to generate the optimized model weights comprises: modifying first weights (Liang, pg. 7, see also fig. 2, “During the adaptation stage, we are given the pretrained model weights...inherited from the pre-training stage. We aim to transform the model weights via a small amount of learnable parameters to adapt the knowledge of the pretrained weights towards multi-task scenarios... and tune the parameters of Feature Pyramid Network (FPN) supervised by the multi-task loss function in Equation 1.”) associated with the classifier machine learning model (Campos, col. 12, “[D]etecting a driving action that mitigates risk... may be based on the output of a neural network trained on labeled data.”) to generate first modified weights (Liang, pg. 7, see also fig. 2, “During the adaptation stage, we are given the pretrained model weights...inherited from the pre-training stage. We aim to transform the model weights via a small amount of learnable parameters to adapt the knowledge of the pretrained weights towards multi-task scenarios... and tune the parameters of Feature Pyramid Network (FPN) supervised by the multi-task loss function in Equation 1.”); and not modifying second weights (Liang, pg. 7, see also fig. 2, “During the adaptation stage, we are given the pretrained model weights (e.g., ResNet-50) inherited from the pre-training stage...we freeze the parameters of the random initialized task-specific heads and the backbone....”) associated with the customizer machine learning model (Campos, col. 12, “Determinations of cause of traffic events based on... neural networks may also be used to train a second neural network to detect and/or characterize traffic events and/or determine cause of a traffic event.”), wherein the first modified weights and the second weights correspond to the optimized model weights (Liang, pg. 5, “[W]e train a multi-task student model with a weighted sum of all objectives for each task: L_total = α_det·L_det + α_sem·L_sem + α_driv·L_driv...[w]e conduct grid search in the range of [0.1, 1.0] with a step size of 0.1 to find the optimal loss weights setting, and α_det, α_sem, α_driv are set to 1.0, 0.7, and 0.7, respectively.”).

Regarding claim 6, Campos in view of Muron and Liang teaches the method of claim 1, wherein optimizing the model weights for the classifier machine learning model and the customizer machine learning model to generate the optimized model weights comprises: modifying first weights (Liang, pg. 7, see also fig. 2, “During fine-tuning, all parameters are activated and updated via gradient descent.”) associated with the classifier machine learning model (Campos, col. 12, “[D]etecting a driving action that mitigates risk... may be based on the output of a neural network trained on labeled data.”) to generate first modified weights (Liang, pg. 7, see also fig. 2, “During fine-tuning, all parameters are activated and updated via gradient descent.”); and modifying second weights (Liang, pg. 7, see also fig. 2, “During fine-tuning, all parameters are activated and updated via gradient descent.”) associated with the customizer machine learning model (Campos, col. 12, “Determinations of cause of traffic events based on... neural networks may also be used to train a second neural network to detect and/or characterize traffic events and/or determine cause of a traffic event.”) to generate second modified weights (Liang, pg. 7, see also fig. 2, “During fine-tuning, all parameters are activated and updated via gradient descent.”), wherein the first modified weights and the second modified weights correspond to the optimized model weights (Liang, pg. 5, “[W]e train a multi-task student model with a weighted sum of all objectives for each task: L_total = α_det·L_det + α_sem·L_sem + α_driv·L_driv...[w]e conduct grid search in the range of [0.1, 1.0] with a step size of 0.1 to find the optimal loss weights setting, and α_det, α_sem, α_driv are set to 1.0, 0.7, and 0.7, respectively.”).

Regarding claim 7, Campos in view of Muron and Liang teaches the method of claim 1, wherein one or more of the reviewer labels are different than one or more of corresponding user labels (Muron, paras. [0229-0230], see also figs. 3 and 15A-15C, “The client device 138 can be used by a human reviewer to review the evidence packages 136 that were neither automatically approved nor automatically rejected by the evidence validation module 318 [wherein one or more of the reviewer labels are different]...[t]he human reviewer can input their review decision via an interactive feature...by applying a user input to an [‘]Approve[’] or [‘]Reject[’] button or icon [than one or more of corresponding user labels]....”).

Regarding claim 8, Campos teaches a device, comprising: one or more processors (Campos, col. 5, see also fig. 1, “The compute capability may be a CPU or an integrated System-on-a-chip (SOC), which may include a CPU and other specialized compute cores, such as a graphics processor (GPU), gesture recognition processor, and the like.”) configured to: receive a customer identifier and video data identifying videos associated with driving events of vehicles associated with a customer, wherein the customer identifier is associated with an industry type or account information (Campos, cols. 21-22, see also fig. 7, “[A] DRIVER!™ system may continuously record video and other sensor data while a vehicle is running [and video data identifying videos associated with driving events of vehicles]...FIG. 7 illustrates a traffic event [industry type] report that was generated in response to a reported accident. In the incident that is the subject of the report, a truck came into contact with a car that was idling ahead of it at a traffic-light. A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id [receive a customer identifier associated with a customer, wherein the customer identifier is associated with an industry type].”). All other limitations of claim 8 are rejected on the same basis as independent claim 1, since they are analogous claims.
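For orientation, the claim 1 pipeline mapped above — a feature extraction model, an embedding layer for the customer identifier, a classifier producing general predictions, and a customizer producing customer-specific predictions conditioned on both — can be sketched as follows. This is a minimal illustration only: every function, class, and variable name is hypothetical, and the toy arithmetic stands in for the neural networks the claim actually recites (e.g., Campos's RNN/LSTM video captioner).

```python
import random

def extract_features(video: list) -> list:
    """Hypothetical feature extraction model: reduce raw frames to features."""
    return [sum(frame) / len(frame) for frame in video]

class EmbeddingLayer:
    """Hypothetical embedding layer: maps a discrete customer identifier
    to a continuous vector (cf. claims 3-4)."""
    def __init__(self, dim: int = 4):
        self.dim = dim
        self.table: dict = {}
    def __call__(self, customer_id: str) -> list:
        if customer_id not in self.table:
            self.table[customer_id] = [random.uniform(-1, 1) for _ in range(self.dim)]
        return self.table[customer_id]

def classifier(features: list) -> float:
    """Hypothetical classifier model: a general prediction for a video."""
    return sum(features) / len(features)

def customizer(features: list, embedding: list, general_pred: float) -> float:
    """Hypothetical customizer model: a customer-specific prediction
    conditioned on the features, the embedding, and the general prediction."""
    bias = sum(embedding) / len(embedding)
    return general_pred + 0.1 * bias

# One forward pass through the claimed pipeline.
video = [[0.2, 0.4], [0.6, 0.8]]           # two frames, two pixel values each
embed = EmbeddingLayer()
features = extract_features(video)
general = classifier(features)
specific = customizer(features, embed("fleet-123"), general)
```

In the claimed method, the two models would then be trained separately: the classifier and feature extractor against reviewer labels, the customizer and embedding layer against user labels.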
Regarding claim 9, Campos in view of Muron and Liang teaches the device of claim 8, wherein the one or more processors, to calculate the first errors for the general predictions based on the reviewer labels, are configured to: identify differences between the general predictions and corresponding reviewer labels; and calculate the first errors based on the differences (Campos, col. 19, “A multi-layer perceptron may be trained on supervised training data to generate risk level labels... [t]he data used to train a learned model may be generated by a rule-based approach...[t]hese labels may be...rejected, or corrected by a human labeler [identify differences between the general predictions and corresponding reviewer labels]... [t]hese labels may then be used to bootstrap from the rule based approach to a machine learned model that exhibits improved performance [and calculate the first errors based on the differences].”).

Regarding claim 10, Campos in view of Muron and Liang teaches the device of claim 8, wherein the one or more processors, to calculate the second errors for the customer specific predictions based on the user labels, are configured to: identify differences between the customer specific predictions and corresponding user labels; and calculate the second errors based on the differences (Campos, col. 19, “A multi-layer perceptron may be trained on supervised training data to generate risk level labels... [t]he data used to train a learned model may be generated by a rule-based approach... a fleet safety officer may correct a given action responsivity label [identify differences between the customer specific predictions and corresponding user labels]...[t]hese labels may then be used to bootstrap from the rule based approach to a machine learned model that exhibits improved performance [and calculate the second errors based on the differences].”).
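Claims 9 and 10 frame error calculation the same way: identify differences between predictions and the corresponding labels, then calculate errors from those differences. A minimal sketch, with hypothetical names and squared error chosen purely as one plausible difference measure (the claims do not specify a loss function):

```python
def calculate_errors(predictions: list, labels: list) -> list:
    """Identify the difference between each prediction and its corresponding
    label, then return a per-sample error derived from that difference.
    Squared error is illustrative only."""
    differences = [p - l for p, l in zip(predictions, labels)]
    return [d * d for d in differences]

# First errors (claim 9): general predictions vs. reviewer labels.
first_errors = calculate_errors([0.8, 0.2, 0.5], [1.0, 0.0, 1.0])
# Second errors (claim 10): customer-specific predictions vs. user labels.
second_errors = calculate_errors([0.9, 0.1, 0.4], [1.0, 0.0, 0.0])
```

The two error lists would then drive the two separate training passes recited in claim 1: the first errors update the classifier and feature extractor, the second errors update the customizer and embedding layer.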
Regarding claim 11, Campos in view of Muron and Liang teaches the device of claim 8, wherein the one or more processors, to implement the trained classifier machine learning model, the trained feature extraction model, the trained customizer machine learning model, and the trained embedding layer, are configured to: receive new video data identifying a new video associated with a driving event of a vehicle associated with the customer(Campos, col. 12, see also fig. 3A-D and 4A-D, “FIG. 3A illustrates an example of detecting a driving action that mitigates risk in which tailgating is detected and the cause is assigned to the ego-driver (or "Driver"). In the video frames shown in FIGS. 3A-D and FIGS. 4A-D, the type of event and the determined cause is shown on the top of the video frame, along with additional information”); process the new video data, with the trained feature extraction model, to generate new features(Campos, col. 19, “[A] video caption generation system may be trained on a series of frames. The video capture generation system may be based on a Recurrent Neural Network (RNN) structure, which may use Long Short-Term Memory (LSTM) modules to capture temporal aspects of a traffic event.”); process the customer identifier(Campos, cols. 21-22, see also fig. 7, “A traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id), with the trained embedding layer, to generate a new input(Liang, pgs., 7-9, see also fig. 2 and 3, “We denote the normalized output features for N classes as T ^ e ∈ R N × C [to generate a new input]... TE is the text encoder... [t]he training epoch is fixed as 36... 
[d]uring the adaptation stage, the learning rate is set to 2.5 × 10 - 4 [with the trained embedding layer].); process the new features, with the trained classifier machine learning model, to generate a general prediction for the new video; process the new features, the new input, and the general prediction, with the trained customizer machine learning model, to generate a customer specific prediction(Campos, col. 12, “[D]etecting a driving action that mitigates risk... may be based on the output of a neural network trained on labeled data[process the new features, with the trained classifier machine learning model, to generate a general prediction for the new video]...[d]eterminations of cause of traffic events based on... neural networks may also be used to train a second neural network to detect and/or characterize traffic events and/or determine cause of a traffic event[process the new features, the new input, and the general prediction, with the trained customizer machine learning model, to generate a customer specific prediction]”); provide the general prediction and the customer specific prediction for display based on determining to provide the customer specific prediction for display(Campos, cols. 21-22, see also fig. 7, “FIG. 7 illustrates a traffic event report that was generated in response to a reported accident... [a] traffic incident report of such an incident may be generated based on a user's request, corresponding to a user-provided alert-id, which may be referred to as an incident-id... [t]he report illustrated in FIG. 
7 also includes a trend of the daily driver score for this week and summary of alerts [provide the general prediction and the customer specific prediction for display based on determining to provide the customer specific prediction for display].”).17

Regarding claim 12, Campos in view of Muron and Liang teaches the device of claim 11, wherein the one or more processors are further configured to: determine whether the customer specific prediction satisfies a threshold metric; and determine to provide the customer specific prediction for display based on the customer specific prediction satisfying the threshold metric; or determine to not provide the customer specific prediction for display based on the customer specific prediction failing to satisfy the threshold metric (Muron, para. 0427, see also fig. 15A, “In the scenario shown in FIG. 15A, three context features 129 (e.g., plate_confidence, active_lane_occupancy, and plate_valid) and their accompanying classification results 127 are provided as inputs to the decision tree algorithm 328... [a]s shown in FIG. 15A, the final score of 3.9 exceeds the first threshold 1506A value of 2.0. As such, the evidence package 136 comprising the event video frames 124 and license plate video frames 126 that served as inputs for the various deep learning models that produced the context features 129 and classification results 127 shown in FIG. 15A is automatically approved by the server 104 [determine whether the customer specific prediction satisfies a threshold metric; and determine to provide the customer specific prediction for display based on the customer specific prediction satisfying the threshold metric].”).18, 19

Regarding claim 13, Campos in view of Muron and Liang teaches the device of claim 11, wherein the customer specific prediction includes an indication of a coaching opportunity for a driver of the vehicle (Campos, cols. 
19-20, “A DRIVER!™ system may serve as a driver advocate, by providing fleets with systems and methods to recognize and reward their drivers for exhibiting good driving behavior... [r]ather than focusing exclusively on collisions and near-collisions, with GreenZone™ monitoring, a fleet manager may be able to point out expert maneuvers by expert drivers in the fleet. Such recognition may strengthen the relationship between excellent drivers and a trucking company. In addition, examples of excellent driving may be used to instruct less experienced drivers [includes an indication of a coaching opportunity for a driver of the vehicle].”).

Regarding claim 14, Campos in view of Muron and Liang teaches the device of claim 11, wherein the customer specific prediction includes a new customized label for the new video (Campos, col. 12, see also figs. 3A-D and 4A-D, “In the video frames shown in FIGS. 3A-D and FIGS. 4A-D, the type of event and the determined cause is shown on the top of the video frame, along with additional information. In FIG. 3A, the type of event is [‘]Tailgating[’], the determined cause is [‘]Cause: Driver[’] and the additional information is [‘]From Front[’] [includes a new customized label for the new video].”).

Regarding claim 15, Campos teaches a non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to (Campos, col. 23, “Certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein.”). All other limitations of claim 15 are rejected on the same basis as independent claim 1, since they are analogous claims. 
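The multi-model inference flow the rejection maps onto claim 11 (feature extractor, customer-identifier embedding, general classifier, customizer) can be sketched end to end. This is a minimal illustration of the claimed data flow only; every function name, stand-in model, and value below is hypothetical and is not drawn from Campos, Muron, or Liang.

```python
# Hypothetical sketch of the claim 11 pipeline: video -> features,
# customer ID -> embedding, then a general prediction that the
# customizer refines into a customer specific prediction.
from dataclasses import dataclass


@dataclass
class Predictions:
    general: str
    customer_specific: str


def run_pipeline(video_frames, customer_id,
                 extract_features, embed_customer,
                 classify_general, customize):
    features = extract_features(video_frames)       # trained feature extraction model
    new_input = embed_customer(customer_id)         # trained embedding layer
    general = classify_general(features)            # trained classifier model
    # The customizer consumes all three inputs, mirroring the claim's
    # "process the new features, the new input, and the general prediction".
    specific = customize(features, new_input, general)
    return Predictions(general=general, customer_specific=specific)


# Toy stand-ins that show the data flow only, not real models.
preds = run_pipeline(
    video_frames=["frame0", "frame1"],
    customer_id=42,
    extract_features=lambda frames: [len(f) for f in frames],
    embed_customer=lambda cid: [cid / 100.0],
    classify_general=lambda feats: "Tailgating",
    customize=lambda feats, emb, gen: f"{gen} (fleet-specific)",
)
```

Both predictions are returned together, matching the claim's final step of providing the general and customer specific predictions for display.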
Referring to dependent claims 16-20, they are rejected on the same basis as dependent claims 5-6 and 9-11 since they are analogous claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Javeri et al., US 2022/0358950 A1 (details models for predicting hazardous driving conditions through the use of audio data, in which feature vectors are constructed and inputted into a classification system).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM C STANDKE, whose telephone number is (571) 270-1806. The examiner can normally be reached generally M-F, 9-9 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/Adam C Standke/
Primary Examiner, Art Unit 2129

1 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Campos.
2 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Campos.
3 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Campos.
4 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Campos.
5 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Muron.
6 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Muron.
7 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Liang.
8 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Liang.
9 Examiner Remarks: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Liang.
10 According to the broadest reasonable interpretation (BRI), the use of alternative language amounts to the claim requiring one or more elements but not all.
11 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos in view of Muron with the above teachings of Liang for the same rationale stated at Claim 1.
12 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos in view of Muron with the above teachings of Liang for the same rationale stated at Claim 1.
13 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos in view of Muron with the above teachings of Liang for the same rationale stated at Claim 1.
14 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos in view of Muron with the above teachings of Liang for the same rationale stated at Claim 1.
15 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos with the above teachings of Muron for the same rationale stated at Claim 1.
16 According to the broadest reasonable interpretation (BRI), the use of alternative language amounts to the claim requiring one or more elements but not all.
17 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos in view of Muron with the above teachings of Liang for the same rationale stated at Claim 8.
18 According to the broadest reasonable interpretation (BRI), the use of alternative language amounts to the claim requiring one or more elements but not all.
19 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Campos with the above teachings of Muron for the same rationale stated at Claim 8.
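The claim 12 gating that the rejection maps to Muron's FIG. 15A (a final score aggregated from weighted context features and compared against a first threshold of 2.0) reduces to a small comparison. In the sketch below, the additive scoring function and the individual feature weights are assumptions for illustration only; just the feature names, the 2.0 threshold, and the 3.9 example final score come from the quoted passage.

```python
# Illustrative sketch of threshold-gated display of a customer specific
# prediction, in the spirit of Muron's FIG. 15A example. The simple sum
# here stands in for Muron's actual decision tree algorithm 328.
def final_score(weighted_features: dict) -> float:
    """Aggregate per-feature contributions into one score (assumed additive)."""
    return sum(weighted_features.values())


def provide_for_display(score: float, threshold: float = 2.0) -> bool:
    """Provide the customer specific prediction only if the score
    satisfies the threshold metric (claim 12)."""
    return score >= threshold


# Hypothetical per-feature weights chosen to total roughly the 3.9
# final score of the quoted example.
score = final_score({
    "plate_confidence": 1.5,
    "active_lane_occupancy": 1.2,
    "plate_valid": 1.2,
})
assert provide_for_display(score)  # 3.9 exceeds the 2.0 threshold
```

A score below the threshold takes the claim's alternative branch: the customer specific prediction is determined not to be provided for display.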

Prosecution Timeline

May 22, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596958
APPARATUS AND METHODS FOR MULTIPLE STAGE PROCESS MODELING
2y 5m to grant · Granted Apr 07, 2026
Patent 12555026
INTERPRETABLE HIERARCHICAL CLUSTERING
2y 5m to grant · Granted Feb 17, 2026
Patent 12547875
AUTOMATED SETUP AND COMMUNICATION COORDINATION FOR TRAINING AND UTILIZING MASSIVELY PARALLEL NEURAL NETWORKS
2y 5m to grant · Granted Feb 10, 2026
Patent 12541704
MACHINE-LEARNING PREDICTION OR SUGGESTION BASED ON OBJECT IDENTIFICATION
2y 5m to grant · Granted Feb 03, 2026
Patent 12541691
MIXUP DATA AUGMENTATION FOR KNOWLEDGE DISTILLATION FRAMEWORK
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
74%
With Interview (+24.8%)
4y 3m
Median Time to Grant
Low
PTA Risk
Based on 123 resolved cases by this examiner. Grant probability derived from career allow rate.
