Prosecution Insights
Last updated: April 19, 2026
Application No. 18/367,771

SYSTEMS AND METHODS FOR SEQUENTIAL ANOMALY DETECTION IN IVNS USING A GRAPH-BASED STATE SPACE APPROACH

Non-Final OA §103
Filed
Sep 13, 2023
Examiner
CHOUAT, ABDERRAHMEN
Art Unit
2451
Tech Center
2400 — Computer Networks
Assignee
Robert Bosch GmbH
OA Round
3 (Non-Final)
73%
Grant Probability
Favorable
3-4
OA Rounds
2y 8m
To Grant
77%
With Interview

Examiner Intelligence

Grants 73% — above average
73%
Career Allow Rate
195 granted / 267 resolved
+15.0% vs TC avg
Minimal +4% lift
+4.0%
Interview Lift
resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
16 currently pending
Career history
283
Total Applications
across all art units

Statute-Specific Performance

§101
14.2%
-25.8% vs TC avg
§103
45.7%
+5.7% vs TC avg
§102
16.8%
-23.2% vs TC avg
§112
18.8%
-21.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 267 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

MPEP Section 2111.04 teaches: I. "ADAPTED TO," "ADAPTED FOR," "WHEREIN," and "WHEREBY" Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. However, examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) "adapted to" or "adapted for" clauses; (B) "wherein" clauses; and (C) "whereby" clauses.

Response to Arguments

Regarding applicant's arguments directed at the 35 U.S.C. 101 rejection: the Examiner respectfully withdraws the rejection in light of the amendments. Regarding applicant's arguments directed at the rejection of claim 1 under 35 U.S.C. 102(a)(2) as being anticipated by Zhang et al. (US 20240064160 A1): the Examiner respectfully agrees that Zhang does not teach fully and with sufficient motivation the limitations of the independent claims; the Examiner respectfully enters Cohen et al. (US 20230054575 A1). Cohen teaches and wherein each of the plurality of states in the model includes the observed signal values (vehicle data points) of the signal content of two or more correlated signals (current values of one or more correlated messages (e.g., pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like) contained in the decoded training message sequences; ([0041] In some embodiments, first layer categorical variable model(s) 220 predict the classification of a given categorical variable message 264 (e.g., probability of state of gear being reverse, neutral, and/or the like) based on recent historical values of the message (e.g., 10-20 different data points prior to the current value, a predetermined range of historical values, and/or the like).
In some embodiments, first layer categorical variable model(s) 220 predicts a probability associated with a classification for a given categorical variable message 264 (e.g., 80% probability of state of gear being 5th gear) based on current values of one or more correlated messages (e.g., pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like), previously observed data points for the correlated messages (e.g., 10-20 different data points prior to the current value of the correlated messages, a predetermined range of historical values), and/or the like. In some embodiments, first layer categorical variable model(s) 220 predict the classification of a given categorical variable message 264 (e.g., 0.1% probability of state of gear being reverse) based on any combination of recent data points on historical values as well as additional correlated messages. In various embodiments, a current or historical classification or value of a categorical variable message 264 corresponds to a vehicle operational parameter 282, where the classification or value is a current or historical classification or value of the vehicle operational parameter)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1, 4-10, 12-17, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20240064160 A1) in view of Cohen et al. (US 20230054575 A1). Regarding claim 1, Zhang teaches a method of operating an anomaly detection system for a vehicle, the method comprising, using one or more processing devices: receiving and decoding (0071; Decode block 904 extracts message IDs 906 and data content 908 from messages 902. The data content is encoded into the payload of a message. This encoding is defined by message protocol 910. For example, several data values may be scaled to integer values and stored in designated bits of the payload. [0072-0073] FIG. 11 is a flow chart of a method 1100 of training an anomaly classifier of an IDS, in accordance with various representative embodiments. At block 1102, a sequence of messages is retrieved from a training dataset. 
0036; Alternatively, data contents—such as signals and counters—may be extracted from message payloads using known message protocols. For example, a 64-bit payload may include four 15-bit values denoting encoded wheel speeds and a 4-bit checksum. Protocol information may be provided in a *.DBC database file or other format and stored for use by the IDS computer. Furthermore, 0120 teaches CAN signal extraction; signals are a data structure used in in-vehicle networks (see also 0046; and background section 0002, teaching an in-vehicle network (CAN) used for messaging)); claim 11; extracting the data content of an input feature vector from a message payload based on identified data block boundaries in the message payload; [0106] FIGS. 14 and 15 show relevant CAN message content in a dataset collected while a vehicle accelerates from 0 to 30 mph (about 48 km/h) and then decelerates back to 0 mph. The target CAN ID is 254, which is related to vehicle speed information, and the dataset has 707 messages with this CAN ID.) training message sequences corresponding to messages transmitted in an in-vehicle network (IVN); (Figs 2-3, 10; 0030; After generating the graph, a GNN is trained based on generated message graphs; [0031] When the communication is used in a vehicle or machine, the alarm may be used to alert the operator. The alarm may be used to initiate a mitigation process, such as entering a “safe mode” of operation. [0033] FIG. 2 shows an apparatus 200 for training an IDS, in accordance with various representative embodiments. Apparatus 200 receives anomaly-free messages from one or more datasets 202 and produces weight values 204 for anomaly detector 206. Message graph generator 208 receives the messages from datasets 202 and generates graph data and input feature vectors therefrom. The weight values are used in anomaly detector 206 and updated by weight updater 210.
Various techniques for updating weight values of neural networks and classifiers are known in the art. [0034] FIG. 3 shows an apparatus 300 for training an IDS, in accordance with various representative embodiments. Apparatus 300 receives messages from one or more datasets 302 and produces weight values 304 for anomaly classifier 306. At least some of the messages in datasets 302 contain known anomalies corresponding to known network intrusions. Message graph generator 308 receives the messages from datasets 302 and generates graph data and input feature vectors therefrom. The weight values are used in anomaly classifier 306 and updated by weight updater 310. Various techniques for updating weight values or neural networks and classifiers are known in the art.) to obtain signal content of the messages ([0036] Alternatively, data contents—such as signals and counters—may be extracted from message payloads using known message protocols. For example, a 64-bit payload may include four 15-bit values denoting encoded wheel speeds and a 4-bit checksum. Protocol information may be provided in a *. DBC database file or other format and stored for use by the IDS computer. [0071] Feature vectors 410 contains node attributes—such as message data content—and edge attributes such as edge pair counts. Decode block 904 extracts message IDs 906 and data content 908 from messages 902. The data content is encoded into the payload of a message. This encoding is defined by message protocol 910. For example, several data values may be scaled to integer values and stored in designated bits of the payload. [0076] For example, CAN message contents may be divided based on the rate of bit-flips. The general idea is that for each signal semantic, the most significant bit in the related data block will vary much slower than the least significant one. 
In this case, if a bit with a high bit-flip rate is followed by one with a low rate, these two bits probably belong to two different signal semantics or data blocks.) constructing a model (GNN graph generator) based on the signal content obtained (examiner points to the above mapping showing how vehicle CAN messages are formatted as signals and messages and payload are extracted and therefore must have been decoded) from the decoded training message sequences, (Mapping above; GNN created from the graph generator 0031; GNN graph neural network model) (generating the graph model) (0071-0072] FIG. 10 is a flow chart of a method 1000 of training an anomaly detector of an IDS, in accordance with various representative embodiments. Graph data is initialized at block 1002 by setting the nodes of the graph. These can be determined from a training dataset or from a specification 1004 of the message protocol, for example. The dataset may only contain message sequences without anomalies. At block 1006, a sequence of messages is retrieved from the training dataset. At block 1008, the message IDs are decoded from the messages and edges are generated in the graph data for consecutive pairs of message IDs. In addition, the pair counts for the edges are generated by determining how many time each pair of IDs exists in the sequence. At block 1010, the data content of the messages is determined (again using protocol information 1004). The data content and the pair counts are used to generate a feature vector for each node of the graph. The feature vectors are processed through an anomaly detector that includes a graph neural net (GNN) and one-class classifier at block 1012. Weight values of the anomaly detector are updated at block 1014. A number of techniques for neural network weight updates are known to those of skill in the art. 
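The bit-flip heuristic quoted above from Zhang at [0076] (within one signal, the most significant bit of a data block flips much more slowly than the least significant one, so a sharp drop in flip rate between adjacent bits suggests a boundary between data blocks) can be illustrated with a short sketch. The code below is a hypothetical illustration, not code from either reference; the `drop` threshold and the 4-bit example payloads are invented for demonstration.

```python
def bit_flip_rates(payloads, width):
    """Fraction of consecutive payload pairs in which each bit position flips.

    Rates are returned most-significant bit first (index 0 = MSB).
    """
    flips = [0] * width
    for prev, cur in zip(payloads, payloads[1:]):
        changed = prev ^ cur
        for i in range(width):
            if (changed >> (width - 1 - i)) & 1:
                flips[i] += 1
    n = max(len(payloads) - 1, 1)
    return [f / n for f in flips]


def boundary_guesses(rates, drop=0.5):
    """Guess data-block boundaries: a sharp drop in flip rate between
    adjacent bit positions suggests the next bit starts a new block."""
    return [i for i in range(1, len(rates)) if rates[i] < rates[i - 1] * drop]
```

For a simple incrementing counter, the rates decrease monotonically from LSB to MSB; the heuristic fires where a fast-flipping bit (the LSB of one signal) is followed by a slow-flipping bit (the MSB of the next signal).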
If there are more message sequences in the training data set, as depicted by the positive branch from decision block 1016, flow returns to block 1006 and the next message sequences is retrieved. Otherwise, as depicted by the negative branch from decision block 1016, the final updated weight values are stored for later use.) wherein the model includes a plurality of states (states and changes between state) corresponding to observed signal values of the signal content (examiner points to the above mapping showing how vehicle CAN messages are formatted as signals and messages and payload are extracted and therefore must have been decoded) in the decoded training message sequences and state transitions between respective states of the plurality of states; (0040; 0105; 0108; 0147 In fact, changes of vehicle states can also cause CAN message variations but in a reasonable way. Intuitively, CAN message contents will change to reflect different vehicle states. On the other hand, vehicle states can also affect CAN message sequences since some ECUs may not get activated all the time to prolong battery life in vehicles. For example, tire pressure sensors will sleep most of the time and wake up only when vehicles start to travel at high speeds (over 40 km/h), or during diagnosis and the initial CAN ID binding phases. 0108; Changes between different vehicle states) training the model by supplying, to the model, (i) first messages sequences corresponding to the decoded training message sequences and (0033-0034; training dataset; FIG. 2 shows an apparatus 200 for training an IDS, in accordance with various representative embodiments. Apparatus 200 receives anomaly-free messages from one or more datasets 202 and produces weight values 204 for anomaly detector 206. Message graph generator 208 receives the messages from datasets 202 and generates graph data and input feature vectors therefrom. The weight values are used in anomaly detector 206 and updated by weight updater 210.) 
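The FIG. 10 flow the examiner maps here (one node per message ID, an edge with a pair count for each consecutive pair of message IDs, and a per-node input feature vector combining the node's data content with its edge pair counts) can be sketched roughly as follows. This is an illustrative reconstruction, not code from Zhang; the IDs and data contents are invented.

```python
from collections import Counter


def build_message_graph(message_ids):
    """One node per CAN message ID; one edge, with a pair count,
    for each pair of consecutive message IDs in the sequence."""
    nodes = sorted(set(message_ids))
    pair_counts = Counter(zip(message_ids, message_ids[1:]))
    return nodes, pair_counts


def node_feature(node, pair_counts, data_content):
    """Input feature vector for a node: its message data content
    followed by the pair counts of every edge touching the node."""
    edges = sorted(p for p in pair_counts if node in p)
    return list(data_content.get(node, [])) + [pair_counts[p] for p in edges]
```

In Zhang's pipeline these per-node feature vectors would then be fed through the GNN and one-class classification layer; that stage is omitted here.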
(ii) second message sequences not contained in the decoded training message sequences; ( [0034] FIG. 3 shows an apparatus 300 for training an IDS, in accordance with various representative embodiments. Apparatus 300 receives messages from one or more datasets 302 and produces weight values 304 for anomaly classifier 306. At least some of the messages in datasets 302 contain known anomalies corresponding to known network intrusions. Message graph generator 308 receives the messages from datasets 302 and generates graph data and input feature vectors therefrom. The weight values are used in anomaly classifier 306 and updated by weight updater 310. Various techniques for updating weight values or neural networks and classifiers are known in the art.) subsequent to training the model, (0009-0010; Figs 2-3, 10-11, 0033-0035; 0038-0042; examiner points out most of the prior art recited methods for training and using an anomaly detection system) executing the trained model to identify anomalous message sequences transmitted in the IVN by ( See mapping above vehicle network; 0049; 0041; A CAN bus Intrusion Detection System (IDS) is disclosed that can efficiently detect CAN message injection/suspension and message falsification attacks at the same time. Instead of a simple combination of the above two traditional IDSs, a CAN message graph is used to integrate message contents with statistical message sequences in terms of CAN ID pairs.) (i) receiving a decoded IVN message sequence, (0038; IDS 400 includes first stage 412 that performs anomaly detection. The first stage includes a graph neural network (GNN) 416. The structure of GNN 416 is based on the graph structure 414. Input feature vectors 418 are processed by GNN 416 to produce output feature vectors 420. In turn, output feature vectors 420 are passed through a one-class classification layer 422 that determines if a sequence of messages contains normal data 424 or one or more anomalies 426. 
) (ii) outputting, from the trained model, a value (Figs 14-17; 0071-0073; feature vectors and values output) based on state transitions between states of signals (transition of the state of the car) contained in the decoded IVN message sequence, and (Fig 16-17; 0035; To take advantage of crowdsourcing while protecting user data privacy, federated learning may be used to train a universal model that covers different driving scenarios and vehicle states. Extensive experiment results show the effectiveness and efficiency of the disclosed approach; 0106; In a single message interval, the vehicle is considered to be in a constant state, while two CAN messages from two different intervals may correspond to two different vehicle states. From FIG. 15 , an obvious value changes in the second and fourth data block can be seen, which are not reflected in FIG. 14 . Besides, the fourth data block seems to be related to the vehicle speed, which makes its value vary within a certain range considering the vehicle speed limit. In other words, message content changes with CAN ID 254 should have the above two features or be reasonable. Note that the value of the third data block actually jumps between around 463 (odd indices) and around 2511 (even indices), which is not considered as a change here.; 0107; FIGS. 16 and 17 show examples of how vehicle states can affect CAN message sequences. In this example, a dataset with 23,963 CAN messages is split into 239 message intervals with each having 100 messages. For each message interval, the number of times for each possible CAN ID pair appears is counted and the corresponding statistical message sequence is generated. First, all CAN message ID pairs in the dataset are recorded, then the two message sequences are compared based on cosine similarity. FIGS. 16 and 17 show comparison results under two situations. In the first scenario, any two consecutive CAN message sequences are compared. 
In detail, cosine similarity between 1st and 2nd message sequence is indexed in 1st interval pair, cosine similarity between 2nd and 3rd message sequence is indexed in 2nd interval pair, and so on. In the second scenario any two CAN message sequences with 50 message intervals apart are compared. Cosine similarity between 1st and 51st message sequence is indexed in 1st interval pair, cosine similarity between 2nd and 52nd message sequence is indexed in 2nd interval pair, and so on 0108; Based on the above two comparisons, a step change can be seen when two message sequences are 50 intervals apart, which cannot be observed in the first situation. In a short time-interval, i.e., when two consecutive message sequences are compared, the vehicle state does not change. In this case, message sequences vary within a certain range, which is reflected in FIG. 16 . On the other hand, two message sequences with 50 intervals apart will correspond to two different vehicle states. Since some ECUs, such as tire pressure sensors, will only get activated in some vehicle states, noticeable variations, such as the step change indicated by box 1702 in FIG. 17 , can happen. In addition, message sequence changes may further have intrinsic connections with some message content variations caused by vehicle state changes. Thus, it is desirable to collect data from as many driving scenarios and vehicle states as possible, which may have to come from different vehicles considering the limitations discussed above. Federated learning can then be applied for model training while protecting data privacy. (iii) outputting, based on the value, an indication of whether the decoded IVN message sequence includes an anomalous message sequence. (mapping above + 0031-0032; FIG. 1 is a block diagram of an intrusion detection system (IDS) 100 in accordance with various representative embodiments. 
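The statistical message-sequence comparison cited from Zhang around FIGS. 16-17 (count every recorded CAN ID pair within each message interval, then compare intervals by cosine similarity) can be sketched as below. The sequences and pair vocabulary are invented for illustration; per [0107], the vocabulary would be built from all CAN ID pairs recorded in the whole dataset so that vectors from different intervals are directly comparable.

```python
import math
from collections import Counter


def pair_count_vector(message_ids, all_pairs):
    """Counts of consecutive message-ID pairs over a fixed pair vocabulary."""
    counts = Counter(zip(message_ids, message_ids[1:]))
    return [counts.get(pair, 0) for pair in all_pairs]


def cosine_similarity(u, v):
    """Cosine similarity of two count vectors; 0.0 if either is all zeros."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

Two intervals from the same vehicle state yield similarity near 1.0; the step change Zhang describes at 50 intervals apart would appear as a drop in this value.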
IDS 100 is configured to receive a sequence of messages 102 of a communication network and generate an alarm 104 when an anomaly is detected in the sequence of messages. Each message includes data content and a message identifier. IDS 100 includes a message graph generator 106. Message graph generator 106 is configured to receive the sequence of messages 102 and generate therefrom graph data denoting a node for each message identifier in the sequence of messages and an edge for each pair of consecutive message identifiers in the sequence of messages, each edge linking two nodes. Message graph generator 106 also generates an input feature vector for each node in the graph data. The input feature vector denotes data content of messages that include the message identifier associated with the node and a pair count for each edge connected to the node. The pair count for each edge in the graph data denotes the number of times the associated pair of consecutive message identifiers occurs in the sequence of messages. IDS 100 also includes anomaly detector 108 configured to process the input feature vectors using a graph neural network (GNN) to produce first output feature vectors, and then classify the sequence of messages as containing an anomaly or not by processing the first output feature vectors through one or more first output layers. Alarm 104 is asserted when an anomaly is detected. When the communication is used in a vehicle or machine, the alarm may be used to alert the operator. The alarm may be used to initiate a mitigation process, such as entering a “safe mode” of operation. Optionally, IDS 100 may include anomaly classifier 110 configured to process the input feature vectors using a second graph neural network, based on the same graph data, to produce second output feature vectors, and then classify the anomaly. The anomaly may be classified by processing the second output feature vectors through one or more second output layers. 
The classifier need only be operated when an anomaly is detected by anomaly detector 108.) and controlling one or more functions of the vehicle based on the indication (0031; Alarm 104 is asserted when an anomaly is detected. When the communication is used in a vehicle or machine, the alarm may be used to alert the operator. The alarm may be used to initiate a mitigation process, such as entering a “safe mode” of operation.) Zhang does not explicitly teach and wherein each of the plurality of states in the model includes the observed signal values of the signal content of two or more correlated signals contained in the decoded training message sequences; In an analogous art Cohen teaches wherein each of the plurality of states in the model includes the observed signal values (vehicle data points) of the signal content of two or more correlated signals (current values of one or more correlated messages (e.g., pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like) contained in the decoded training message sequences;([0041] In some embodiments, first layer categorical variable model(s) 220 predict the classification of a given categorical variable message 264 (e.g., probability of state of gear being reverse, neutral, and/or the like) based on recent historical values of the message (e.g., 10-20 different data points prior to the current value, a predetermined range of historical values, and/or the like). 
In some embodiments, first layer categorical variable model(s) 220 predicts a probability associated with a classification for a given categorical variable message 264 (e.g., 80% probability of state of gear being 5th gear) based on current values of one or more correlated messages (e.g., pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like), previously observed data points for the correlated messages (e.g., 10-20 different data points prior to the current value of the correlated messages, a predetermined range of historical values), and/or the like. In some embodiments, first layer categorical variable model(s) 220 predict the classification of a given categorical variable message 264 (e.g., 0.1% probability of state of gear being reverse) based on any combination of recent data points on historical values as well as additional correlated messages. In various embodiments, a current or historical classification or value of a categorical variable message 264 corresponds to a vehicle operational parameter 282, where the classification or value is a current or historical classification or value of the vehicle operational parameter. [0038] Returning to FIG. 2, in some embodiments, first layer continuous variable model(s) 210 predict the value of a given continuous variable message 262 (e.g., revolutions per minute/RPM) based on recent historical values of the continuous variable message (e.g., 10-20 different data points prior to the current value, a predetermined range of historical values, and/or the like).
In some embodiments, first layer continuous variable model(s) 210 predicts a current value for a given continuous variable message 262 (e.g., RPM) based on current values of one or more correlated messages (e.g., acceleration pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like), previously observed data points for the correlated messages (e.g., 10-20 different data points prior to the current value of the correlated messages, a predetermined range of historical values, and/or the like), and/or the like. In some embodiments, first layer continuous variable model(s) 210 predict the value of a given continuous variable message 262 (e.g., RPM) based on any combination of recent data points on historical values as well as additional correlated messages. In various embodiments, a current or historical value of a continuous variable message 262 corresponds to a vehicle operational parameter 282, where the value is a current or historical value of the vehicle operational parameter.) It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to modify the teachings of [Zhang] to include [classifying states based on correlated signals] as is taught by [Cohen]. The suggestion/motivation for doing so is to improve detection of vehicle malfunctions and cyber-attacks using machine learning [0002].

Regarding claim 4, Zhang in view of Cohen teaches the method of claim 1, as disclosed above; Zhang does not explicitly teach, but Cohen teaches, wherein each of the plurality of states (vehicle state classification) includes at least two values for each of the two or more correlated signals.
(current values of one or more correlated messages (e.g., pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like) ([0041] In some embodiments, first layer categorical variable model(s) 220 predict the classification of a given categorical variable message 264 (e.g., probability of state of gear being reverse, neutral, and/or the like) based on recent historical values of the message (e.g., 10-20 different data points prior to the current value, a predetermined range of historical values, and/or the like). In some embodiments, first layer categorical variable model(s) 220 predicts a probability associated with a classification for a given categorical variable message 264 (e.g., 80% probability of state of gear being 5th gear) based on current values of one or more correlated messages (e.g., pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like), previously observed data points for the correlated messages (e.g., 10-20 different data points prior to the current value of the correlated messages, a predetermined range of historical values), and/or the like. In some embodiments, first layer categorical variable model(s) 220 predict the classification of a given categorical variable message 264 (e.g., 0.1% probability of state of gear being reverse) based on any combination of recent data points on historical values as well as additional correlated messages. In various embodiments, a current or historical classification or value of a categorical variable message 264 corresponds to a vehicle operational parameter 282, where the classification or value is a current or historical classification or value of the vehicle operational parameter. [0038] Returning to FIG.
2, in some embodiments, first layer continuous variable model(s) 210 predict the value of a given continuous variable message 262 (e.g., revolutions per minute/RPM) based on recent historical values of the continuous variable message (e.g., 10-20 different data points prior to the current value, a predetermined range of historical values, and/or the like). In some embodiments, first layer continuous variable model(s) 210 predicts a current value for a given continuous variable message 262 (e.g., RPM) based on current values of one or more correlated messages (e.g., acceleration pedal position, vehicle speed, torque of engine, gear, brakes, and/or the like), previously observed data points for the correlated messages (e.g., 10-20 different data points prior to the current value of the correlated messages, a predetermined range of historical values, and/or the like), and/or the like. In some embodiments, first layer continuous variable model(s) 210 predict the value of a given continuous variable message 262 (e.g., RPM) based on any combination of recent data points on historical values as well as additional correlated messages. In various embodiments, a current or historical value of a continuous variable message 262 corresponds to a vehicle operational parameter 282, where the value is a current or historical value of the vehicle operational parameter.) It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to modify the teachings of [Zhang] to include [states based on correlated values] as is taught by [Cohen]. The suggestion/motivation for doing so is to improve detection of vehicle malfunctions and cyber-attacks using machine learning [0002].

Regarding claim 5, Zhang in view of Cohen teaches the method of claim 4; in particular, Zhang teaches wherein the trained model identifies probabilities (probability) of each of the state transitions between the respective states of the plurality of states.
(single state transition from normal to anomaly) (See table II 0101-0103; When a new anomaly CAN message graph comes to the second-stage classifier, the openmax layer will follow METHOD 1, listed below, for attack type classification with potential unknown anomaly rejection. From METHOD 1, it can be seen that the openmax layer in general adapts the softmax function to open world recognition, which is realized by introducing an extra class c0 (line 8 of METHOD 1). Such a class is used to include all the anomaly CAN message graphs that are not quite similar to any existing attack type, and further infer a potential unknown class. Line 4 of METHOD 1 generates probabilities in which the new anomaly CAN message graph belongs to a selected number of top-ranked classes or attack types. Such probabilities are derived from the corresponding fitted Weibull distributions and used to revise the activation vector (line 5 of METHOD 1)) Regarding claim 6, Zhang in view of Cohen teach the method of claim 1, and is disclosed above, Zhang further teaches wherein outputting the value from the trained model comprises one of: calculating a distance between a first state of the plurality of states corresponding to a first message in the decoded IVN message sequence and a second state corresponding to a second message in the decoded IVN message sequence and calculating the value based on the distance; (Examiner notes element (1) of the limitation is part of a “one of” limitation and is not being elected) and calculating the value using a probability heuristic method. (0101-0103; Method 1: teaches the output is a probability see the formula) Regarding claim 7, Zhang in view of Cohen teach the method of claim 6, in particular Zhang teaches wherein the distance corresponds to a number of state transitions in the model required to traverse between the first state and the second state. 
([0108]; interval distance: Based on the above two comparisons, a step change can be seen when two message sequences are 50 intervals apart, which cannot be observed in the first situation. In a short time-interval, i.e., when two consecutive message sequences are compared, the vehicle state does not change. In this case, message sequences vary within a certain range, which is reflected in FIG. 16. On the other hand, two message sequences with 50 intervals apart will correspond to two different vehicle states. Since some ECUs, such as tire pressure sensors, will only get activated in some vehicle states, noticeable variations, such as the step change indicated by box 1702 in FIG. 17, can happen. In addition, message sequence changes may further have intrinsic connections with some message content variations caused by vehicle state changes. Thus, it is desirable to collect data from as many driving scenarios and vehicle states as possible, which may have to come from different vehicles considering the limitations discussed above. Federated learning can then be applied for model training while protecting data privacy.)
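The claim-7 distance, i.e. the number of state transitions in the model required to traverse between two states, corresponds to a shortest-path length on the state-transition graph. A breadth-first-search sketch (the traversal algorithm is an assumption; neither reference specifies one) would be:

```python
from collections import deque

def transition_distance(graph, start, goal):
    """Minimum number of state transitions needed to traverse from
    `start` to `goal` in a state-transition graph given as an
    adjacency dict. Returns None if `goal` is unreachable."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, dist = queue.popleft()
        for nxt in graph.get(state, ()):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Hypothetical four-state driving cycle.
g = {"idle": ["accel"], "accel": ["cruise"], "cruise": ["brake"], "brake": ["idle"]}
```

A state reachable only through many transitions (or not at all, as claim 8 contemplates for states absent from training) would yield a large or undefined distance, which is the kind of value an anomaly score could be based on.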
Regarding claim 8, Zhang in view of Cohen teaches the method of claim 7; in particular, Zhang teaches wherein the second state does not correspond to any message contained in the decoded (see mapping above) training message sequences (Figs. 10-12; messages 1202 are not the training data, see mapping above; [0074]; [0104]-[0110], which teach the anomalies are within the data run through the graph model, which uses CAN message learning that identifies transitions between states and therefore does not correspond to the training data).

Regarding claim 10, Zhang in view of Cohen teaches the anomaly detection system of claim 9, as discussed above; Zhang further teaches a decoder configured to decode the training message sequences (training data is provided from a collected dataset, which is collected from the CAN message data of the vehicle) to obtain the decoded training message sequences ([0071]: Decode block 904 extracts message IDs 906 and data content 908 from messages 902. The data content is encoded into the payload of a message. This encoding is defined by message protocol 910. [0072]-[0073]: FIG. 11 is a flow chart of a method 1100 of training an anomaly classifier of an IDS, in accordance with various representative embodiments. At block 1102, a sequence of messages is retrieved from a training dataset. [0036]: Alternatively, data contents—such as signals and counters—may be extracted from message payloads using known message protocols. For example, a 64-bit payload may include four 15-bit values denoting encoded wheel speeds and a 4-bit checksum. Protocol information may be provided in a *.DBC database file or other format and stored for use by the IDS computer.
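The [0036] example just quoted describes a 64-bit payload carrying four 15-bit encoded wheel speeds plus a 4-bit checksum (4 × 15 + 4 = 64 bits). A decoding sketch follows; the MSB-first field order is an assumption, since a real layout would come from the *.DBC protocol definition:

```python
def decode_wheel_speed_payload(payload):
    """Decode the example layout from Zhang [0036]: a 64-bit CAN
    payload holding four 15-bit encoded wheel speeds followed by a
    4-bit checksum. MSB-first bit packing is an assumption; actual
    signal positions are defined by the message protocol (*.DBC)."""
    assert 0 <= payload < 1 << 64
    speeds = []
    for i in range(4):
        shift = 64 - 15 * (i + 1)           # field i sits above the rest
        speeds.append((payload >> shift) & 0x7FFF)
    checksum = payload & 0xF                # final 4 bits
    return speeds, checksum

# Round-trip: pack four known values plus a checksum, then decode.
vals = [100, 200, 300, 400]
p = 0
for v in vals:
    p = (p << 15) | v
p = (p << 4) | 0xA
speeds, chk = decode_wheel_speed_payload(p)
```

The round-trip at the bottom packs four known values and recovers them, illustrating the kind of payload-to-signal extraction the quoted passage attributes to the decoder.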
Furthermore, [0120] teaches CAN signal extraction; signals are a data structure used in in-vehicle networks (see also [0046] and background section [0002], teaching an in-vehicle network (CAN) used for messaging). Therefore, signals are a data structure used in IVNs, and when transmitted they must have been encoded into the data format and therefore must have been decoded.)

Regarding claim 11: extracting the data content of an input feature vector from a message payload based on identified data block boundaries in the message payload ([0106]: FIGS. 14 and 15 show relevant CAN message content in a dataset collected while a vehicle accelerates from 0 to 30 mph (about 48 km/h) and then decelerates back to 0 mph. The target CAN ID is 254, which is related to vehicle speed information, and the dataset has 707 messages with this CAN ID.)

Regarding claims 9, 12-17 and 19-20, the claims inherit the same rejection as claims 1, 4-8 above for reciting similar limitations in the form of a system claim ([0030]: intrusion detection system; see [0001], [0046]: in-vehicle networks) and computing device ([0028]: device; [0154]: hardware processors).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDERRAHMEN H CHOUAT, whose telephone number is (571) 431-0695. The examiner can normally be reached Mon-Fri from 9 AM to 5 PM PST.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Parry, can be reached at telephone number 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only.
Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.

Abderrahmen Chouat
Examiner, Art Unit 2451

/Chris Parry/
Supervisory Patent Examiner, Art Unit 2451

Prosecution Timeline

Sep 13, 2023
Application Filed
Jun 24, 2025
Non-Final Rejection — §103
Jul 21, 2025
Response Filed
Aug 29, 2025
Final Rejection — §103
Nov 17, 2025
Response after Non-Final Action
Jan 05, 2026
Request for Continued Examination
Jan 22, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596793
SYSTEM AND METHOD FOR PATTERN-BASED DETECTION AND MITIGATION OF COMPROMISED CREDENTIALS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592919
RE-AUTHENTICATION KEY GENERATION
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593197
APPLICATION REQUIREMENTS FOR VEHICLE-TO-EVERYTHING APPLICATIONS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12547911
CHARACTERIZING A COMPUTERIZED SYSTEM BASED ON CLUSTERS OF KEY PERFORMANCE INDICATORS
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12549643
PUSH NOTIFICATION DISTRIBUTION SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
77%
With Interview (+4.0%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 267 resolved cases by this examiner. Grant probability derived from career allow rate.
