Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
In the remarks filed on 11/19/2025, the applicant amended claims 1-3 and 6-10. Claims 4-5 are cancelled. Claims 11 and 12 are added.
With respect to claim objections:
Applicant's claim amendments and remarks filed on 11/19/2025 have been fully considered and overcome the claim objections as presented in the non-final Office action filed 08/29/2025. Therefore, the objections have been withdrawn.
With respect to 35 U.S.C. §112(f):
Applicant's claim amendments and remarks filed on 11/19/2025 have been fully considered and overcome the 35 U.S.C. § 112(f) interpretation as presented in the non-final Office action filed 08/29/2025. Therefore, the interpretation has been withdrawn.
With respect to 35 U.S.C. §112(b) rejections:
Applicant's claim amendments and remarks filed on 11/19/2025 have been fully considered and overcome the 35 U.S.C. § 112(b) rejections as presented in the non-final Office action filed 08/29/2025. Therefore, the rejections have been withdrawn.
With respect to 35 U.S.C. § 103 rejections:
Applicant's arguments filed on 11/19/2025 have been received and entered.
Applicant's arguments with respect to the newly amended independent claims (see Applicant Arguments 12-17), regarding the rejections of independent claims 1 and 9, have been fully considered.
Applicant argues that Kroyzer (US 20160330225 A1) in view of Chand (US 20160357177 A1) fails to teach the amended claim limitations formerly recited in claims 4 and 5, now incorporated into independent claims 1 and 9. Applicant further argues that Umemoto and Kaderábek do not cure the deficiencies. The Examiner understands Applicant's perspective but respectfully disagrees. Applicant asserts that the applied references fail to disclose “predicting a future state of the control system by using a simulator that runs a simulation of the controller and control system” and that Umemoto merely describes adjusting a parameter (i.e., valve opening). However, Umemoto discloses a simulation section (160) configured to perform a simulation of the control system, including dynamic simulation of control parameter changes and estimation of a future abnormality of the equipment based on the simulation result (see par. [0006], [0064], [0068], [0075]). This disclosure constitutes use of a simulator that runs a simulation of the controller and the control system, satisfying the simulation requirement as recited in amended claim 1 under the broadest reasonable interpretation (BRI).
Applicant further argues that Kaderábek merely teaches constructing a blacklist based on honeypot simulation results and does not cure the deficiencies of Umemoto. Kaderábek discloses simulating attack activity in a honeypot environment, generating predicted likelihoods of future attacks based on the simulation results, and constructing a blacklist of network addresses predicted to be a source of future attacks ([0007]). The claim does not require that the blacklist be generated from the same simulation instance disclosed in Umemoto, nor does it require that the simulation be from a specific domain. Under BRI, a blacklist generated from simulated system behavior satisfies the limitation. Moreover, the rejection relies on the combination of references: Umemoto for simulating a control system and predicting future anomalous states, and Kaderábek for constructing a blacklist based on predicted abnormal future states derived from simulation results. It would have been obvious to a POSITA to combine these references to proactively prevent entry into a predicted anomalous state.
Applicant also argues that the references do not disclose “defining the predicted register number related to the anomalous state of the control system and the range of the register value of the predicted register number, the range being the range of the register value within which the control system is predicted to enter the anomalous state”. Chand explicitly discloses monitored registers (i.e., configuration register 50, firmware control program) with defined acceptable ranges and evaluation of deviations from those ranges. Umemoto identifies specific control parameters that, when modified, affect measured equipment parameters, and estimates abnormality based on those predicted values. Under BRI, a “predicted register number” reads on a control parameter identified as affecting a measured operational parameter, and the associated anomaly range reads on the threshold or acceptable multi-value range defined for that monitored parameter. Thus, the references collectively teach identifying specific monitored parameters (i.e., registers), defining allowable ranges, simulating parameter changes, and determining whether predicted values exceed abnormal thresholds.
Applicant further argues that the applied references fail to disclose “predicting the future state of the control system in the simulation by monitoring whether, when the register value of the predicted register number is changed, the register value of the predicted register number is within the range”. The argument is not persuasive. Umemoto discloses modifying a control parameter in simulation, predicting the resulting operational parameter values, comparing predicted indicator values against a threshold defining normal versus abnormal conditions, and estimating future abnormality based on whether the predicted values cross that threshold ([0064], [0068], [0075]). This corresponds directly to changing a predicted register value (i.e., modifying a control parameter in simulation), monitoring the resulting simulated parameter values, and determining whether those values fall within or beyond an abnormal threshold (i.e., an anomaly range). Under BRI, an “anomaly range” reasonably reads on a threshold-defined region that separates normal from abnormal conditions. Monitoring whether a predicted simulated value exceeds or falls within such a threshold constitutes monitoring whether the value is within an anomaly range. Moreover, Chand also discloses hardware configuration registers (e.g., register 50), dynamic signatures composed of multiple time-varying quantities, rules establishing multi-value ranges for each quantity, and detecting tampering when values fall outside the established ranges (see [0012], [0018]-[0019], [0066]). Chand thus defines a specific register or monitored value and acceptable ranges for that register, and detects tampering based on whether the value falls within or outside the defined range. Chand therefore provides the structure of defining a register number and an associated anomaly range, while Umemoto provides simulation-based monitoring of parameter changes against abnormal thresholds.
It would have been obvious to a POSITA to apply Chand's register-range anomaly definitions to the simulation-based predictive framework of Umemoto to improve proactive anomaly detection.
Applicant also argues that there is no motivation to combine these four references; the Examiner respectfully disagrees. Kroyzer and Chand teach anomaly detection based on modeled operational ranges and register-based rules. Umemoto teaches simulating candidate control methods to estimate future abnormality. Kaderábek teaches generating a blacklist based on predicted likelihoods derived from simulated environments. It would have been obvious to a POSITA to combine simulation-based future-state prediction with register-based anomaly-range monitoring to proactively identify and blacklist configurations predicted to cause anomalous states, thereby enhancing system security and reducing false positives.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 9 recite multiple instances of “creating a blacklist,” including:
“creating a blacklist based on result of predicting,” “the processor dynamically creates the blacklist corresponding to a combination…,” and “creates the blacklist based on result of the simulation run by the simulator”. The claims do not clearly distinguish whether these limitations refer to the same blacklist, separate blacklists created at different stages, or successive modifications of a single blacklist. Further, the claims recite “storing the blacklist” prior to reciting that the processor “defines the predicted register number … and the range” included in the blacklist, which creates ambiguity as to whether the blacklist is fully defined before storage or modified thereafter. It is unclear by what steps the blacklist is created and how these blacklists differ structurally; the Examiner suggests that Applicant clarify the scope of the claims.
Claims 1 and 9 also recite “based on a result of the predicting”, “collected by the collecting”, “stored in the storing”, and “outputting a result of the determining”. The terms “the predicting”, “the collecting”, “the storing”, and “the determining” refer to operations and do not identify a structural element. It is unclear what component performs the checking and where the blacklist is stored, making the scope of the claims unclear. Further, the claims recite that “the blacklist includes a predicted register number of a register value”. This phrasing is unclear because a register number identifies a register, not a register value, and thus it is uncertain what is being predicted (i.e., a register number, a register value, or both). Furthermore, the phrase “in case of register value of the predicted register number being changed to … state in future” is unclear because it does not specify the condition under which the change occurs, the reference value against which the change is measured, or whether the change is actual or simulated. As a result, the scope of the claims is indefinite. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims. Appropriate correction is required.
Claims 11 and 12 each recite “calculates an importance level based on (i) a level of impact caused when the control system enters the anomalous state and (ii) a transition time taken to enter the anomalous state, and assigns the importance level to the register value included in the blacklist”. These limitations fail to provide the boundary of what constitutes the “importance level”, including how the importance level is represented (e.g., numeric, categories, ranking), what scale is used, or what objective criteria define different levels. The phrase “level of impact” is a term of degree that is not defined in the claims and lacks objective boundaries. The claims do not specify what constitutes “impact”, what is being impacted (e.g., cost, safety, equipment), how “impact” is measured, or what distinguishes different “levels” of impact. The claims also do not specify the start point and end point of the interval used to compute the “transition time” (e.g., from when a register value is changed, from when the simulation begins, or from when a deviation occurs, to when a threshold is first crossed or to when the anomalous state is sustained). Furthermore, a “register value” is merely a value stored in a register, and the claims do not make clear whether the importance level is assigned to (i) a register value, (ii) a register number, (iii) an anomaly range, predicted state, or transition time, or (iv) the blacklist as a whole. As a result, the scope of the claims is indefinite. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 6-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kroyzer (US 20160330225 A1) in view of Chand (US 20160357177 A1), in further view of Umemoto (US 20210333768 A1), and in further view of Kaderábek (US 20210120022 A1).
Regarding claim 1, Kroyzer teaches an anomaly detection system that detects a future anomalous state of a control system (Kroyzer, systems for resisting malicious code from tampering with or otherwise exploiting an industrial control system (e.g., a SCADA), [0033]), the anomaly detection system comprising: a processor; and a memory including a program that, when executed by the processor, causes the processor to execute operations (Kroyzer), the operations including
collecting register values of a plurality of register numbers from a controller that controls the control system (Kroyzer, An industrial control system 130, in FIG. 3, can include one or more of the following elements: [0029] (1) a supervisory computer system (e.g., SCADA 106) (i.e., a register value collector), which gathers data on the process and sends commands to control the process, [0027] the data collected may include for example at least one of: data from sensors operating within the control system 104, tags (i.e., from SCADA 106, PLC 136, or DCS 110), SCADA processing data, IT data, operator data, log files (i.e., from operating systems, IT, and/or SCADA 106), network data or communication data, [0037] collecting data of the correct operational parameters from the at least one input device, the at least one input device is at least one of the industrial control system, a supervisory control and data acquisition (SCADA) system, a sensor, remote input/output (I/O) hardware, a virtual network and data logs, [0117-0118]) [Examiner interprets the tags (i.e., from SCADA 106, PLC 136, or DCS 110) as controller register numbers, their values as register values, and the system collecting those parameters from the control subsystem as the limitation above].
predicting a future state of the control system (Kroyzer, it comprises a prediction engine 20 configured to predict the expected change to the operational parameters in response to the commands issued; accordingly, the industrial control system 410 is configured to alert an operator if the predicted response is not realized, [0061] the prediction engine is configured to use a mathematical model of the industrial process plant 412 to predict the effect on one or more operational parameters in response to operation of one or more control elements 14. For example, the prediction engine may determine that opening a relief valve of a storage tank for a brief interval, e.g., several seconds, will lower the internal pressure of the storage tank by a given amount, or by a given range, [0063] the prediction engine 20 is configured to undergo a learning procedure to gather prediction data, [0064] In the predicting step 210, the response detector 18 predicts, via the prediction engine 20, the effect on one or more operational parameters by a predetermined modification of an operational state of one or more control devices. The modification may be small, such that its effect on an operational parameter does not negatively impact the operation of the industrial control plant 412, but large enough so that its effect on one or more operational parameters is both measurable and distinguished from fluctuations during normal operation. The predicted effect may be a discreet value, or a range of values, [0070] The predicting may be performed based on calculation of the effect the modification will have on the industrial process plant, [0090]) [Examiner interprets the prediction engine predicting the effect on operational parameters after a change (i.e., the future state) as the limitation above];
creating a blacklist based on a result of the predicting (Kroyzer, The anomaly detection system can analyze data representing current operational parameters of the industrial control system with respect to said model and create an alarm responsively to when the analyzing indicates a deviation from said model that exceeds a predetermined threshold, [0014] The training may also include classifying the data deviation such that the system may interpret which deviations from the correct data are acceptable and which are not acceptable (i.e., blacklist and whitelist), the anomaly detection system 100 may check the current operational parameter(s) (which may be the same parameters used to form the training data or different from the training data parameters but related in some way to the training data parameters), or the correlation of at least two current operational parameters, for any potential deviation from the training data that would indicate an abnormal or incorrect operation of the industrial control system 104. Such a deviation may be detected, if a portion of the industrial control system has been taken over by an attacker or otherwise manipulated. An operational parameter may fluctuate within a given range during normal operation, which range may be defined by analysis of historical data during said training. Values outside of the range in the training data would suggest an anomaly. In another example, comparison of two operational parameters, such as the ratio of the two parameters, which ratio may fluctuate within a given range during normal operation, may be used to determine if an anomaly is present. [0041-0043]) [Examiner interprets the model creating range values that differentiate whether the operational parameters are anomalous or not (i.e., blacklist vs. whitelist), based on training/learning the model, as the limitation above];
storing the blacklist (Kroyzer, FIG. 2, the anomaly detection system 100 can include data processing module 102, which can include a training module 114, an analysis module 116, and a data storage module 124. The data and/or the analysis may be stored in data storage module 124, [0041] the prediction engine records both the modification and information regarding the corresponding change in the operational parameters. The information includes the measured change in the operational parameter, and may also include information relating to the timing and duration of the change. The recorded information may be stored in a database, which is accessed by the prediction engine when compiling its prediction, [0067]) [Examiner interprets storing the data for analysis, including the measured change in the operational parameters (i.e., the range of change that is considered anomalous; the blacklist), as the limitation above];
determining whether the control system enters an anomalous state, by checking the register values collected by the collecting against the blacklist stored in the storing (Kroyzer, The anomaly detection system can analyze data representing current operational parameters of the industrial control system with respect to said model and create an alarm responsively to when the analyzing indicates a deviation from said model that exceeds a predetermined threshold, [0014] The training may also include classifying the data deviation such that the system may interpret which deviations from the correct data are acceptable and which are not acceptable (i.e., Black list and white list), the anomaly detection system 100 may check the current operational parameter(s) (which may be the same parameters used to form the training data or different from the training data parameters but related in some way to the training data parameters), or the correlation of at least two current operational parameters, for any potential deviation from the training data that would indicate an abnormal or incorrect operation of the industrial control system 104. Such a deviation may be detected, if a portion of the industrial control system has been taken over by an attacker or otherwise manipulated. an operational parameter may fluctuate within a given range during normal operation, which range may be defined by analysis of historical data during said training. Values outside of the range in the training data would suggest an anomaly. In another example, comparison of two operational parameters, such as the ratio of the two parameters, which ratio may fluctuate within a given range during normal operation, may be used to determine if an anomaly is present. 
[0041-0043] the response detector 18 compares the result of the monitoring step 230 to the prediction obtained in the prediction, [0073] In the determining step 250, the response detector 18 determines, using the results of the comparing step, whether or not an anomaly has occurred. If the results of the monitoring step deviate from the prediction by more than a predetermined threshold, the response detector determines that an anomaly has occurred. If they do not deviate more than a predetermined threshold, the response detector determines that an anomaly has not occurred, [0074]) [Examiner interprets the system checking current parameters/correlations against the trained ranges/criteria and flagging anomalies when a deviation beyond a threshold occurs (i.e., the blacklisted region) as the limitation above]; and
outputting a result of the determining (Kroyzer, a communication function may be performed when the detected deviation is above or below a predefined threshold. For example, the communication function may include at least one of: creating an alarm (e.g., a visual or auditory alarm via alarm module 122), communicating data to at least one of a control system (e.g., to the SCADA 106 or the DCS 110) and an operator (e.g., to a system user via user interface 120 or to a user of the industrial control system via HMI 132), and recording the data (e.g., in data storage module 124) or the alarm, [0045] responding to a detected deviation from the prediction includes at least one of: taking corrective actions, alerting, alarming, performing system overrides, or combinations thereof, [0086] the communication function comprises at least one of generating a visual or auditory alarm, communicating data related to the deviation to the industrial control system or an operator, [0131]) [Examiner interprets the system outputting alarms, commands, or corrective actions in response to determining the detected deviation (i.e., anomaly) as the limitation above].
wherein the blacklist includes: a predicted register number of a register value and a range of the register value of the predicted register number, the predicted register number being a register number that is predicted, in a case of the register value of the predicted register number being changed, to cause the control system to enter the anomalous state in future, among the plurality of register numbers holding the register values collected by the processor, the range being a range within which the control system is predicted to enter the anomalous state, and (Kroyzer, The training module can be configured to analyze historical data of operational parameters of the industrial control system and to determine normal operating criteria for evaluating current operational parameters of the industrial control system based on the analysis of the historical data. The data analysis module can be configured to analyze data indicative of current operational parameters of the industrial control system with respect to the normal operating criteria and to detect the presence of an anomaly based on a deviation determined responsively to the analysis of the current data, [0014] the response detector 18 predicts, via the prediction engine 20, the effect on one or more operational parameters by a predetermined modification of an operational state of one or more control devices. The modification may be small, such that its effect on an operational parameter does not negatively impact the operation of the industrial control plant 412, but large enough so that its effect on one or more operational parameters is both measurable and distinguished from fluctuations during normal operation. The predicted effect may be a discreet value, or a range of values, [0070] the response detector 18 compares the result of the monitoring step 230 to the prediction obtained in the prediction step 210, [0073] In the determining step 250, the response detector 18 determines, using the results of the comparing step, whether or not an anomaly has occurred. If the results of the monitoring step deviate from the prediction by more than a predetermined threshold, the response detector determines that an anomaly has occurred. If they do not deviate more than a predetermined threshold, the response detector determines that an anomaly has not occurred, [0074]) [Examiner interprets the prediction engine forecasting expected parameter-change ranges, with deviation beyond a threshold causing anomaly detection, as the limitation above];
the processor dynamically creates the blacklist corresponding to a combination of the register values of the plurality of register numbers collected by the processor (Kroyzer, the method may include a feedback system, such that the data of the current operational parameters may be sent to the training of step 8 so that the current data can be added to the library of the training data. An offline feedback system may be included between step 8 and step 6. This feedback system may be used in order to take the “trained” data and use it as part of the overall data analysis, [0044] The response detector 18 may carry out the method 200 at regular or random intervals. In addition, it may vary the modifying step 220 (and thus the prediction step 210) during different iterations of the method 200. In this way, an intruder cannot easily mimic the operation of the response detector 18, [0078] Non-production operating modes (i.e., non-anomalous or special) may include those attending maintenance operations, shutdown conditions, start-up conditions, and testing conditions, [0106]) [Examiner interprets the system using feedback loops and learning procedures to update the model dynamically, which effectively updates the range (i.e., the blacklist), as the limitation above].
predicts the future state of the control system by using a simulator that runs a simulation of the controller and the control system, and the blacklist creator creates the blacklist based on a result of the simulation run by the simulator (Kroyzer, it comprises a prediction engine 20 configured to predict the expected change to the operational parameters in response to the commands issued; accordingly, the industrial control system 410 is configured to alert an operator if the predicted response is not realized, [0061] the prediction engine is configured to use a mathematical model of the industrial process plant 412 to predict the effect on one or more operational parameters in response to operation of one or more control elements 14. For example, the prediction engine may determine that opening a relief valve of a storage tank for a brief interval, e.g., several seconds, will lower the internal pressure of the storage tank by a given amount, or by a given range, [0063] the prediction engine 20 is configured to undergo a learning procedure to gather prediction data, [0064] In the predicting step 210, the response detector 18 predicts, via the prediction engine 20, the effect on one or more operational parameters by a predetermined modification of an operational state of one or more one control devices. The modification may be small, such that its effect on an operational parameter does not negatively impact the operation of the industrial control plant 412, but large enough so that its effect on one or more operational parameters is both measurable and distinguished from fluctuations during normal operation. The predicted effect may be a discreet value, or a range of values, [0070] The predicting may be performed based on calculation of the effect the modification will have on the industrial process plant, [0090]);
defines a register number related to an anomalous state of the control system and an anomaly range of a register value of the register number, the anomaly range indicating a range of the register value within which the control system enters the anomalous state (Kroyzer, the data collected may include for example at least one of: data from sensors operating within the control system 104, tags (i.e., from SCADA 106, PLC 136, or DCS 110), SCADA processing data, IT data, operator data, log files (i.e., from operating systems, IT, and/or SCADA 106), network data or communication data, [0037] the anomaly detection system 100 may check the current operational parameter(s) (i.e., a register number) (which may be the same parameters used to form the training data or different from the training data parameters but related in some way to the training data parameters), or the correlation of at least two current operational parameters, for any potential deviation from the training data that would indicate an abnormal or incorrect operation of the industrial control system 104. Such a deviation may be detected, if a portion of the industrial control system has been taken over by an attacker or otherwise manipulated, [0042] an operational parameter may fluctuate within a given range during normal operation, which range may be defined by analysis of historical data during said training. Values outside of the range in the training data would suggest an anomaly. In another example, comparison of two operational parameters, such as the ratio of the two parameters, which ratio may fluctuate within a given range during normal operation, may be used to determine if an anomaly is present, [0043] control elements 14 include speed (for example, of a conveyor belt) and/or state (e.g., on/off, revolutions per minute (RPM), etc.) of a control element 14, [0055]) [Examiner interprets actual PLC/SCADA/DCS tags, which are memory-addressable values, as registers, and the detection system using operational-parameter ranges and decision thresholds repeatedly to detect anomalies, as the limitation above];
predicts the future state of the control system in the simulation by monitoring whether, when the register value of the predicted register number is changed, the register value of the register number defined by the anomalous state definer is within the anomaly range (Kroyzer, Fig 6, In the predicting step 210, the response detector 18 predicts, via the prediction engine 20, the effect on one or more operational parameters by a predetermined modification of an operational state of one or more control devices. The modification may be small, such that its effect on an operational parameter does not negatively impact the operation of the industrial control plant 412, but large enough so that its effect on one or more operational parameters is both measurable and distinguished from fluctuations during normal operation. The predicted effect may be a discreet value, or a range of values. In the modifying step 220, the response detector 18 performs the modification. In the monitoring step 230, the response detector 18 monitors information provided by the sensors 16. The monitoring may be performed during and/or after the modification. In the comparing step 240, the response detector 18 compares the result of the monitoring step 230 to the prediction obtained in the prediction step 210. In the determining step 250, the response detector 18 determines, using the results of the comparing step, whether or not an anomaly has occurred. If the results of the monitoring step deviate from the prediction by more than a predetermined threshold, the response detector determines that an anomaly has occurred, [0070-0074]) [Examiner interprets the system predicting the effect of a predetermined modification on one or more operational parameters, then comparing the result to the expected value or threshold, as the limitation above].
Although Kroyzer teaches creating and updating ranges or thresholds for operational parameters of SCADA data such as tags (i.e., from SCADA 106, PLC 136, or DCS 110), and comparing the current parameters to a predetermined range or threshold to detect the anomalous state, Kroyzer does not explicitly teach:
collecting register values of a plurality of register numbers from a controller that controls the control system; predicting register number and its range; generating a blacklist based on prediction and storing the blacklist created; checking the register values collected by the register value collector against the blacklist stored; creating a blacklist based on combination of multiple register values; predicts the future state of the control system by using a simulator that runs a simulation of the controller and the control system, and the blacklist creator creates the blacklist based on a result of the simulation run by the simulator; defines a register number related to an anomalous state of the control system and an anomaly range of a register value of the register number, the anomaly range indicating a range of the register value within which the control system; predicts the future state of the control system in the simulation by monitoring whether, when the register value of the predicted register number is changed, the register value of the register number defined by the anomalous state definer is within the anomaly range
However, Chand teaches:
collecting register values of a plurality of register numbers from a controller that controls the control system (Chand, A hardware configuration register 50 (implemented in volatile or nonvolatile memory 45 and/or as physical switch positions) may hold settings for controlling the operation of the control device 16 and may additionally provide manufacturing data about the control device 16 including, for example, a serial number, module function type, manufacturer name, manufacture date, …provide for a read-only memory including an encrypted certification code embedded by the manufacturer indicating authenticity of the hardware. The hardware configuration registers may further provide a storage location for output data from one or more diagnostic programs implemented by the operating system 48, for example, those that indicate memory or other faults, instruction execution speed, memory capacity or checksum results. In one embodiment, the diagnostic program outputs CPU utilization, free memory, and stack depth. The diagnostic program may also monitor network communication including port traffic over a predetermined interval and/or change in average port traffic such as may indicate a denial-of-service type attack, [0063] The operating thumbprint 70 for each mode 72 of the thumbprint table 62 designates a specific set of thumbprint source data 74, for example, the control program 46, the firmware operating system 48, the configuration register 50, and environmental data held in various components of the control device 16 including the wire connection states of the connection management circuit 40, its address and/or location in the factory environment (for example held in communication or memory modules), operating temperature and the like from distributed internal sensors. 
In one example mode 72, the entire data set from each of the sources is reduced to a digest, [0066]) [Examiner interprets the collecting of register data (hardware configuration registers, I/O tables) from the control device's processor and memory as collecting register values of a plurality of register numbers from a controller that controls the control system];
predicting register number and its range (Chand, The security program executes to receive a dynamic signature from a given control device through the network port, decrypt the dynamic signature, analyze the dynamic signature against rules establishing a multi-value range of acceptable dynamic signature values, and provide an output indicating whether the received dynamic signature is outside the multi-value range of acceptable dynamic signature values, [0012] The dynamic signature may include multiple time varying quantities wherein the rules establish multi-value ranges for each quantity, [0018] The multi-value ranges may vary as a function of other varying quantities, [0020] the rules of the dynamic stored thumbprints 100′ may be allowed to evolve within certain ranges so as to eliminate false positives caused by natural evolution of the state of the control system by using historical data to create new training sets that are used to constantly update the dynamic stored thumbprints 100′ [0113] The implicit rules of the dynamic stored thumbprints 100′ may also be randomly perturbed at the range thresholds to change the precise thresholds at which a response script of process block 154 is invoked. This randomization can help defeat “probing” of the dynamic stored thumbprints 100′, for example, on a separate industrial control system 10, where the probing is used to collect information to defeat other industrial control systems 10. 
The randomization may be performed, for example, by randomly selecting among different elements of a teaching set to provide slightly different teaching rules generated by a machine learning system 201, or by randomly adjusting the thresholds of ranges of rules used to evaluate dynamic stored thumbprint 100′ by minor amounts that still ensure that the function of the ranges to test for out-of-range conditions are still substantially met, [0114]) [Examiner interprets the system's mapping of each thumbprint sub-value to a specific source (i.e., hardware register/I/O point), applying rules to establish that register's valid multi-value range, and updating it over time via historical analysis or machine learning, as well as the system's determining of allowable ranges for each register in advance of anomalies, as predicting a register number and its range];
generating a blacklist based on prediction and storing the blacklist created (Chand, The thumbprint map 110 may generally identify each of the sub-thumbprints 78 by the function 112 of the source data 74 (for example: operating system 48, control program 46, hardware registers 50) and will give a weight 114 indicating the significance of a possible mismatch between stored thumbprint 100 and received thumbprints 70 or sub-thumbprint 78. The thumbprint map 110 may also provide a response script 118 indicating possible responses to a detected mismatch between the operating thumbprint 70 and the stored thumbprint 100, [0074] A white list may be established indicating, for example, changes or change combinations that are generally benign, for example, expected patterns of changes in the hardware registers 50 may be mapped to low significance level 166, [0099] mismatches caused by inauthentic control programs 46 or operating systems 48, that also match no previous thumbprint 108, that occur during unscheduled times, or that are caused by wire-off signals for critical functions may be given a high significance. Just as a white list may be established, a blacklist of configuration changes that are suspected, or have been predetermined to suggest tampering, may create a high significance level 166. Changes that are individually benign or low significant but where the changes occur during in an environment of other high significance levels 166 or changes associated with a predetermined pattern of mismatches in other similar control devices 16 may also be promoted to a high significance level 166, [0100]) [Examiner interprets the generating of blacklist entries when a register change is predicted to be anomalous or indicative of tampering, and the storing of those entries for ongoing use, as generating a blacklist based on prediction and storing the blacklist created];
Checking the register value of the predicted register number against the range defined in stored blacklist (Chand, Just as a white list may be established, a blacklist of configuration changes that are suspected, or have been predetermined to suggest tampering, may create a high significance level 166. Changes that are individually benign or low significant but where the changes occur during in an environment of other high significance levels 166 or changes associated with a predetermined pattern of mismatches in other similar control devices 16 may also be promoted to a high significance level 166, [0100] This dynamic operating thumbprint 70′ cannot be easily compared against a static stored thumbprint but may nevertheless be compared against rules that, for example, establish ranges of values within which the operating thumbprint 70′ or the underlying data should vary, or correlations between values of the underlying data that can be used to detect a deviation from the normal pattern and excursions of these dynamic values. In this case, the stored thumbprint 100 described above may be replaced by more sophisticated dynamic signatures to otherwise provide the detection of mismatches used as has been described above. Referring now to FIG. 12, one method of implementing a dynamic stored thumbprint 100 makes use of a machine learning system 201 or the like. This machine learning system 201 may be trained, as is understood in this art, using a teaching set 205 of normal dynamic operating thumbprints 70′ together with an intentional corruption of those normal dynamic thumbprints 70′ or intentionally manufactured thumbprints implementing hypothetical tampering scenarios. After the machine learning system 201 is trained using the teaching set 205, it then receives the actual dynamic thumbprints 70′ to produce an output 203 that may be used by decision block 148 of FIG. 
5, [0011]) [Examiner interprets the system's comparing of specific register values (i.e., each thumbprint) from the dynamic thumbprints against the stored thumbprints (i.e., the whitelist/blacklist) as checking the register value of the predicted register number against the range defined in the stored blacklist].
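Solely for illustration of the claimed range-check concept, and not as a disclosure of any applied reference, the operation may be sketched as follows; the register numbers, ranges, and function names are hypothetical examples supplied for clarity:

```python
# Hypothetical sketch: checking collected register values against anomaly
# ranges stored in a blacklist. All names and values are illustrative and
# do not appear in Kroyzer, Chand, Umemoto, or Kaderábek.

def in_anomaly_range(value, anomaly_range):
    """Return True if a register value falls inside a (low, high) anomaly range."""
    low, high = anomaly_range
    return low <= value <= high

def check_against_blacklist(collected, blacklist):
    """collected: {register_number: value}; blacklist: {register_number: (low, high)}.
    Returns the register numbers whose collected values fall in a blacklisted range."""
    return [reg for reg, rng in blacklist.items()
            if reg in collected and in_anomaly_range(collected[reg], rng)]

# Example: register 40001 is blacklisted when its value lies in 90..100.
blacklist = {40001: (90, 100), 40002: (0, 5)}
collected = {40001: 95, 40002: 50}
hits = check_against_blacklist(collected, blacklist)  # → [40001]
```

Under this sketch, an anomaly determination reduces to testing whether each predicted register's current value falls within a stored anomaly range, consistent with the mapping of Chand's multi-value acceptable ranges above.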
creating a blacklist based on combination of multiple register values (Chand, a set of sophisticated rules that may recognize correlations or interrelations among different variables, [0021] a significance matrix 182 may be developed to map multiple conditions 184 to particular significance levels 166. Thus, for example, low significance (e.g., 0) may be mapped to conditions such as mismatched control program 46 that is nevertheless indicated to be authentic or occurring during a scheduled maintenance upgrades or a sub-thumbprint 78 that matches a previous thumbprint 108. Similarly, a wire loss indicated to be on a low importance function may garner a low significance level 166. A white list may be established indicating, for example, changes or change combinations that are generally benign, for example, expected patterns of changes in the hardware registers 50 may be mapped to low significance level 166. Changes that occur during a low alert status of the system may be given a low significance level 166. A low alert status may result from no or low numbers of mismatches or mismatches having low significance levels 166 at different control devices 16 or that occur on hardware that is redundant and thus can be readily mitigated, or when the occurrence of the mismatch has been acknowledgment by the contact individual with an indication that a high significance is not warranted, or should be overridden. In addition, particular input or output points identified to be important or leading indicators of a critical failure (or indicative of proper operations) may be received as inputs for the purpose of establishing an importance of other errors, [0099] Just as a white list may be established, a blacklist of configuration changes that are suspected, or have been predetermined to suggest tampering, may create a high significance level 166. 
Changes that are individually benign or low significant but where the changes occur during in an environment of other high significance levels 166 or changes associated with a predetermined pattern of mismatches in other similar control devices 16 may also be promoted to a high significance level 166, [0100]) [Examiner interprets the system's considering of multiple variables/registers together and, if their combined state matches a suspicious pattern, evaluating the significance level and blacklisting the condition, as creating a blacklist based on a combination of multiple register values];
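For illustration only, a blacklist keyed by combinations of register values, rather than by a single register, may be sketched as follows; the registers and the "pump on while valve closed" condition are hypothetical and are not taken from any applied reference:

```python
# Hypothetical sketch: a blacklist of register-value combinations.
# A combination matches when all of its register/value pairs are present
# in the currently collected values. Illustrative only.

def make_entry(register_values):
    """Freeze a {register_number: value} combination into a hashable blacklist key."""
    return frozenset(register_values.items())

def matches_blacklist(current, combo_blacklist):
    """True if the current register values contain any blacklisted combination."""
    current_items = set(current.items())
    return any(combo <= current_items for combo in combo_blacklist)

# Hypothetical condition: register 40001 (pump) = 1 while 40003 (valve) = 0.
combo_blacklist = {make_entry({40001: 1, 40003: 0})}
assert matches_blacklist({40001: 1, 40002: 7, 40003: 0}, combo_blacklist)
assert not matches_blacklist({40001: 0, 40003: 0}, combo_blacklist)
```

The subset test mirrors the mapped teaching that individually benign values may be promoted to high significance only when they occur together in a suspicious pattern.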
defines a register number related to an anomalous state of the control system and an anomaly range of a register value of the register number, the anomaly range indicating a range of the register value within which the control system (Chand, The security program executes to receive a dynamic signature from a given control device through the network port, decrypt the dynamic signature, analyze the dynamic signature against rules establishing a multi-value range of acceptable dynamic signature values, and provide an output indicating whether the received dynamic signature is outside the multi-value range of acceptable dynamic signature values, [0012] The dynamic signature may include multiple time varying quantities wherein the rules establish multi-value ranges for each quantity, detect tampering from dynamic variables which do not conform to a static thumbprint through the use of ranges encompassing multiple values of the dynamic signature, [0018-0019] the operating thumbprint 70 for each mode 72 of the thumbprint table 62 designates a specific set of thumbprint source data 74, for example, the control program 46, the firmware operating system 48, the configuration register 50, and environmental data held in various components of the control device 16 including the wire connection states of the connection management circuit 40, its address and/or location in the factory environment (for example held in communication or memory modules), operating temperature and the like from distributed internal sensors, [0066] This digital operating thumbprint 70 is then transmitted to the remote security-monitoring device where it is compared with a corresponding stored thumbprint to establish within a reasonable probability according to the digest scheme that the source data 74 of the control device 16 has not been modified or tampered with, [0067]) [Examiner interprets the thumbprint table 62 specifying which registers (i.e., configuration register 50) are to be monitored 
and defining rules/ranges for each monitored value as the limitation above]. The same rationale and motivation as set forth for claim 1 apply.
Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teaching of Kroyzer to include the concepts of collecting register values of a plurality of register numbers from a controller that controls the control system; predicting a register number and its range; generating a blacklist based on the prediction and storing the blacklist created; checking the register values collected by the register value collector against the blacklist stored; creating a blacklist based on a combination of multiple register values; and defining a register number related to an anomalous state of the control system and an anomaly range of a register value of the register number, the anomaly range indicating a range of the register value within which the control system, as taught by Chand, for the purpose of monitoring possible tampering that may only be evident in dynamically changing patterns of operation of the industrial control system [Chand: 0007] and detecting tampering from dynamic variables which do not conform to a static thumbprint through the use of ranges encompassing multiple values of the dynamic signature [Chand: 0019].
Kroyzer and Chand do not explicitly teach:
predicts the future state of the control system by using a simulator that runs a simulation of the controller and the control system, and the blacklist creator creates the blacklist based on a result of the simulation run by the simulator; predicts the future state of the control system in the simulation by monitoring whether, when the register value of the predicted register number is changed, the register value of the register number defined by the anomalous state definer is within the anomaly range
However, Umemoto teaches:
predicts the future state of the control system by using a simulator that runs a simulation of the controller and the control system (Umemoto, The control support apparatus include a simulation section for simulating a future state of the equipment when controlling the equipment by each of a plurality of candidate control methods according to an abnormality estimation result of the equipment…a simulation abnormality estimation section for estimating a future abnormality of the equipment based on a future state of the equipment for each of the plurality of candidate control methods, [0006] the simulation section 160 may also perform a simulation including a simulation model for simulating the operations of the control apparatus 110. For example, when the control apparatus 110 performs PID control or the like, the simulation section 160 may simulate the process of the control apparatus 110 changing the set value of the control parameter of the equipment 10 over time by PID control or the like, after the control support apparatus 120 sets the control parameter in the control apparatus 110, [0074]);
predicts the future state of the control system in the simulation by monitoring whether, when the register value of the predicted register number is changed, the register value of the register number defined by the anomalous state definer is within the anomaly range (Umemoto, The equipment abnormality estimation section 140 (i.e., the anomalous state definer) estimates that the equipment 10 is normal when the soundness indicator value is greater than a threshold value indicating the boundary between normal and abnormal, and estimate that the equipment 10 is abnormal when the soundness indicator value is less than or equal to the threshold value (i.e., monitoring within the anomaly range). Herein, the equipment abnormality estimation section 140 may use different values for the threshold used to determine the opportunity to generate a plurality of candidate control methods and run a simulation, and the threshold used to warn the operator or the like of the abnormality of the equipment 10 by an alarm or other means, [0064] In S410, the candidate generation section 150 is configured to generate candidate control methods that normalize at least one factor parameter detected by the factor detection section 145, and store them in the candidate DB 155. For example, the candidate generation section 150 may have a relationship in advance between each of the plurality of control parameters and at least one parameter whose measurement value can be changed by changing the control parameter (i.e., the predicted register number). The candidate generation section 150 uses this relationship to identify at least one control parameter that can be used to adjust the at least one factor parameter detected by the factor detection section 145. 
Then, the candidate generation section 150 is configured to generate one or more candidate control methods that modify the identified control parameter, [0068] In S430, the simulation abnormality estimation section 165 is configured to estimate the future abnormality of the equipment 10 based on the simulation result by the simulation section 160 (i.e., future state predictor) for each of the plurality of candidate control methods, that is, the measurement data of the equipment 10 assumed in the future calculated by the simulation. When the simulation section 160 performs a dynamic simulation, the simulation abnormality estimation section 165 may estimate the indicator value of the abnormality of the equipment 10 at each time point in the future (i.e., the future state), [0075]) [Examiner interprets the changing of a control parameter's (i.e., register value's) value in the simulation, and the monitoring of the resulting signals/indicators against a predefined abnormality threshold, as the limitation above].
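For illustration of the mapped simulation concept only, the following sketch perturbs a predicted register (here, a hypothetical setpoint), steps a toy plant model forward, and checks whether the monitored value enters a predefined anomaly range; the dynamics, register roles, and ranges are illustrative assumptions, not disclosures of Umemoto:

```python
# Hypothetical sketch: predict a future state by simulation and flag
# setpoints that drive a monitored register into its anomaly range.
# The plant dynamics and all numbers are illustrative only.

def simulate(setpoint, steps=20):
    """Toy first-order plant: the monitored value relaxes toward 2x the setpoint."""
    value = 0.0
    history = []
    for _ in range(steps):
        value += 0.5 * (2.0 * setpoint - value)  # simple lag dynamics
        history.append(value)
    return history

def predicts_anomaly(setpoint, anomaly_range):
    """True if any simulated future value falls inside the anomaly range."""
    low, high = anomaly_range
    return any(low <= v <= high for v in simulate(setpoint))

# Anomaly range for the monitored register: 90..120.
# Candidate setpoints whose simulated future state enters the range are
# recorded, analogous to creating a blacklist from the simulation result.
blacklist = [sp for sp in range(0, 100, 5)
             if predicts_anomaly(sp, (90.0, 120.0))]
```

The loop corresponds to running the simulation for each candidate control method and estimating the future abnormality from the simulated trajectory.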
Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teachings of Kroyzer and Chand to include the concepts of predicting the future state of the control system by using a simulator that runs a simulation of the controller and the control system, and predicting the future state of the control system in the simulation by monitoring whether, when the register value of the predicted register number is changed, the register value of the register number defined by the anomalous state definer is within the anomaly range, as taught by Umemoto, for the purpose of simulating a future state of the equipment when controlling the equipment by each of a plurality of candidate control methods according to an abnormality estimation result of the equipment and estimating a future abnormality of the equipment based on a future state of the equipment for each of the plurality of candidate control methods [Umemoto: 0006].
Kroyzer, Chand, and Umemoto do not explicitly teach:
the blacklist creator creates the blacklist based on a result of the simulation run by the simulator (Kaderábek, a method of generating a network address blacklist from data relating to attacks on honeypots. Attack data is collected in honeypots, including network address of attacks and time of attacks, and sent to a network security server. The network security server analyzes the attack data to generate a predicted likelihood of future attacks from network addresses in the activity data, and a network address blacklist is constructed including network addresses predicted likely to be a source of a future attack, [0007] one or more honeypots may be emulated on a computerized device or within a virtual machine or virtual machines. The honeypots in some examples will be left relatively unprotected by a firewall or other protection, so that unauthorized attempts by other computers to access the honeypots can be captured and analyzed. The honeypots are configured to appear to public network devices as real, operational computerized devices, [0027]) [Examiner interprets the system's use of an emulated honeypot environment to simulate attack conditions, generating a predicted likelihood of future attacks from network addresses in the activity data, and generating a blacklist from data relating to attacks on the honeypots, as the blacklist creator creating the blacklist based on a result of the simulation run by the simulator].
Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teachings of Kroyzer, Chand, and Umemoto to include the concept of the blacklist creator creating the blacklist based on a result of the simulation run by the simulator, as taught by Kaderábek, for the purpose of analyzing the attack data to generate a predicted likelihood of future attacks from network addresses in the activity data and constructing a network address blacklist including network addresses predicted likely to be a source of a future attack [Kaderábek: 0007].
Regarding claim 2, Kroyzer, Chand, Umemoto, and Kaderábek further teach the anomaly detection system of claim 1, wherein the processor creates the blacklist based on a result of a check of the combination of the register values of the plurality of register numbers against a combination of register values of a plurality of register numbers included in a blacklist that is previously created (Kroyzer, the anomaly detection system 100 may check the current operational parameter(s) (which may be the same parameters used to form the training data or different from the training data parameters but related in some way to the training data parameters), or the correlation of at least two current operational parameters, for any potential deviation from the training data that would indicate an abnormal or incorrect operation of the industrial control system 104. Such a deviation may be detected, if a portion of the industrial control system has been taken over by an attacker or otherwise manipulated, [0042] an operational parameter may fluctuate within a given range during normal operation, which range may be defined by analysis of historical data during said training. Values outside of the range in the training data would suggest an anomaly. In another example, comparison of two operational parameters, such as the ratio of the two parameters, which ratio may fluctuate within a given range during normal operation, may be used to determine if an anomaly is present, [0043] the method may include a feedback system, such that the data of the current operational parameters may be sent to the training of step 8 so that the current data can be added to the library of the training data. An offline feedback system may be included between step 8 and step 6. This feedback system may be used in order to take the “trained” data and use it as part of the overall data analysis, [0044])
Although Kroyzer teaches comparing correlations/combinations of multiple operational parameters to previously stored non-anomalous combinations in its model, identifying an anomalous state if mismatched, and using its ongoing feedback to update the criteria/thresholds or ranges, Kroyzer does not explicitly appear to teach:
comparing the combinations of register values of a plurality of register numbers against combinations stored in a blacklist
However, Chand teaches:
comparing the combinations of register values of a plurality of register numbers against combinations stored in a blacklist (Chand, a set of sophisticated rules that may recognize correlations or interrelations among different variables, [0021] The populated security table 92 may also provide, for each signature mode 72, thumbprint data 98 including a stored thumbprint 100 for that signature mode 72, previous valid thumbprints 108, and a thumbprint map 110. The thumbprint map 110 may generally identify each of the sub-thumbprints 78 by the function 112 of the source data 74 (for example: operating system 48, control program 46, hardware registers 50) and will give a weight 114 indicating the significance of a possible mismatch between stored thumbprint 100 and received thumbprints 70 or sub-thumbprint 78, [0073-0074] Just as a white list may be established, a blacklist of configuration changes that are suspected, or have been predetermined to suggest tampering, may create a high significance level 166. Changes that are individually benign or low significant but where the changes occur during in an environment of other high significance levels 166 or changes associated with a predetermined pattern of mismatches in other similar control devices 16 may also be promoted to a high significance level 166, [0100] the rules of the dynamic stored thumbprints 100′ may be allowed to evolve within certain ranges so as to eliminate false positives caused by natural evolution of the state of the control system. 
This evolution may be provided, for example, by using historical data to create new training sets that are used to constantly update the dynamic stored thumbprints 100′, [0113]) [Examiner interprets the system's use of dynamic operating thumbprints, which maintain a mapping of multiple register values, and its detecting of deviations by comparing the current multi-register combination to the stored thumbprint data and blacklists in the security table, as the limitation above]. The same rationale and motivation as set forth for claim 1 apply.
Regarding claim 3, Kroyzer, Chand, Umemoto, and Kaderábek further teach the anomaly detection system of claim 1, wherein the processor determines whether the control system enters the anomalous state, by checking the register value of the predicted register number against the range of the register value of the predicted register number defined in the blacklist stored in the storing (Kroyzer, the anomaly detection system 100 may check the current operational parameter(s) (which may be the same parameters used to form the training data or different from the training data parameters but related in some way to the training data parameters), or the correlation of at least two current operational parameters, for any potential deviation from the training data that would indicate an abnormal or incorrect operation of the industrial control system 104. Such a deviation may be detected, if a portion of the industrial control system has been taken over by an attacker or otherwise manipulated, [0042] an operational parameter may fluctuate within a given range during normal operation, which range may be defined by analysis of historical data during said training. Values outside of the range in the training data would suggest an anomaly. In another example, comparison of two operational parameters, such as the ratio of the two parameters, which ratio may fluctuate within a given range during normal operation, may be used to determine if an anomaly is present, [0043] the response detector 18 compares the result of the monitoring step 230 to the prediction obtained in the prediction step 210, [0073] In the determining step 250, the response detector 18 determines, using the results of the comparing step, whether or not an anomaly has occurred. If the results of the monitoring step deviate from the prediction by more than a predetermined threshold, the response detector determines that an anomaly has occurred. 
If they do not deviate more than a predetermined threshold, the response detector determines that an anomaly has not occurred, [0074])
Although Kroyzer teaches the general anomaly determination by checking the current operational parameters against a defined range within which the parameters should fall, with anything outside the range treated as an anomaly, Kroyzer does not explicitly appear to teach:
checking the register value of the predicted register number against the range defined in the stored blacklist
However, Chand teaches:
Checking the register value of the predicted register number against the range defined in stored blacklist (Chand, Just as a white list may be established, a blacklist of configuration changes that are suspected, or have been predetermined to suggest tampering, may create a high significance level 166. Changes that are individually benign or low significant but where the changes occur during in an environment of other high significance levels 166 or changes associated with a predetermined pattern of mismatches in other similar control devices 16 may also be promoted to a high significance level 166, [0100] This dynamic operating thumbprint 70′ cannot be easily compared against a static stored thumbprint but may nevertheless be compared against rules that, for example, establish ranges of values within which the operating thumbprint 70′ or the underlying data should vary, or correlations between values of the underlying data that can be used to detect a deviation from the normal pattern and excursions of these dynamic values. In this case, the stored thumbprint 100 described above may be replaced by more sophisticated dynamic signatures to otherwise provide the detection of mismatches used as has been described above. Referring now to FIG. 12, one method of implementing a dynamic stored thumbprint 100 makes use of a machine learning system 201 or the like. This machine learning system 201 may be trained, as is understood in this art, using a teaching set 205 of normal dynamic operating thumbprints 70′ together with an intentional corruption of those normal dynamic thumbprints 70′ or intentionally manufactured thumbprints implementing hypothetical tampering scenarios. After the machine learning system 201 is trained using the teaching set 205, it then receives the actual dynamic thumbprints 70′ to produce an output 203 that may be used by decision block 148 of FIG. 
5, [0011]) [Examiner interprets the system's comparing of specific register values (i.e., each thumbprint) from the dynamic thumbprints against the stored thumbprints (i.e., the whitelist/blacklist) as checking the register value of the predicted register number against the range defined in the stored blacklist]. The same rationale and motivation as set forth for claim 1 apply.
Regarding claim 6, Kroyzer, Chand, Umemoto, and Kaderábek further teach the anomaly detection system of claim 1, wherein the predicted register number is highly correlated with the register number included in the blacklist and related to the anomalous state of the control system (Kroyzer, the anomaly detection system 100 may check the current operational parameter(s) (which may be the same parameters used to form the training data or different from the training data parameters but related in some way to the training data parameters), or the correlation of at least two current operational parameters, for any potential deviation from the training data that would indicate an abnormal or incorrect operation of the industrial control system 104. Such a deviation may be detected, if a portion of the industrial control system has been taken over by an attacker or otherwise manipulated, [0042] an operational parameter may fluctuate within a given range during normal operation, which range may be defined by analysis of historical data during said training. Values outside of the range in the training data would suggest an anomaly. In another example, comparison of two operational parameters, such as the ratio of the two parameters, which ratio may fluctuate within a given range during normal operation, may be used to determine if an anomaly is present, [0043])
Although Kroyzer teaches that the system analyzes the correlation between parameters, such that if one parameter is correlated with another parameter that is related to an anomalous condition, the correlation is used for prediction, Kroyzer does not explicitly teach:
Correlation between predicted register numbers and anomaly-defined register numbers
However, Chand teaches:
Correlation between predicted register numbers and anomaly-defined register numbers (Chand, “dynamic” data, for example, current I/O data from I/O table 42 which changes rapidly with operation of the control device 16, network data from the network interface 55 including port numbers, packet counts, and the like as well as actual received packets, and processor data from the processor 44, for example, processor utilization percentage, processor fault flags and the like. Again this data may be linked with a timestamp 79, a digital signature 80, a device identification number 71, and/or a changing random code 83 to provide security in the transmission of a dynamic operating thumbprint 70′, [0110] This dynamic operating thumbprint 70′ cannot be easily compared against a static stored thumbprint but may nevertheless be compared against rules that, for example, establish ranges of values within which the operating thumbprint 70′ or the underlying data should vary, or correlations between values of the underlying data that can be used to detect a deviation from the normal pattern and excursions of these dynamic values. In this case, the stored thumbprint 100 described above may be replaced by more sophisticated dynamic signatures to otherwise provide the detection of mismatches used as has been described above, [0111] At times, the rules of the dynamic stored thumbprints 100′ may be allowed to evolve within certain ranges so as to eliminate false positives caused by natural evolution of the state of the control system. 
This evolution may be provided, for example, by using historical data to create new training sets that are used to constantly update the dynamic stored thumbprints 100′, [0113]) [Examiner interprets the system's identification of individual registers (via the I/O table and hardware configuration registers) during the dynamic operating thumbprint process, prediction of their future values and ranges, cross-referencing of the predicted registers against the stored registers in security table 92, determination of statistical correlations between the register values based on historical operational data, identification of which registers influence each other's states, and use of the correlations for anomaly detection as the recited correlation between predicted register numbers and anomaly-defined register numbers]. The rationale for the motivation to combine applies as in claim 1.
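For illustration only (hypothetical, not from any applied reference), the correlation reading above can be sketched as flagging a predicted register when its historical values are highly correlated with those of a register already tied to an anomalous state; the register numbers, history values, and the 0.9 threshold are assumptions:

```python
# Hypothetical sketch: flag a predicted register whose historical values are
# highly correlated (Pearson r) with a register already on the blacklist.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# historical samples: register 7 tracks blacklisted register 50 almost linearly
history = {7: [1.0, 2.1, 2.9, 4.2], 50: [10.0, 20.0, 30.0, 40.0]}

def highly_correlated(predicted_reg, blacklisted_reg, threshold=0.9):
    return pearson(history[predicted_reg], history[blacklisted_reg]) >= threshold

print(highly_correlated(7, 50))  # True: near-linear relationship
```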
Regarding claim 7, Kroyzer, Chand, Umemoto, and Kaderábek further teach the anomaly detection system of claim 1, wherein the processor calculates, in the simulation, a time taken for the control system to enter the anomalous state (Umemoto, The equipment abnormality estimation section 140 may also estimate the abnormality of the equipment 10 using other conditions, such as a decreasing trend in the soundness indicator. This allows the equipment abnormality estimation section 140 to run a pre-simulation in a situation where an abnormality may occur in the future, and to switch the control method of the equipment 10, [0065] In S430, the simulation abnormality estimation section 165 is configured to estimate the future abnormality of the equipment 10 based on the simulation result by the simulation section 160 for each of the plurality of candidate control methods, that is, the measurement data of the equipment 10 assumed in the future calculated by the simulation. The simulation abnormality estimation section 165 may estimate the future abnormality of the equipment 10 using the same method as the equipment abnormality estimation section 140. The simulation abnormality estimation section 165 according to the present embodiment is configured to estimate the future abnormality of the equipment 10 using the estimation model stored in the estimation model DB 135. Herein, when the simulation section 160 performs a dynamic simulation, the simulation abnormality estimation section 165 may estimate the indicator value of the abnormality of the equipment 10 at each time point in the future, [0075] The candidate selection section 175 may select a plurality of candidate control methods with a higher priority for those with a higher soundness indicator value. 
Also, if the simulation section 160 is configured to perform a dynamic simulation, the candidate selection section 175 may select, with a higher priority, the one among the plurality of candidate control methods for which the soundness indicator value becomes greater than the threshold value earlier, [0076] simulation section 160 is configured to calculate, as the future state of the equipment 10, that the production volume of the equipment 10 is 200 per day in case 1, 50 per day in case 2, and 100 per day in case 3, and that the equipment maintenance period of the equipment 10 is 4 days in case 1, 40 days in case 2, and 15 days in case 3, by a simulation, [0086]) [Examiner interprets the system's tracking of the future indicator over time against a fixed normal/abnormal threshold, and its prioritization of options based on when the indicator crosses the threshold (earlier/later), as the limitation above].
Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teachings of Kroyzer and Chand to include the concept of calculating, in the simulation, a time taken for the control system to enter the anomalous state, as taught by Umemoto, for the purpose of simulating a future state of the equipment when controlling the equipment by each of a plurality of candidate control methods according to an abnormality estimation result of the equipment and estimating a future abnormality of the equipment based on a future state of the equipment for each of the plurality of candidate control methods [Umemoto: 0006], and, when performing a dynamic simulation, selecting, with a higher priority, the one among the plurality of candidate control methods for which the soundness indicator value becomes greater than the threshold value earlier [Umemoto: 0076].
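For illustration only (hypothetical, not from Umemoto or the claims), calculating in a simulation the time taken to enter the anomalous state can be sketched as stepping a soundness indicator forward until it crosses a threshold; the linear decay model, decay rates, and all names are assumptions:

```python
# Minimal illustrative sketch: simulate a soundness indicator forward in time
# and report the first step at which it crosses the anomaly threshold.
# The linear-decay model and all numeric values are hypothetical.

def time_to_anomaly(soundness, decay_per_step, threshold, max_steps=1000):
    """Return the first simulation step at which soundness falls below the
    anomaly threshold, or None if it never does within max_steps."""
    for step in range(1, max_steps + 1):
        soundness -= decay_per_step
        if soundness < threshold:
            return step
    return None

# candidate control methods degrade the indicator at different rates;
# the method that crosses the threshold latest would be preferred
candidates = {"case 1": 0.5, "case 2": 0.1, "case 3": 0.25}
times = {name: time_to_anomaly(10.0, d, threshold=2.0) for name, d in candidates.items()}
print(times)  # "case 2" takes the longest to reach the anomalous state
```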
Regarding claim 8, Kroyzer, Chand, Umemoto, and Kaderábek further teach the anomaly detection system of claim 1, wherein the processor defines the register number related to the anomalous state of the control system and the anomaly range, for each type of anomalous state of the control system (Kroyzer, The predetermined anomaly may be unauthorized access of the industrial control system by a third party. The third party may operate control devices of the industrial process plant under abnormal conditions, and send information to the industrial control system simulating measurements of operational parameters operating under normal condition, [0096] the system may be manually placed in a mode where the anomaly detections are automatically rejected when a special operating or non-operating mode is implemented… (i.e., mode aware non production ranges) [0104] the anomaly detection system or module detects a deviation when a component in a control network of the industrial control system has been taken over by an attacker or has been changed by a user without permission, the anomaly detection system or module comprises a device-based intrusion detection system, [0120-0121] the deviation is due to at least one of spoofing a master, spoofing a remote terminal unit, and denial of service, [0125] the controlling comprises indicating an anomaly when a difference between the compared values is greater than a predefined threshold. the controlling comprises taking corrective action in response to the indicated anomaly, [0051-0052]) [Examiner interprets the system's handling of different attack types, such as spoofing a master, DoS, and unauthorized access, with distinct criteria/thresholds (ranges) and different outputs for different anomaly classes and modes, as the limitation above]
Although Kroyzer teaches creating and updating a range or threshold for operational parameters of SCADA data, such as tags (i.e., from SCADA 106, PLC 136, or DCS 110), and comparing the current parameters to a predetermined range or threshold to detect the anomalous state, Kroyzer does not explicitly teach:
defines the register number related to the anomalous state of the control system and the anomaly range
However, Chand teaches:
defines the register number related to the anomalous state of the control system and the anomaly range (Chand, The security program executes to receive a dynamic signature from a given control device through the network port, decrypt the dynamic signature, analyze the dynamic signature against rules establishing a multi-value range of acceptable dynamic signature values, and provide an output indicating whether the received dynamic signature is outside the multi-value range of acceptable dynamic signature values, [0012] The dynamic signature may include multiple time varying quantities wherein the rules establish multi-value ranges for each quantity, detect tampering from dynamic variables which do not conform to a static thumbprint through the use of ranges encompassing multiple values of the dynamic signature, [0018-0019] the operating thumbprint 70 for each mode 72 of the thumbprint table 62 designates a specific set of thumbprint source data 74, for example, the control program 46, the firmware operating system 48, the configuration register 50, and environmental data held in various components of the control device 16 including the wire connection states of the connection management circuit 40, its address and/or location in the factory environment (for example held in communication or memory modules), operating temperature and the like from distributed internal sensors, [0066] This digital operating thumbprint 70 is then transmitted to the remote security-monitoring device where it is compared with a corresponding stored thumbprint to establish within a reasonable probability according to the digest scheme that the source data 74 of the control device 16 has not been modified or tampered with, [0067]) [Examiner interprets the thumbprint table 62, which specifies which registers (i.e., configuration register 50) are to be monitored and defines the rules/ranges for each monitored value, as the limitation above]. The rationale for the motivation to combine applies as in claim 1.
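For illustration only (hypothetical, reflecting the examiner's reading of a per-anomaly-type table of monitored registers and ranges; all anomaly types, register numbers, and ranges are assumed):

```python
# Illustrative sketch: define, per anomaly type, the monitored register number
# and the value range that signals that anomaly. All entries are hypothetical.
anomaly_definitions = {
    "spoofed_master":    {"register": 50, "range": (0, 10)},
    "denial_of_service": {"register": 92, "range": (500, 1000)},
}

def matches_anomaly(anomaly_type, reg_num, value):
    """Return True when the given register/value pair falls within the
    anomaly range defined for the given anomaly type."""
    definition = anomaly_definitions[anomaly_type]
    low, high = definition["range"]
    return reg_num == definition["register"] and low <= value <= high

print(matches_anomaly("spoofed_master", 50, 3))      # True
print(matches_anomaly("denial_of_service", 92, 600)) # True
print(matches_anomaly("spoofed_master", 92, 3))      # False: wrong register
```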
Regarding claims 9 and 10, claims 9 and 10 recite commensurate subject matter as claim 1. Therefore, they are rejected for the same reasons, except for the following additional elements:
An anomaly detection method executed by an anomaly detection system (Kroyzer, methods, and devices for detecting anomalies in operating parameters of an industrial control system, [0002])
A non-transitory computer-readable recording medium having recorded thereon a computer program for causing a computer to execute the anomaly detection method (Kroyzer, a non-transitory computer-readable data medium encoded with a computer program that comprises computer code for applying the above method, [0099])
Regarding claim 11, Kroyzer, Chand, Umemoto, and Kaderábek further teach the anomaly detection system of claim 1, wherein the processor
calculates an importance level based on
a level of impact caused when the control system enters the anomalous state (Umemoto, the “abnormality” of the equipment 10 is an indicator showing the soundness of the equipment 10 (“soundness indicators”), and a value that increases as the soundness of the equipment 10 becomes higher (that is, the possibility of being normal becomes higher), and decreases as the soundness of the equipment 10 becomes lower (that is, the possibility of being abnormal becomes higher), the “abnormality” may be a value indicating “the degree of abnormality” that increases as the possibility of being abnormal becomes higher, and decreases as the possibility of being normal becomes higher for the equipment 10. Also, the abnormality may be a flag value showing a value indicating normality when estimated as normal (for example, “0”), and a value indicating abnormality when estimated as abnormal (for example, “1”) [0042] The candidate selection section 175 may select a plurality of candidate control methods with a higher priority for those with a higher soundness indicator value. Also, if the simulation section 160 is configured to perform a dynamic simulation, the candidate selection section 175 may select, with a higher priority, the one among the plurality of candidate control methods for which the soundness indicator value becomes greater than the threshold value earlier, [0076]) [Examiner interprets the system's quantification of abnormality severity (the soundness indicator), consideration of operational consequences such as production volume and maintenance period, and use of these values to prioritize control methods as the limitation above] and
a transition time taken to enter the anomalous state (Umemoto, The equipment abnormality estimation section 140 may also estimate the abnormality of the equipment 10 using other conditions, such as a decreasing trend in the soundness indicator. This allows the equipment abnormality estimation section 140 to run a pre-simulation in a situation where an abnormality may occur in the future, and to switch the control method of the equipment 10, [0065] In S430, the simulation abnormality estimation section 165 is configured to estimate the future abnormality of the equipment 10 based on the simulation result by the simulation section 160 for each of the plurality of candidate control methods, that is, the measurement data of the equipment 10 assumed in the future calculated by the simulation. The simulation abnormality estimation section 165 may estimate the future abnormality of the equipment 10 using the same method as the equipment abnormality estimation section 140. The simulation abnormality estimation section 165 is configured to estimate the future abnormality of the equipment 10 using the estimation model stored in the estimation model DB 135. Herein, when the simulation section 160 performs a dynamic simulation, the simulation abnormality estimation section 165 may estimate the indicator value of the abnormality of the equipment 10 at each time point in the future, [0075] The candidate selection section 175 may select a plurality of candidate control methods with a higher priority for those with a higher soundness indicator value. 
Also, if the simulation section 160 is configured to perform a dynamic simulation, the candidate selection section 175 may select, with a higher priority, the one among the plurality of candidate control methods for which the soundness indicator value becomes greater than the threshold value earlier, [0076] simulation section 160 is configured to calculate, as the future state of the equipment 10, that the production volume of the equipment 10 is 200 per day in case 1, 50 per day in case 2, and 100 per day in case 3, and that the equipment maintenance period of the equipment 10 is 4 days in case 1, 40 days in case 2, and 15 days in case 3, by a simulation, [0086]) [Examiner interprets the system's evaluation of how the abnormality indicator changes over time, and its prioritization based on how quickly the threshold is crossed (time to threshold), as the limitation above].
assigns the importance level to the register value included in the blacklist (Umemoto, the candidate DB 155 may store the set value of each control parameter of the equipment 10 and the measurement data of the equipment 10 in the current situation, [0047] The simulation abnormality estimation section 165 is configured to estimate the future abnormality of the equipment 10 based on the future state of the equipment 10 simulated by the simulation section 160 for each of the plurality of candidate control methods stored in the candidate DB 155, [0049] the candidate selection section 175 may score each candidate control method by weighting the soundness indicator value, the achievement rate of the operation target, and other selection indicators, and select the control method with the highest score, [0077]) [Examiner interprets the system's assignment of a score/priority to candidate control methods (i.e., register values), storage of the abnormality results in a database, and use of weighted scoring as the limitation above]
Although Umemoto teaches assigning a priority and storing it in the database, Umemoto does not appear to explicitly teach:
assigns the importance level to the register value included in the blacklist
However, Kaderábek teaches:
assigns the importance level to the value included in the blacklist (Kaderábek, Each feature value bin also has a score, determined in this example by logistic regression, which when added to the scores for feature value bins for other features generates a final score for a particular network address… the feature bin score is 139, which suggests that seeing attacks in more than five honeypots from the same network address does strongly suggest that the network address is likely to be involved in future attacks, [0035] The features are extracted or derived from the data at 508, and the features for each network address are sorted into bins at 508. The feature scores associated with each bin for each feature for each of the network addresses in the current data set are summed at 510 to generate a total score for each network address, indicative of the predicted likelihood of each network address being the source of a future attack. The total score is compared to a threshold score at 512, such that if the total score meets or exceeds the threshold the associated network address is included in the blacklist but if the total score does not meet the threshold the associated network address is not included in the blacklist, [0042]) [Examiner interprets the system's calculation of a score (i.e., the importance level) tied to a network address (i.e., a register value), and its use of that score to determine whether to include the address in the blacklist, as the limitation above].
Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teachings of Kroyzer, Chand, and Umemoto to include the concept of assigning the importance level to the value included in the blacklist by the simulator, as taught by Kaderábek, for the purpose of analyzing the attack data to generate a predicted likelihood of future attacks from network addresses in the activity data, and constructing a network address blacklist including network addresses predicted likely to be a source of a future attack [Kaderábek: 0007].
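For illustration only (hypothetical, combining the readings above rather than reproducing any reference): an importance level derived from impact and transition time can gate blacklist membership by comparison to a threshold, in the spirit of Kaderábek's summed-score comparison. The weighting formula, register values, and the threshold of 100 are all assumptions:

```python
# Hypothetical sketch: compute an importance level from impact and transition
# time, and include a register value in the blacklist only when the level
# meets a threshold. All weights and values are illustrative assumptions.

def importance_level(impact, transition_steps):
    # higher impact and a faster transition into the anomalous state
    # both raise the importance level
    return impact * 10 + max(0, 50 - transition_steps)

def build_blacklist(candidates, threshold=100):
    """candidates: {register_value: (impact, transition_steps)}.
    Keep entries whose importance level meets the threshold, storing the level."""
    result = {}
    for value, (impact, steps) in candidates.items():
        level = importance_level(impact, steps)
        if level >= threshold:
            result[value] = level
    return result

candidates = {0xFF: (9, 5), 0x10: (2, 40), 0x7A: (6, 12)}
print(build_blacklist(candidates))  # only 0xFF scores 135, above the threshold
```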
Regarding claim 12, claim 12 recites commensurate subject matter as claim 11. Therefore, it is rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20150347217 A1: “relate, in general, to processing environments, and in particular, to detecting anomalies in such environments”
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIKSHYA POUDEL whose telephone number is (703)756-1540. The examiner can normally be reached 7:30 AM - 5PM Mon- Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY can be reached at (571)272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.N.P./Examiner, Art Unit 2436 /SHEWAYE GELAGAY/Supervisory Patent Examiner, Art Unit 2436