Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 09/11/2025 have been entered. Claims 12, 16, and 19-21 remain pending in the application.
Applicant’s amendments and arguments with respect to the rejections of claims 12-22 under 35 U.S.C. 101, filed 04/11/2025, have been considered but are not persuasive. Therefore, the previous rejections as set forth in the previous Office action are maintained.
The applicant argues that the amended claims are directed to a technological cybersecurity solution and cannot reasonably be characterized as reciting a mental process. Applicant maintains that each step of the claim requires computation, real-time processing, and machine-learning-based operations that exceed human cognitive capabilities. Applicant asserts that the examiner’s previous determination that the claims recite mental processes is no longer valid in view of the amended claim language.
Under Step 2A, Prong One, applicant contends that the claims do not recite any mental process because none of the steps can be performed in the human mind. The applicant presents the following assertions:
Providing the digital representation: applicant argues that a human mind cannot generate a digital representation of a device’s current operational state, particularly while the device is actively operating in real time.
Characterizing using an unsupervised machine learning model: applicant states that unsupervised machine learning techniques by definition exclude human involvement and cannot be performed mentally, making this step inherently beyond human capability.
Dynamically grouping devices: applicant asserts that forming groups based on similarity of current operational states requires real-time, on-the-fly adjustments across many devices, which a human mind cannot handle.
Ascertaining the outlier device by comparing vectors: applicant contends that the behavior vectors are complex machine-generated mathematical constructs, and meaningful comparison of these vectors to determine an outlier cannot be performed mentally, especially in a cybersecurity environment requiring quick detection.
Transmitting data identifying an outlier to an SOC: applicant argues that this step inherently requires generating and transmitting a communication signal over computer infrastructure, which cannot be performed in the human mind.
Applicant further argues that, even if a judicial exception were present, the claim satisfies Step 2A, Prong Two because it is integrated into a practical application. Applicant asserts that the claim provides a specific improvement to the functioning of an automotive cybersecurity system, including improved adaptability to baseline shifts (such as software updates), reduced false positives, and enhanced responsiveness to anomalous behavior. Applicant characterizes the claimed sequence of machine learning characterization, dynamic grouping, and relative comparison as an unconventional ordered combination that solves concrete technical issues faced by conventional intrusion-detection systems.
With respect to Step 2B, applicant contends that the ordered combination of elements constitutes an inventive concept. Applicant argues that the ML-based characterization, dynamic state-based grouping, and outlier comparison collectively represent a non-conventional approach that improves cybersecurity system performance. Applicant further asserts that the examiner has provided no evidence under Berkheimer to show that the additional elements of the combination are well-understood, routine, or conventional. Applicant also points to specification disclosure describing the benefits of transmitting outlier-related data to a security operations center or control unit as evidence of a technical improvement. Applicant concludes that the claims, viewed either as not reciting an abstract idea or as containing an inventive concept, are patent-eligible.
The examiner respectfully disagrees. Applicant’s argument that none of the claim limitations can be performed in the human mind is not persuasive. The amended claim continues to recite two limitations that squarely fall within the “mental process” category under MPEP 2106.04(a), regardless of applicant’s attempt to characterize the system as technologically complex.
Dynamically grouping devices based on similarity of their operational state. The “grouping step” remains a mental process. A person can mentally observe characteristics of a plurality of items and group them according to similarity. The mere inclusion of the word “dynamically” does not remove this step from the mental process category; it is simply an adverb describing the timing of the grouping. Applicant’s attempt to import “real-time” performance, “fleet-wide scale”, and “computational intensity” is unavailing because none of these features is recited in the claim in any concrete manner. Under the broadest reasonable interpretation, the step still recites evaluating characteristics and grouping based on similarity, which is a mental process.
Ascertaining the outlier device by comparing the behavior characterization vector of each device to an average vector. The ascertaining-by-comparing step is also a mental process. The claim requires comparing one set of values to another set of values to determine which item deviates from the average. Comparing values, determining whether something is above or below an average, and deciding whether that deviation qualifies as an “outlier” are all forms of abstract mental judgment and mathematical evaluation. The assertion that a human “cannot meaningfully evaluate machine learning vectors” is unavailing. Under the broadest reasonable interpretation, the claim does not require any particular vector dimensionality, structure, or specific ML model. The claim merely requires comparing values and determining which device deviates from the group average, an operation fully capable of mental performance. Therefore, these two limitations properly fall within the judicial exception of mental processes.
Applicant further argues that the remaining limitations transform the abstract idea into a practical application; this is incorrect. The remaining claim elements are properly treated as additional elements under MPEP 2106.05, and none integrates the judicial exception into a practical application.
“by at least one processor”: This is merely a generic computer component used to apply the mental process. The phrase is nothing more than a drafting technique attempting to force the claim into the computer field, which does not integrate the exception into a practical application.
“providing a digital representation”: This is insignificant extra-solution activity of a well-known technique as identified in MPEP 2106.05(g) and well-understood, routine, conventional activity as identified in MPEP 2106.05(d). The previous Office action already cited Berkheimer evidence from the NPL “Digital Twins Grow Up” by Greengard et al., which indicates that the concept of a digital representation, as introduced by Michael Grieves, is well-understood, routine, conventional activity. Accordingly, this step is conventional and non-transformative.
“characterizing ... by applying an unsupervised machine learning model”: This recites nothing more than “apply a machine learning model to data”, which MPEP 2106.05(f) identifies as a mere instruction to apply an exception. The claim does not provide any details about the model’s structure, training, architecture, or improvement to computer function. Therefore, this limitation does not meaningfully limit the abstract idea.
“transmitting ... data identifying the outlier device to a security operations center”: This step recites post-solution activity and is a generic communication operation. Transmitting data over a network is routinely held to be well-understood, routine, conventional activity. As explained in MPEP 2106.05(g) and 2106.05(d), such activities do not integrate an exception into a practical application.
Applicant repeatedly insists that these elements create a “technological improvement”, yet fails to identify any improvement actually recited in the claim. Attorney argument cannot supply claim limitations. The claim language itself merely performs mental evaluation using generic computing components.
With regard to Step 2B, applicant asserts that the ordered combination constitutes an inventive concept. This argument is not persuasive. The additional claim elements (generic processors, data gathering steps, application of a machine learning model to data, and transmitting results) are merely conventional computer functions performed in their ordinary capacity and therefore cannot supply an inventive concept. Applicant’s assertion that the combination of these conventional operations forms an unconventional “workflow” is unsupported because the claim does not recite any non-routine configuration of computer components, any particular machine learning algorithm, any improvement to the processor, memory, or network, or any specific technical mechanism that departs from ordinary computer operation. Instead, the elements operate together only to implement the underlying processes of grouping and comparing information, using generic computing infrastructure in its typical manner. Accordingly, the ordered combination does not transform the judicial exception into a patent-eligible application and does not amount to significantly more than the abstract idea.
Applicant further argues that the examiner’s reliance on Berkheimer evidence is misplaced. This argument is not persuasive: the examiner has provided evidence, both through identification of routine computer operations and through citation to well-established MPEP guidance, that these elements are conventional.
Because the claim still recites the abstract ideas of the mental processes of grouping based on similarity and comparing values to determine an outlier, and because the additional claim elements merely recite generic computer usage, insignificant data gathering, and routine post-solution activity, the claim does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception. Accordingly, the rejections under 35 U.S.C. 101 are maintained.
Applicant’s amendments and arguments with respect to the rejections of claims 12-22 under 35 U.S.C. 103, filed 04/11/2025, have been considered but are not persuasive. Therefore, the previous rejections as set forth in the previous Office action are maintained.
The applicant argues that the Patent Office fails to provide a proper reason to combine the cited references to arrive at the claimed invention. Applicant contends that Dodson teaches away because Dodson identifies outliers in order to remove them from a dataset to improve the quality of clustering "normal" data, whereas the claimed invention uses the group of "normal" data to define the outlier. Applicant concludes that a person of ordinary skill in the art would not be motivated to use Dodson's teaching to achieve the claimed result.
Applicant further argues that, even if combined, Dodson and Lavid do not disclose the invention as recited in the amended claims. As amended, the claims recite that the grouping is dynamic and based on the current operational state, directly traversing the Patent Office's alleged conflation of the claimed grouping with the static grouping in the prior art. The amended claims detail a specific workflow: the behavior is first characterized to generate a vector, and this vector is then used for a relative comparison within the dynamically formed peer group. The amended claims also specify the use of unsupervised machine learning, highlighting its adaptive nature. Applicant asserts that, even if combined, Dodson/Lavid would at best suggest statically grouping devices by type and then analyzing their behavior; the combination would not teach or suggest the claimed feature of dynamically grouping devices based on their current, ML-characterized operational state. Applicant concludes that no reference teaches or suggests the dynamic, state-based grouping as claimed, nor the specific workflow in which this grouping is based on a preceding ML characterization. Thus, applicant maintains, the claims are not rendered obvious by the applied references, and the examiner's analysis of the original claims does not apply to the amended claims.
The examiner respectfully disagrees. Dodson does not teach away from the amended claim. Dodson’s teaching of identifying outliers in data instances that indicate anomalous behavior is entirely consistent with the claimed use of a group of “normal” data to define an outlier. Applicant’s argument mischaracterizes Dodson’s disclosure. Dodson evaluates data instance attributes (e.g., CPU usage, memory usage, bandwidth) to detect deviations from expected behavior, precisely the analytical framework relied upon by the amended claims, which analyze the behavior of the digital representation of the physical device. Thus, Dodson does not contradict, discourage, or criticize the claimed approach and therefore cannot be considered to teach away. Dodson discloses that each data instance is comprised of at least one principle value that represents an aspect or object of the computing environment over a period of time, which also represents its behavior for anomaly detection, at paragraph 31 “In various embodiments, each data instance is comprised of at least one principle value that represents an aspect or object of the computing environment ... For example, if prior anomalies in increased CPU usage in a cloud were linked to malicious behavior, the principle values could include CPU usage aspects” and paragraph 68 “The data instances can each include a plurality of distinct attributes that are indicative of an entity or object. For example, in a computing environment, such as a cloud, tenants of the cloud may use various resources such as virtual machines, computing resources such as CPU, memory, applications, and so forth.” The data instance is thus analogous to the claimed digital representation that represents a current operational state of the respective physical device.
Furthermore, applicant’s contention that the prior art fails to disclose the amended workflow is not persuasive. Claim 12 has been amended to require that the characterizing step apply an unsupervised ML model to generate a behavior characterization vector, and the examiner has reassessed the cited references in view of the revised claim language. Dodson discloses applying unsupervised machine learning to data instances to generate feature vectors, which corresponds to the claimed behavior characterization vector, at paragraph 29 “The use of unsupervised machine learning in various embodiments allows the system 105 to evaluate only the data instances available and examine these data instances for anomalous behavior in a self-referential manner”, and paragraph 76 “In some embodiments, the systems and methods disclosed herein create feature vectors from the data instances and attributes identified therein”. Accordingly, Dodson is relied upon for this limitation in the present rejection.
Moreover, applicant’s argument that the amended claims recite a “specific workflow” not found in the references is unavailing. Dodson already teaches extracting behavioral vectors via unsupervised machine learning and grouping those vectors based on similarity (paragraph 88 “For example, data instances of multiple tenants that have approximately similar CPU, memory, bandwidth, and other related resource values may be grouped together. In some instances, the features used to group the data instances are user selected. For example, the user may only want to group together data instances that have similar CPU and memory resource usage values. These selected or commonly shared features are referred to as clustering type(s)”). Accordingly, Dodson teaches the amended limitation of dynamically grouping devices based on their similar operational state. Furthermore, Dodson’s teaching also follows the workflow recited in the amended claim. Dodson first discloses evaluating data instances using unsupervised machine learning (as recited above), wherein the data instances can be configured as feature vectors as recited in paragraph 76. The data instances, now represented as feature vectors, are then grouped based on their clustering type as recited in paragraph 84 “the method can also include a step 508 of grouping two or more of the data instances into one or more groups based on correspondence between the multi-dimensional feature vectors and a clustering type. For example, each feature vector component can be compared and used to group ... By way of non-limiting example, if the clustering type is “tenants using a specified amount of CPU capacity”, clustered tenants can be identified based on their shared feature vectors that fall within this clustering type.” Accordingly, Dodson not only teaches the amended limitation but also teaches the same workflow as the amended claim.
The examiner, however, respectfully agrees that Dodson does not teach the amended limitation of vector comparison within the dynamically formed peer group.
However, upon further consideration, new ground(s) of rejection have been raised (see below).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 12, 16, 19-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 12,
Step 1:
Claim 12 recites a method, one of the four statutory categories of patentable subject matter.
Step 2A, Prong I:
Claim 12 further recites the limitations of:
“dynamically grouping, ..., the physical devices into at least one peer group based on a similarity of their current operational state as represented by their respective digital representations, wherein devices in the at least one peer group operate under similar operating conditions”. The limitation recites an abstract idea of a mental process. A person can mentally group a plurality of devices into various groups based on their similarity. The human mind is capable of evaluating similarity as well as grouping individuals that share similar traits. Therefore, the dynamic grouping step and the similarity evaluation step are a mental process.
“ascertaining, ..., at least one outlier device by comparing the behavior characterization vector of each device within the at least one peer group to an average behavior of the at least one peer group”. The limitation recites an abstract idea of a mental process. A person can mentally compare the vector representing the behavior of a device to an average vector representing the average behavior within a group to determine whether the device is an outlier. Vector representations are just numbers, and the human mind is capable of mentally comparing numbers. A person can mentally determine whether a number is greater than or less than the average number representing the group of devices, thereby determining that the compared device is an outlier device. The comparison and ascertaining steps are a mental process.
Step 2A, Prong II:
Claim 12 further recites the limitations of:
“... by at least one processor ...” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application.
“providing, ..., at least one digital representation for each of the physical devices, wherein the digital representations represent a current operational state of the respective physical devices;” This additional element recites insignificant extra-solution activity of a well-known technique as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
“for each digital representation, characterizing, by the at least one processor, a behavior of the digital representation by applying a respective unsupervised machine learning model to data of the digital representation to generate a behavior characterization vector” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application. The limitation recites the application of a conventional machine learning technique to process data. In particular, the limitation recites applying an unsupervised machine learning model to data to generate a vector output, without providing the structure of the unsupervised machine learning model, any unconventional unsupervised learning technique, or any improvement to the functionality of a computer or hardware.
“transmitting, by the at least one processor, data identifying the at least one outlier device to a security operations center to trigger a security alert, thereby improving a functioning of the automotive cybersecurity system by providing an adaptive and context-aware intrusion detection” This additional element recites insignificant extra-solution activity of a well-known technique as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
Step 2B:
When considered individually or in combination, the additional limitations and elements of claim 12 do not amount to significantly more than the judicial exception, for the same reasons discussed above as to why the additional limitations do not integrate the abstract idea into a practical application. The additional elements outlined in Step 2A, performing their functions as designed, simply accomplish execution of the abstract ideas.
The additional element “... by at least one processor” recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not amount to significantly more than the judicial exception for the same reasons discussed above as to why the additional limitations do not integrate the abstract idea into a practical application.
The additional element “providing, ..., at least one digital representation for each of the physical devices, wherein the digital representations represent a current operational state of the respective physical devices” further recites well-understood, routine, conventional activity as identified in MPEP 2106.05(d)(II). The NPL “Digital Twins Grow Up” by Greengard et al. quotes Michael Grieves, chief scientist for advanced manufacturing at the Florida Institute of Technology, who introduced the digital representation concept nearly two decades ago: “We have reached a point where it's possible to have all the information embedded in a physical object reside within a digital representation”. Greengard thus quotes Grieves stating that digital representation has become possible, and Grieves introduced the concept of digital representation for devices nearly two decades ago, suggesting that providing a digital representation for devices is well-understood, routine, conventional activity. Accordingly, a conclusion that the providing step is well-understood, routine, conventional activity is supported under Berkheimer option II.
The additional element “for each digital representation, characterizing, by the at least one processor, a behavior of the digital representation by applying a respective unsupervised machine learning model to data of the digital representation to generate a behavior characterization vector” recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not amount to significantly more than the judicial exception for the same reasons discussed above as to why the additional limitations do not integrate the abstract idea into a practical application.
The additional element “transmitting, by the at least one processor, data identifying the at least one outlier device to a security operations center to trigger a security alert, thereby improving a functioning of the automotive cybersecurity system by providing an adaptive and context-aware intrusion detection” further recites well-understood, routine, conventional activity as identified in MPEP 2106.05(d). The court decisions cited in MPEP 2106.05(d)(II)(i) indicate that receiving or transmitting data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that the transmitting step is well-understood, routine, conventional activity is supported under Berkheimer option II.
In conclusion, from the above, the elements considered to be a mental process, the elements reciting a mere instruction to apply an exception as identified in MPEP 2106.05(f), the elements reciting well-understood, routine, conventional activity as identified in MPEP 2106.05(d), and the elements reciting insignificant extra-solution activity of a well-known technique as identified in MPEP 2106.05(g) are carried over and do not provide significantly more than the abstract idea. Looking at the limitations in combination and the claim as a whole does not change this conclusion, and the claim is ineligible.
Therefore, additional limitations of claim 12 do not amount to significantly more than the judicial exception. Thus, claim 12 recites abstract ideas with additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception.
Therefore, claim 12 is not patent eligible.
Regarding claim 16, which depends on claim 12: the rejection of claim 12 is incorporated.
Claim 16 recites the element “identifying the at least one outlier as an anomaly”, which further specifies the mental process of claim 12. Identifying the at least one outlier as an anomaly is considered to be a mental process: a person can mentally determine that the identified condition is an anomaly.
Thus, claim 16 recites abstract ideas at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 16 is not patent eligible.
Regarding claim 19, which depends on claim 12: the rejection of claim 12 is incorporated.
Claim 19 recites the element “the machine learning method is a method of an unsupervised learning type”, which recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), does not provide integration into a practical application, and does not amount to significantly more than the judicial exception.
Thus, claim 19 recites additional elements at a high level of generality, resulting in a claim that does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 19 is not patent eligible.
Regarding claim 20, which recites a device, one of the four statutory categories of patentable subject matter: the applicant is directed to the rejection of claim 12, because claim 20 comprises similar limitations to claim 12; thus the claim is rejected under the same rationale.
Regarding claim 21, which recites an article of manufacture, one of the four statutory categories of patentable subject matter.
Claim 21 recites the limitation “A non-transitory computer-readable memory medium on which are stored instructions for processing data associated with a plurality of physical devices, the instructions, when executed by a computer, causing the computer to perform”. This limitation is an additional element comprising a high-level recitation of generic computer components used as a tool, and does not provide integration into a practical application or amount to significantly more than the judicial exception.
The applicant is further directed to the rejection of claim 12, because claim 21 comprises similar limitations to claim 12, thus the claim is rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12, 16, 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Dodson et al. (US 20180316707 A1), in view of Heyrani-Nobari et al. (US 20210224918 A1), and further in view of Lim et al. (US 20190095618 A1).
Regarding claim 12,
Dodson teaches the limitation “providing, by at least one processor, at least one digital representation for each of the physical devices, wherein the digital representations represent a current operational state of the respective physical devices;” (paragraph 28 “In some embodiments, using unsupervised machine learning, the exemplary system 105 can evaluate the data instances over time to detect anomalous behavior. In general, anomalous behavior can include any deviation in the data instances as viewed over time ... In another example, a brief spike in file transfer rates between a computing device and another computing device (possibly in a foreign country) can be flagged as anomalous”, and paragraph 31 “In various embodiments, each data instance is comprised of at least one principle value that represents an aspect or object of the computing environment ... For example, if prior anomalies in increased CPU usage in a cloud were linked to malicious behavior, the principle values could include CPU usage aspects.”, paragraph 68 “The data instances can each include a plurality of distinct attributes that are indicative of an entity or object. For example, in a computing environment, such as a cloud, tenants of the cloud may use various resources such as virtual machines, computing resources such as CPU, memory, applications, and so forth.” Dodson discloses systems and methods that use unsupervised machine learning to detect anomalous activity within one or more computing environments. Within the disclosure, Dodson discloses each data instance comprised of at least one principle value that represents an aspect or object of the computing environment over a period of time, which also represents their behavior for anomaly detection. 
The data instance is analogous to the digital representation that represent a current operational state of the respective physical devices, because the data instance also represents an object of the computing environment such as CPU or memory and their working behavior (e.g., CPU usage, memory usage), wherein the working behavior such as CPU usage corresponds to the current operational state of the respective physical devices. Although Dodson describes these working behaviors (e.g., CPU usage, memory usage) as being evaluated over a period of time, such measurements inherently reflect the device’s current behavior at the time of collection. Under the broadest reasonable interpretation, a measurement of usage of the CPU or the memory taken over a period of time constitutes the device’s current working behavior, as it reflects how the device is working at that moment. Accordingly, Dodson’s data instance is analogous to the digital representation that represent a current operational state of the respective physical devices within the claim.)
Dodson teaches the limitation “for each digital representation, characterizing, by the at least one processor, a behavior of the digital representation by applying a respective unsupervised machine learning model to data of the digital representation to generate a behavior characterization vector” (paragraph 29 “The use of unsupervised machine learning in various embodiments allows the system 105 to evaluate only the data instances available and examine these data instances for anomalous behavior in a self-referential manner”, and paragraph 76 “In some embodiments, the systems and methods disclosed herein create feature vectors from the data instances and attributes identified therein.” Dodson discloses applying unsupervised machine learning to evaluate the data instances for anomalous behavior. Dodson further teaches that the system creates feature vectors from these data instances and the attributes identified therein. As recited above, the data instance is analogous to the digital representation that represents a current operational state of the respective physical devices within the claim and further comprises working behaviors (e.g., CPU usage, memory usage), such that the data instance is indicative of the behaviors of the digital representation within the claim. Accordingly, the generated feature vector of a data instance corresponds to the behavior characterization vector in the claim, and the step of generating a feature vector corresponds to the characterizing step within the claim.)
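For illustration only, the characterizing step discussed above, in which a data instance’s attributes are mapped to a behavior characterization (feature) vector, can be sketched as follows. This is a minimal sketch: the attribute names, the fixed ordering, and the scaling constants are illustrative assumptions, not details taken from Dodson or from the claims.

```python
# Minimal sketch: mapping a data instance's principle values (cf. Dodson
# paragraphs 31, 68, 76) to a fixed-order feature vector. The attribute
# names and scaling constants are illustrative assumptions only.

def to_behavior_vector(data_instance):
    """Map a data instance to a fixed-order, normalized feature vector."""
    return [
        data_instance["cpu_usage"] / 100.0,        # percent -> [0, 1]
        data_instance["memory_usage"] / 100.0,     # percent -> [0, 1]
        data_instance["bandwidth_mbps"] / 1000.0,  # Mbps, scaled down
    ]

instance = {"cpu_usage": 42.0, "memory_usage": 55.0, "bandwidth_mbps": 120.0}
vector = to_behavior_vector(instance)
```

Because every device’s vector uses the same attribute ordering, the vectors of different devices are directly comparable in later grouping and comparison steps.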
Dodson teaches the limitation “dynamically grouping, by the at least one processor, the physical devices into at least one peer group based on a similarity of their current operational state as represented by their respective digital representations, wherein devices in the at least one peer group operate under similar operating conditions;” (paragraph 71 “... For example, the system administrator may desire to understand which entities are high bandwidth users, high memory users, or other similar types. These are generally referred to herein as a clustering type. The clustering type defines the desired features that are considered when evaluating data instance in a high-order analysis”, and paragraph 88 “For example, data instances of multiple tenants that have approximately similar CPU, memory, bandwidth, and other related resource values may be grouped together. In some instances, the features used to group the data instances are user selected. For example, the user may only want to group together data instances that have similar CPU and memory resource usage values. These selected or commonly shared features are referred to as clustering type(s).” Dodson discloses clustering/grouping data instances with similar or commonly shared clustering types (e.g., CPU usage, memory usage, bandwidth usage values) together to create clustered data instances that a user may interact with. The clustering types may be user defined, and a person of ordinary skill in the art would recognize that clustering data instances based on the similarity of their clustering types is analogous to the claimed process of dynamically grouping the physical devices into at least one peer group based on a similarity of their current operational state as represented by their respective digital representations.)
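For illustration only, grouping devices by similarity of their current operational state can be sketched with a simple distance-threshold clustering. The distance metric, the threshold value, and the device identifiers are illustrative assumptions; neither Dodson nor the claims specifies this particular algorithm.

```python
import math

# Minimal sketch: grouping devices into peer groups when their behavior
# vectors are similar (cf. Dodson paragraphs 71, 88). The Euclidean metric
# and the threshold of 0.2 are illustrative assumptions only.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_devices(vectors, threshold=0.2):
    """Assign each device to the first group whose founding member is
    within `threshold`; otherwise start a new peer group."""
    groups = []  # each group is a list of (device_id, vector) pairs
    for device_id, vec in vectors.items():
        for group in groups:
            if euclidean(vec, group[0][1]) <= threshold:
                group.append((device_id, vec))
                break
        else:  # no existing group is close enough
            groups.append([(device_id, vec)])
    return groups

devices = {
    "device_a": [0.40, 0.50],
    "device_b": [0.42, 0.52],  # similar state -> joins device_a's group
    "device_c": [0.90, 0.95],  # different operating conditions -> new group
}
peer_groups = group_devices(devices)
```

Devices whose vectors are close end up in the same peer group, while a device operating under different conditions founds a group of its own.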
Dodson does not teach the limitation “ascertaining, by the at least one processor, at least one outlier device by comparing the behavior characterization vector of each device within the at least one peer group to an average behavior of the at least one peer group”. However, Heyrani-Nobari teaches this limitation (paragraph 5 “Entities may be clustered into entity clusters (e.g., member peer cluster, provider peer cluster, provider specialty cluster, and/or the like). Behavior signal values may be generated by comparing the entity vector corresponding to an entity with a cluster vector corresponding to an entity cluster with which the entity is affiliated and/or associated.”, and paragraph 74 “A cluster vector 704 (e.g., 704A, 704B) may be generated by aggregating the entity vectors 506 of the entities in the cluster ... aggregating the entity vectors 506 to generate the corresponding cluster vector 704 comprises performing an average of the entity vectors 506.” Heyrani-Nobari discloses a method and system to determine behavior signals of an entity within a cluster with which the entity is associated. Within the disclosure, Heyrani-Nobari discloses comparing the entity vector with a cluster vector corresponding to an entity cluster with which the entity is affiliated to determine behavior signal values of the entity. In addition, Heyrani-Nobari explains that the cluster vector may be generated by aggregating the entity vectors of the entities in the cluster, wherein the aggregating comprises performing an average of the entity vectors. Heyrani-Nobari therefore discloses comparing an entity vector to a cluster vector generated by performing averaging.
Accordingly, Heyrani-Nobari’s teaching of comparing an entity vector with a cluster vector generated through the averaging process to determine behavior signals of the entity is analogous to the claimed ascertaining of at least one outlier device by comparing the behavior characterization vector of each device within the at least one peer group to an average behavior of the at least one peer group, wherein the entity vector corresponds to the behavior characterization vector of a device and the aggregated (average) cluster vector corresponds to the claimed average behavior of the at least one peer group.)
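For illustration only, the comparison taught by Heyrani-Nobari, in which the group’s average behavior is the mean of the member vectors and a member deviating from that average is identified, can be sketched as follows. The deviation threshold and the example values are illustrative assumptions.

```python
import math

# Minimal sketch: averaging member vectors into a cluster (group) vector
# (cf. Heyrani-Nobari paragraphs 5, 74) and flagging members whose distance
# from that average exceeds a threshold. The threshold of 0.3 is an
# illustrative assumption only.

def average_vector(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def find_outliers(group, threshold=0.3):
    """Return ids of devices whose vector deviates from the group average."""
    centroid = average_vector([vec for _, vec in group])
    return [
        device_id
        for device_id, vec in group
        if math.dist(vec, centroid) > threshold
    ]

peer_group = [
    ("device_a", [0.40, 0.50]),
    ("device_b", [0.42, 0.52]),
    ("device_c", [0.44, 0.48]),
    ("device_d", [0.95, 0.10]),  # deviates sharply from its peers
]
outliers = find_outliers(peer_group)
```

Using the group average as the reference point means each device is judged against its peers collectively rather than against any single device.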
Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the teaching of systems and methods that use unsupervised machine learning to detect anomalous activity within a computing environment by Dodson with the teaching of comparing an entity represented as a vector with a cluster vector aggregated by averaging entity vectors by Heyrani-Nobari (paragraph 77 “one or more behavior signals corresponding to one or more entities and/or one or more clusters may be analyzed to identify behavior patterns, determine suggestions for provider improvement, identify anomalous behavior patterns, determine future behavior predictions, and/or the like.”, and paragraph 79 “In an example embodiment, the analysis computing entity 65 may analyze one or more behavior signals, cluster vectors, entity vectors, claims vectors, and/or element vectors corresponding to the time period to identify suggestions that may improve a provider's performance and/or bring an entity vector corresponding to the provider into closer proximity with the corresponding cluster vector” Heyrani-Nobari discloses the benefit of vector analysis to determine the behavior signal of an entity in view of the cluster of other entities. The comparison of the entity vector to the cluster vector helps determine one or more behavior signals corresponding to one or more entities, which may then be analyzed to identify anomalous behavior patterns and suggestions that may improve performance associated with the entity. Incorporating such a comparison method into Dodson would have predictably improved Dodson’s anomaly and outlier detection by enabling deviations to be measured relative to a representative average of the group, rather than evaluating each data instance individually.
A person of ordinary skill in the art would have recognized that using an average cluster vector as a reference provides a more stable baseline for determining outliers, thereby enhancing the reliability of anomaly and outlier detection in Dodson’s system and method. Accordingly, Dodson’s clustered data instances, represented as vectors, could be further analyzed using Heyrani-Nobari’s comparison method, which would have predictably improved detection of anomalous and outlier data instances. Under the broadest reasonable interpretation, Heyrani-Nobari’s entities correspond to Dodson’s devices represented by data instances because both references treat these elements as units whose behaviors are represented by vectors and grouped into clusters for behavior detection. Thus, a person of ordinary skill in the art would have recognized that these elements in both references perform the same functional role as behavior-producing units; therefore, Heyrani-Nobari’s disclosure is applicable to Dodson’s outlier and anomaly detection method and system for data instances.)
The combination of Dodson and Heyrani-Nobari does not teach the limitation “transmitting, by the at least one processor, data identifying the at least one outlier device to a security operations center to trigger a security alert, thereby improving a functioning of the automotive cybersecurity system by providing an adaptive and context-aware intrusion detection”. However, Lim teaches this limitation (paragraph 38 “In this description, any references made to the atomic level refer to intrusion detection systems for monitoring networks or systems for malicious activities or policy violations. This atomic layer acts as the first layer of defence and comprises a plurality of detectors 130”, paragraph 39 “Detectors 130 may include, but are not limited to network devices such as firewalls 132, switches 134, operating systems 136, computing devices 138, intrusion detection systems and intrusion prevention system (IDS/IDP) 140. Although it is not illustrated in FIG. 1, in some implementations, the function performed by detectors 130 may be carried out at a computing system level. Accordingly, functions performed by detectors 130 may be carried out by physical computing systems, such as at end user computing systems, host computing systems, servers, and etc. and may also be further be represented by virtual computing systems, such as virtual machines running within host computers”, paragraph 41 “The centralized data analysis centres represent a second layer of defence against cyber-threats and these centres addresses the cyber-threats at a molecular level by analysing and processing all the individual triggers generated and transmitted from detectors ... Upstream sources or systems such as these centralized data analysis centres 110 include systems such as Security Information and Event Management systems and Security Operation Centre (SIEM/SOC) 112”, and paragraph 42 “In operation, at least one of these upstream sources (i.e. 
centralized data analysis centres 110) will receive fragments of data such as security alerts or triggers from any one of detectors 130 or from a plurality of detectors 130. The centralized data analysis centres 110 then perform inspections to determine whether the received alerts or triggers represent actual security issues.” Lim discloses a system and method capable of receiving and quantitatively unifying unstructured and/or unlabeled information security threat data from any source or system, whereby the processed information is then provided back to all the upstream systems to actively tune and improve the security postures of these systems in near-real-time. Within the disclosure, Lim discloses a framework comprising various detectors implemented at each computing source (e.g., end user computing systems, servers, etc.) to detect an anomaly or new security threat; when an anomaly is detected, a trigger is generated and transmitted to a centralized data analysis centre, wherein the centralized data analysis centre includes systems such as a Security Operation Centre (SOC) to analyze and handle the anomaly trigger. Under BRI, the claimed “outlier device” is a device whose reported data deviates from expected behavior, i.e., represents an anomaly. Accordingly, Lim’s framework of anomaly-based detectors and transmission of triggers to an SOC corresponds to the claimed process of transmitting data identifying the at least one outlier device to a security operations center to trigger a security alert.)
Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the teaching of systems and methods that use unsupervised machine learning to detect anomalous and outlier activity within one or more computing environments by Dodson, and the teaching of comparing an entity represented as a vector with a cluster vector aggregated by averaging entity vectors by Heyrani-Nobari, with Lim’s teaching of anomaly-based detectors and transmission of triggers to an SOC. The motivation to do so is referred to in Lim’s disclosure (paragraph 11 “A second advantage of embodiments of systems and methods in accordance with the invention is that threat intelligence data of various sizes and formats may be easily consolidated in near-real-time and the consolidated data is then analysed to identify previously unknown data security threats.”, paragraph 13 “A fourth advantage of embodiments of system and methods in accordance with the invention is that the invention acts as the mother-of-all security operation centres (SOCs) thereby negating the need for the existence of multiple SOCs for handling security events generated by a select few surveillance systems”, and paragraph 36 “The system in accordance with embodiments of the invention achieves this goal by receiving all types and sizes of unstructured and unlabelled machine learnt cyber threat data that have been generated by various types of smart cyber-security surveillance systems and/or network detectors in a non-sequential and random manner. 
The received machine learnt and random data are then unified by the system by translating the received unstructured data into a uniformed meta-format such as the Transportable Incident Format (TIF) and accumulating the TIF in a high density data store.” Lim discloses the advantages of consolidating threat-intelligence information and anomaly indications from multiple distributed detectors into a centralized security operation centre (SOC), wherein such data can be unified, analyzed in near real-time, and used to identify unknown threats while reducing the inefficiencies associated with multiple independent SOC systems. Lim also discloses that the SOC architecture is specifically designed to receive unstructured and unlabeled data and translate it into the Transportable Incident Format (TIF), suitable for high-density data storage and threat correlation. In view of these benefits, a person of ordinary skill in the art would have been motivated to incorporate Lim’s centralized anomaly-handling framework into the clustering-based anomaly and outlier detection for computing environments taught by Dodson in view of Heyrani-Nobari. While Dodson identifies anomalies and outliers within each computing environment and generates alerts upon anomaly detection, Dodson and Heyrani-Nobari do not provide a mechanism for correlating those per-device anomalies across a broader environment to improve overall threat intelligence at large scale. Lim’s SOC architecture provides a predictable improvement by enabling the anomaly or outlier determinations from each computing environment in Dodson to be aggregated, normalized, and analyzed collectively at a centralized SOC, thereby enhancing the ability to detect emerging or previously unknown security threats while mitigating the need for multiple SOCs at each computing environment. 
Accordingly, Lim provides a clear benefit, namely real-time consolidation and analysis of distributed anomaly signals, that would have motivated a person of ordinary skill in the art to combine Lim’s SOC framework with Dodson’s teaching.)
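For illustration only, the transmission step, in which data identifying the outlier device is packaged as a security alert for a security operations center, can be sketched as below. The payload fields and alert type are illustrative assumptions; none of Dodson, Heyrani-Nobari, or Lim defines this particular format.

```python
import json

# Minimal sketch: packaging data identifying an outlier device as a
# security alert for a centralized SOC (cf. Lim paragraphs 41, 42).
# All field names here are illustrative assumptions only.

def build_soc_alert(device_id, peer_group_id, deviation):
    """Build the alert record identifying the outlier device."""
    return {
        "alert_type": "peer_group_outlier",
        "device_id": device_id,
        "peer_group": peer_group_id,
        "deviation_from_group_average": deviation,
    }

# In a deployed system the serialized alert would be sent to the SOC
# endpoint; here it is only serialized for transmission.
payload = json.dumps(build_soc_alert("device_d", 1, 0.498))
```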
Regarding claim 16, the claim depends on claim 12; thus, the rejection of claim 12 is incorporated.
Dodson teaches the limitation “identifying the at least one outlier as an anomaly” (paragraph 36 “Anomaly detection as disclosed in various embodiments herein involves the comparison and evaluation of the at least one principle value changing over time. According to some embodiments, once sets are created from the data instances, the anomaly detection module 135 is executed to detect anomalies for the data”, and paragraph 90 “That is, the methods of determining and extracting outliers and singularities in data instances are fully capable of being used in conjunction with methods of anomaly detection”. Dodson discloses an anomaly detection method based on the comparison of principle values changing over time, wherein the method of identifying outliers can be used in conjunction with anomaly detection. A person of ordinary skill in the art could therefore configure the system to detect outliers within a group of data instances, as described above, such that the anomaly detection method recognizes a data instance that deviates from the group as an anomaly.)
Regarding claim 19, the claim depends on claim 12; thus, the rejection of claim 12 is incorporated.
Dodson teaches the limitation “wherein the machine learning method is a method of an unsupervised learning type” (paragraph 28 “In some embodiments, using unsupervised machine learning, the exemplary system 105 can evaluate the data instances over time to detect anomalous behavior. In general, anomalous behavior can include any deviation in the data instances as viewed over time.” Dodson discloses using an unsupervised machine learning method to evaluate the data instances over time to detect anomalous behavior.)
Regarding claim 20, the applicant is further directed to the rejection of claim 12, because claim 20 comprises similar limitations and processing steps to claim 12, thus the claim is rejected under the same rationale.
Regarding claim 21,
Dodson teaches the li