Prosecution Insights
Last updated: April 19, 2026
Application No. 18/469,314

ANOMALY DETECTION BASED ON BEHAVIOR MODELING LEARNED FROM MONITORED COMPUTER ACTIVITIES

Status: Final Rejection (§103)
Filed: Sep 18, 2023
Examiner: XIE, EDGAR WANGSHU
Art Unit: 2433
Tech Center: 2400 — Computer Networks
Assignee: Mastercard Technologies Canada ULC
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 14 granted / 17 resolved; +24.4% vs TC avg)
Interview Lift: +37.5% across resolved cases with interview (strong)
Avg Prosecution: 2y 6m (typical timeline)
Currently Pending: 15
Total Applications: 32 (across all art units)

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 17 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

Claims and Request for Reconsideration filed on 11/20/2025 for patent application 18/469,314 have been acknowledged. Claims 1-13 and 15-20 are currently pending and have been considered below. Claims 1, 12, and 20 are independent claims. Claims 1-2, 6-8, 11-13, 17, and 19-20 have been amended. No new claims have been added.

In view of arguments presented on pages 11-12 of the remarks, the 35 U.S.C. 101 rejection of claims 1-20 has been withdrawn. In view of arguments presented on pages 10-11 of the remarks, the 35 U.S.C. 112(b) rejection of claims 10, 11, and 19 has been withdrawn.

Response to Arguments

Applicant's arguments with respect to claims 1-13 and 15-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's argument with respect to dependent claim 3 has been considered, and a new ground of rejection directed to a different paragraph of the same reference (McLean) has been applied in the current 103 rejection below. Thus, the 35 USC 103 rejection is maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 11-13, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Noon et al. (US Patent No. US 12,445,493 B2, hereinafter, Ben-Noon) in view of McLean (US Patent Application Publication No. US 2021/0273958 A1).

Regarding Claim 1, Ben-Noon discloses: A system, comprising: a computer system comprising a processor programmed to: receive, from a monitored system, user-specific behavior data that indicates one or more types of computer activities requested by a requester (Ben-Noon, col 3, line 1-12, “In addition, user interactions with the SWB may be monitored locally or by CyberSafe security hub. As a result, communications between the UE and MyCompany and actions of a MyCompany user interfacing with the UE are substantially completely visible to CyberSafe and to MyCompany and may be processed by the SWB, the hub and/or other trusted components associated with MyCompany.”); identify a behavior profile that was generated during a training phase to learn behaviors of the requester, the behavior profile including information that identifies one or more computer activities that were monitored during the training phase (Ben-Noon, col 17, line 21-43, “in a block 306, browser SWB.sub.b uploads sets … to the CyberSafe security hub 52 (FIG. 1). … Expected values may be determined for a plurality of instances of session CCSESS.sub.n,s for user U.sub.n. … the expected values for a given MyCompany user U.sub.n determine a user specific normal behavior pattern for a CCSESS.sub.n,s. … user specific normal behavior patterns and group normal behavior patterns determined by the CyberSafe hub and/or a browser SWB.sub.b are stored in a memory.”); provide, during a detection phase, the user-specific behavior data to a behavior classifier to detect whether the user-specific behavior data is anomalous (Ben-Noon, col 17, line 60 – col 18, line 4, “In a block 316, the given SWB.sub.b monitors current session CCSESS.sub.n′,s′ to accumulate, process locally and upload data for CCaaS-KPI(n′,s′), UE-KPI(n′,s′,e′), U-KPI(n′,s′), SMETA(n′,s′) for the current session … and/or to detect occurrence of anomalous events.”); generate, as an output of the behavior classifier, an anomaly classification based on the user-specific behavior data and the behavior profile, wherein the anomaly classification indicates a predicted anomalousness of the user-specific behavior data with respect to the behavior profile (Ben-Noon, col 18, line 5-30, “an anomalous event is an event that breaches normal behavior or an event that breaches MyCompany and/or CyberSafe policy. By way of example, a breach of a normal pattern may comprise a deviation of a given KPI monitored by the given SWB.sub.b from an expected value of the KPI by an amount greater than a standard deviation established for the KPI multiplied by a predetermined coefficient.”) and is used to determine whether a mitigative action is to be taken in response to the one or more types of activities requested by the requester (Ben-Noon, col 18, line 31-57, “in a decision block 320 the given SWB.sub.b determines if, based on CyberSafe hub 52 (FIG. 1) and/or MyCompany policy, the anomalous event warrants a response.”); and

Ben-Noon does not explicitly teach the following limitation that McLean teaches: transmit, to the monitored system, the anomaly classification (McLean, ¶[0031], “(vi) communicate those triggered response(s), if any, with the users associated with that respective host endpoint agent 101A-B.” ¶[0032], “the AI based cyber security appliance 120 may use the at least one or more AI/machine learning models to analyze the pattern of life data for each host endpoint agent 101A-B, where each host endpoint agent 101A-B may be communicatively connected to one or more application programming interfaces (APIs) hosted by the AI based cyber security appliance 120.”).

Ben-Noon and McLean are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “cybersecurity systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ben-Noon with McLean to “transmit, to the monitored system, the anomaly classification” because the disclosure teaches a multi-stage anomaly detector that analyzes an anomalous process chain in real time and rapidly determines whether the process chain is indicative of a cyber threat on an endpoint computing device in a multi-host environment (McLean, Abstract).
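The Ben-Noon passage quoted for the anomaly-classification limitation states a concrete test: a monitored KPI is anomalous when it deviates from its expected value by more than the KPI's established standard deviation multiplied by a predetermined coefficient. A minimal sketch of that test follows; the profile structure, metric names, and coefficient value are illustrative assumptions, not details from either reference.

```python
from dataclasses import dataclass

@dataclass
class BehaviorProfile:
    """Per-requester expected values learned during the training phase."""
    mean: dict[str, float]    # expected value per monitored metric
    stddev: dict[str, float]  # standard deviation established per metric

def classify_anomaly(profile: BehaviorProfile,
                     observed: dict[str, float],
                     coefficient: float = 3.0) -> dict[str, bool]:
    """Flag each metric whose deviation from its expected value exceeds
    the established standard deviation times a predetermined coefficient."""
    flags = {}
    for metric, value in observed.items():
        threshold = profile.stddev[metric] * coefficient
        flags[metric] = abs(value - profile.mean[metric]) > threshold
    return flags

# Hypothetical profile: a tripled login rate is flagged; normal CPU use is not.
profile = BehaviorProfile(mean={"logins_per_hour": 4.0, "cpu_pct": 20.0},
                          stddev={"logins_per_hour": 1.0, "cpu_pct": 10.0})
flags = classify_anomaly(profile, {"logins_per_hour": 12.0, "cpu_pct": 25.0})
# flags == {"logins_per_hour": True, "cpu_pct": False}
```

An anomaly classification of this shape is then what the claim feeds into the decision of whether a mitigative action is warranted.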
Regarding Claim 2, Ben-Noon in view of McLean teaches: The system of claim 1, wherein to receive the behavior data, the processor is further programmed to: receive the behavior data from an embedded agent of the computer system, the embedded agent operating at a device of the monitored system to monitor the one or more types of computer activities without modifying a process that provides the one or more computer activities (McLean, ¶[0028], “one or more host endpoint agents 101A-B configured to cooperate with an AI based cyber security appliance 120 over a network 110.” ¶[0037], “the host endpoint agent 101A-B may use both network and collections modules to discretely monitor and collect pattern of life data on each of the process chains executed on the endpoint computing device.”).

Regarding Claim 3, Ben-Noon in view of McLean teaches: The system of claim 2, further comprising: a device of the monitored system, wherein the device is programmed via developer coding logic that encodes one or more monitoring parameters that each identifies a permitted type of computer activity that the embedded agent is permitted to monitor; and cause, based on the developer coding logic, the embedded agent to monitor only the permitted type of computer activity specified by the one or more monitoring parameters (McLean, ¶[0055], “the host endpoint agents 101A-E may be configured … to: … (ii) monitor the “pattern of life” of the end-point computing-device, its processes, such as Outlook, Word, etc., its users, events on that device, etc. This at least includes: … (c) user behavior (applications commonly used, IT habits).”).

Regarding Claim 4, Ben-Noon in view of McLean teaches: The system of claim 2, further comprising: a device of the monitored system, wherein the device is programmed with the embedded agent to: identify the mitigative action based on the anomaly classification and one or more mitigation rules (McLean, ¶[0031], “(iv) determine, if any, autonomous response(s) based on the comparison between the analyzed/scored data and the trained data, (v) trigger the determined autonomous response(s), if any, directly on the respective host endpoint agent 101A-B.”); and execute the mitigative action responsive to a request to perform the one or more computer activities (McLean, ¶[0046], “once the anomaly score is generated, the trigger module may be configured to initiate one or more preventive steps (or actions, responses, etc.) on that endpoint computing device.” ¶[0047], “in some embodiments, rather than a trigger module and/or a human user taking an action, the autonomous response module may be configured to cause one or more preventive steps/actions to be initiated to thereby contain a detected cyber threat, when/if a generated anomaly score is indicative of a likelihood of a cyber-threat that is equal to or above an actionable threshold.”).

Regarding Claim 11, Ben-Noon in view of McLean teaches: The system of claim 1, wherein the processor is further programmed to: re-learn the behavior profile based on the behavior data and/or new behavior data (McLean, ¶[0067], “The models may be a self-learning model trained on a normal behavior of each of these entities. The self-learning model of normal behavior is then continuously updated with actual behavior of that entity. The self-learning model of normal behavior is updated when new input data is received that is deemed within the limits of normal behavior.”).
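The McLean paragraph cited against the re-learning limitation describes a self-learning model of normal behavior that is updated only when new input data is deemed within the limits of normal behavior. A sketch of such a gated re-learning step, assuming an exponentially weighted baseline; the smoothing factor alpha and the k-sigma gate are illustrative choices, not McLean's:

```python
def relearn_profile(mean: float, stddev: float, new_value: float,
                    alpha: float = 0.05, k: float = 3.0) -> tuple[float, float]:
    """Update the learned (mean, stddev) baseline with a new observation,
    but only when the observation is within the current limits of normal
    behavior; anomalous input leaves the baseline unchanged."""
    if abs(new_value - mean) > k * stddev:
        return mean, stddev  # deemed anomalous: do not shift the baseline
    new_mean = (1 - alpha) * mean + alpha * new_value
    new_var = (1 - alpha) * stddev ** 2 + alpha * (new_value - new_mean) ** 2
    return new_mean, new_var ** 0.5
```

For example, `relearn_profile(4.0, 1.0, 5.0)` nudges the mean toward 5, while `relearn_profile(4.0, 1.0, 12.0)` returns the baseline untouched.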
Regarding Claim 12, Ben-Noon discloses: A method, comprising: receiving, by a processor of a computer system, from a monitored system, user-specific behavior data that indicates one or more types of computer activities requested by a requester (Ben-Noon, col 3, line 1-12, “In addition, user interactions with the SWB may be monitored locally or by CyberSafe security hub. As a result, communications between the UE and MyCompany and actions of a MyCompany user interfacing with the UE are substantially completely visible to CyberSafe and to MyCompany and may be processed by the SWB, the hub and/or other trusted components associated with MyCompany.”); identifying, by the processor, a behavior profile that was generated during a training phase to learn behaviors of the requester, the behavior profile including information that identifies one or more computer activities that were monitored during the training phase (Ben-Noon, col 17, line 21-43, “in a block 306, browser SWB.sub.b uploads sets … to the CyberSafe security hub 52 (FIG. 1). … Expected values may be determined for a plurality of instances of session CCSESS.sub.n,s for user U.sub.n. … the expected values for a given MyCompany user U.sub.n determine a user specific normal behavior pattern for a CCSESS.sub.n,s. … user specific normal behavior patterns and group normal behavior patterns determined by the CyberSafe hub and/or a browser SWB.sub.b are stored in a memory.”); providing, by the processor, during a detection phase, the user-specific behavior data to a behavior classifier to detect whether the user-specific behavior data is anomalous (Ben-Noon, col 17, line 60 – col 18, line 4, “In a block 316, the given SWB.sub.b monitors current session CCSESS.sub.n′,s′ to accumulate, process locally and upload data for CCaaS-KPI(n′,s′), UE-KPI(n′,s′,e′), U-KPI(n′,s′), SMETA(n′,s′) for the current session … and/or to detect occurrence of anomalous events.”); generating, by the processor, as an output of the behavior classifier, an anomaly classification based on the user-specific behavior data and the behavior profile, wherein the anomaly classification indicates a predicted anomalousness of the user-specific behavior data with respect to the behavior profile (Ben-Noon, col 18, line 5-30, “an anomalous event is an event that breaches normal behavior or an event that breaches MyCompany and/or CyberSafe policy. By way of example, a breach of a normal pattern may comprise a deviation of a given KPI monitored by the given SWB.sub.b from an expected value of the KPI by an amount greater than a standard deviation established for the KPI multiplied by a predetermined coefficient.”) and is used to determine whether a mitigative action is to be taken in response to the one or more types of activities requested by the requester (Ben-Noon, col 18, line 31-57, “in a decision block 320 the given SWB.sub.b determines if, based on CyberSafe hub 52 (FIG. 1) and/or MyCompany policy, the anomalous event warrants a response.”); and

Ben-Noon does not explicitly teach the following limitation that McLean teaches: transmitting, by the processor, to the monitored system, the anomaly classification (McLean, ¶[0031], “(vi) communicate those triggered response(s), if any, with the users associated with that respective host endpoint agent 101A-B.” ¶[0032], “the AI based cyber security appliance 120 may use the at least one or more AI/machine learning models to analyze the pattern of life data for each host endpoint agent 101A-B, where each host endpoint agent 101A-B may be communicatively connected to one or more application programming interfaces (APIs) hosted by the AI based cyber security appliance 120.”).

Ben-Noon and McLean are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “cybersecurity systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ben-Noon with McLean to “transmitting, by the processor, to the monitored system, the anomaly classification” because the disclosure teaches a multi-stage anomaly detector that analyzes an anomalous process chain in real time and rapidly determines whether the process chain is indicative of a cyber threat on an endpoint computing device in a multi-host environment (McLean, Abstract).
Regarding Claim 13, Ben-Noon in view of McLean teaches: The method of claim 12, wherein receiving the behavior data comprises: receiving the behavior data from an embedded agent of the computer system, the embedded agent operating at a device of the monitored system to monitor the one or more types of computer activities without modifying a process that provides the one or more computer activities (McLean, ¶[0028], “one or more host endpoint agents 101A-B configured to cooperate with an AI based cyber security appliance 120 over a network 110.” ¶[0037], “the host endpoint agent 101A-B may use both network and collections modules to discretely monitor and collect pattern of life data on each of the process chains executed on the endpoint computing device.”).

Regarding Claim 15, Ben-Noon in view of McLean teaches: The method of claim 12, further comprising: identifying, by a device of the monitored system, the mitigative action based on the anomaly classification and one or more mitigation rules (McLean, ¶[0031], “(iv) determine, if any, autonomous response(s) based on the comparison between the analyzed/scored data and the trained data, (v) trigger the determined autonomous response(s), if any, directly on the respective host endpoint agent 101A-B.”); and executing, by the device, the mitigative action responsive to a request to perform the one or more computer activities (McLean, ¶[0046], “once the anomaly score is generated, the trigger module may be configured to initiate one or more preventive steps (or actions, responses, etc.) on that endpoint computing device.” ¶[0047], “in some embodiments, rather than a trigger module and/or a human user taking an action, the autonomous response module may be configured to cause one or more preventive steps/actions to be initiated to thereby contain a detected cyber threat, when/if a generated anomaly score is indicative of a likelihood of a cyber-threat that is equal to or above an actionable threshold.”).
Regarding Claim 19, Ben-Noon in view of McLean teaches: The method of claim 12, the method further comprising: re-learning the behavior profile based on the behavior data and/or new behavior data (McLean, ¶[0067], “The models may be a self-learning model trained on a normal behavior of each of these entities. The self-learning model of normal behavior is then continuously updated with actual behavior of that entity. The self-learning model of normal behavior is updated when new input data is received that is deemed within the limits of normal behavior.”).

Regarding Claim 20, Ben-Noon discloses: A computer readable medium storing instructions of an embedded agent that, when executed by one or more processors, program the one or more processors to: access a request by a requester to execute a computer activity (Ben-Noon, col 3, line 1-12, “In addition, user interactions with the SWB may be monitored locally or by CyberSafe security hub. As a result, communications between the UE and MyCompany and actions of a MyCompany user interfacing with the UE are substantially completely visible to CyberSafe and to MyCompany and may be processed by the SWB, the hub and/or other trusted components associated with MyCompany.”); obtain context data associated with the computer activity (Ben-Noon, col 16, line 55 – col 17, line 20, “A CCaaS-KPI(n,s) may by way of example comprise KPIs that provide values for at least one, or any combination of more than one of: CPU usage; memory usage; bandwidth usage; response time to a user's request; throughput; latency; request error rate; resources accessed; permission changes; and/or network requests.”); generate user-specific behavior data comprising an identification of the computer activity and the context data (Ben-Noon, col 17, line 60 – col 18, line 4, “in a block 314 a particular user U.sub.n′ using a given browser SWB.sub.b in a given UE.sub.e requests and is permitted access to and use of a particular My-CCaaS.sub.s′ and engages in a “current” session CCSESS.sub.n′,s′ with My-CCaaS.sub.s′. In a block 316, the given SWB.sub.b monitors current session CCSESS.sub.n′,s′ to accumulate, process locally and upload data for CCaaS-KPI(n′,s′), UE-KPI(n′,s′,e′), U-KPI(n′,s′), SMETA(n′,s′) for the current session to add to data already accumulated.”); transmit the user-specific behavior data to a computer system for anomaly classification of the user-specific behavior data (Ben-Noon, col 17, line 44 – col 18, line 4, “in a block 310, SWB.sub.b and/or the CyberSafe hub processes data provided by CCaaS-KPI(n,s), UE-KPI(n,s), U-KPI(n,s), and/or SMETA(n,s) to determine cyber vulnerabilities associated with MyCompany users using a My-CCaaS.sub.s and/or with a specific MyCompany user.”); the anomaly classification representing a prediction of an extent to which the computer activity deviates from a learned normal behavior based on previously learned computer activities of the requester (Ben-Noon, col 18, line 5-30, “an anomalous event is an event that breaches normal behavior or an event that breaches MyCompany and/or CyberSafe policy. By way of example, a breach of a normal pattern may comprise a deviation of a given KPI monitored by the given SWB.sub.b from an expected value of the KPI by an amount greater than a standard deviation established for the KPI multiplied by a predetermined coefficient.”); access one or more mitigation rules corresponding to the anomaly classification; and identify a mitigative action to take or no mitigative action to take based on the one or more mitigation rules (Ben-Noon, col 18, line 31-57, “in a decision block 320 the given SWB.sub.b determines if, based on CyberSafe hub 52 (FIG. 1) and/or MyCompany policy, the anomalous event warrants a response.”).

Ben-Noon does not explicitly teach the following limitation that McLean teaches: receive an anomaly classification from the computer system (McLean, ¶[0031], “(vi) communicate those triggered response(s), if any, with the users associated with that respective host endpoint agent 101A-B.” ¶[0032], “the AI based cyber security appliance 120 may use the at least one or more AI/machine learning models to analyze the pattern of life data for each host endpoint agent 101A-B, where each host endpoint agent 101A-B may be communicatively connected to one or more application programming interfaces (APIs) hosted by the AI based cyber security appliance 120.”).

Ben-Noon and McLean are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “cybersecurity systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ben-Noon with McLean to “receive an anomaly classification from the computer system” because the disclosure teaches a multi-stage anomaly detector that analyzes an anomalous process chain in real time and rapidly determines whether the process chain is indicative of a cyber threat on an endpoint computing device in a multi-host environment (McLean, Abstract).

Claims 5-8 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Noon et al. (US Patent No. US 12,445,493 B2, hereinafter, Ben-Noon) in view of McLean (US Patent Application Publication No. US 2021/0273958 A1) and further in view of Liebner et al. (US Patent Application Publication No. US 2014/0327573 A1, hereinafter, Liebner).
Regarding Claim 5, Ben-Noon in view of McLean discloses: The system of claim 1, wherein to generate the anomaly classification, the processor is further programmed to: Ben-Noon in view of McLean does not explicitly teach the following limitation that Liebner teaches: determine a vector value based on a monitored value of a computer activity and an expected value of the computer activity from the behavior profile (Liebner, ¶[0026], “Any abnormality or deviation from the baseline measurement or expected clock cycle value serves as a flag to the system indicating a possible threat. … timing comparator 206 is configured to compute a delta associated with the difference between a determined quantity of cycles of first data 220 and the predetermined expected clock cycle value.”); and transform the vector value to a sub-classification score (Liebner, ¶[0026], “The absolute value of the delta is then assigned to the threat detection value”), wherein the anomaly classification is based on the sub-classification score (Liebner, ¶[0026], “In this way, a network operator is provided real-time data regarding the timing characteristics of a system and is immediately alerted to discrepancies or timing anomalies, which may serve as an indicator of a compromised GPS receiver.”).

Ben-Noon in view of McLean and further in view of Liebner are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “cybersecurity systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ben-Noon in view of McLean with Liebner to “determine a vector value based on a monitored value of a computer activity and an expected value of the computer activity from the behavior profile; and transform the vector value to a sub-classification score, wherein the anomaly classification is based on the sub-classification score” because disclosed are system, method, and computer program product embodiments for adapting to malware activity on a compromised computer system (Liebner, Abstract).

Regarding Claim 6, Ben-Noon in view of McLean and further in view of Liebner teaches: The system of claim 5, wherein the behavior data comprises context data that specifies a context in which the computer activity was requested (McLean, ¶[0037], “the collections module may also be used to gather/collect any desired pattern of life data points observed from that particular endpoint computing device. These observed pattern of life data points may include, but are not limited to, metadata, triggered events, newly detected process chains, and/or predetermined alerts pertaining to, for example, users, users' activities, various software processes, relationships between such software processes, device operations, altered operating system configurations, etc., as well as any other type of observed pattern of life data point selected to be sent with the communications module to the AI based cyber security appliance 120.”), and wherein the processor is further programmed to: determine a contextual vector value based on a monitored value of the context data for the computer activity and an expected value of the context data from the behavior profile; and transform the contextual vector value to a contextual sub-classification score, wherein the anomaly classification is further based on the contextual sub-classification score (Liebner, ¶[0026], “Any abnormality or deviation from the baseline measurement or expected clock cycle value serves as a flag to the system indicating a possible threat. … timing comparator 206 is configured to compute a delta associated with the difference between a determined quantity of cycles of first data 220 and the predetermined expected clock cycle value. The absolute value of the delta is then assigned to the threat detection value. … In this way, a network operator is provided real-time data regarding the timing characteristics of a system and is immediately alerted to discrepancies or timing anomalies, which may serve as an indicator of a compromised GPS receiver.”).

Regarding Claim 7, Ben-Noon in view of McLean and further in view of Liebner teaches: The system of claim 6, wherein the context data comprises a time and/or date of the computer activity (Ben-Noon, col 16, line 55 – col 17, line 20, “comprise data components that provide values for at least one, or any combination of more than one of: ... Session ToD (Time of Day); session duration.” Col 34, line 14-21, “Talon may provide information such as how long a tab was open, length of user 907 activity in a service, what actions has the user 907 taken within the service.”).

Regarding Claim 8, Ben-Noon in view of McLean and further in view of Liebner teaches: The system of claim 6, wherein the context data comprises a rate of the computer activity over time (Ben-Noon, col 16, line 55 – col 17, line 20, “comprise data components that provide values for at least one, or any combination of more than one of: ... Session ToD (Time of Day); session duration.” Col 34, line 14-21, “Talon may provide information such as how long a tab was open, length of user 907 activity in a service, what actions has the user 907 taken within the service.”).

Regarding Claim 16, Ben-Noon in view of McLean and further in view of Liebner teaches: The method of claim 12, wherein generating the anomaly classification comprises: determining a vector value based on a monitored value of a computer activity and an expected value of the computer activity from the behavior profile (Liebner, ¶[0026], “Any abnormality or deviation from the baseline measurement or expected clock cycle value serves as a flag to the system indicating a possible threat. … timing comparator 206 is configured to compute a delta associated with the difference between a determined quantity of cycles of first data 220 and the predetermined expected clock cycle value.”); and transforming the vector value to a sub-classification score (Liebner, ¶[0026], “The absolute value of the delta is then assigned to the threat detection value”), wherein the anomaly classification is based on the sub-classification score (Liebner, ¶[0026], “In this way, a network operator is provided real-time data regarding the timing characteristics of a system and is immediately alerted to discrepancies or timing anomalies, which may serve as an indicator of a compromised GPS receiver.”).
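Claims 5 and 16 recite determining a vector value from a monitored value and the expected value in the behavior profile, then transforming it to a sub-classification score. Neither the claim language nor the cited Liebner passage fixes a particular transform; the sketch below uses element-wise deviations and an L2 norm squashed through tanh purely as one plausible illustration.

```python
import math

def vector_value(monitored: list[float], expected: list[float]) -> list[float]:
    """Element-wise deviation of monitored activity values from the
    expected values recorded in the behavior profile."""
    return [m - e for m, e in zip(monitored, expected)]

def sub_classification_score(vec: list[float], scale: float = 1.0) -> float:
    """Transform a deviation vector to a bounded score in [0, 1):
    the L2 norm of the vector squashed through tanh."""
    norm = math.sqrt(sum(v * v for v in vec))
    return math.tanh(norm / scale)
```

A zero deviation vector maps to 0.0, and large deviations saturate toward 1.0, which keeps downstream weighting (as in claims 9 and 18) on a common scale.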
Regarding Claim 17, Ben-Noon in view of McLean and further in view of Liebner teaches: The method of claim 16, wherein the behavior data comprises context data that specifies a context in which the computer activity was requested (McLean, ¶[0037], “the collections module may also be used to gather/collect any desired pattern of life data points observed from that particular endpoint computing device. These observed pattern of life data points may include, but are not limited to, metadata, triggered events, newly detected process chains, and/or predetermined alerts pertaining to, for example, users, users' activities, various software processes, relationships between such software processes, device operations, altered operating system configurations, etc., as well as any other type of observed pattern of life data point selected to be sent with the communications module to the AI based cyber security appliance 120.”), the method further comprising: determining a contextual vector value based on a monitored value of the context data for the computer activity and an expected value of the context data from the behavior profile; and transforming the contextual vector value to a contextual sub-classification score, wherein the anomaly classification is further based on the contextual sub-classification score (Liebner, ¶[0026], “Any abnormality or deviation from the baseline measurement or expected clock cycle value serves as a flag to the system indicating a possible threat. … timing comparator 206 is configured to compute a delta associated with the difference between a determined quantity of cycles of first data 220 and the predetermined expected clock cycle value. The absolute value of the delta is then assigned to the threat detection value. … In this way, a network operator is provided real-time data regarding the timing characteristics of a system and is immediately alerted to discrepancies or timing anomalies, which may serve as an indicator of a compromised GPS receiver.”).

Claims 9-10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Noon et al. (US Patent No. US 12,445,493 B2, hereinafter, Ben-Noon) in view of McLean (US Patent Application Publication No. US 2021/0273958 A1) and further in view of Liebner et al. (US Patent Application Publication No. US 2014/0327573 A1, hereinafter, Liebner) and Pendergast et al. (US Patent Application Publication No. US 2020/0252421 A1, hereinafter Pendergast).

Regarding Claim 9, Ben-Noon in view of McLean and further in view of Liebner teaches: The system of claim 6, and Ben-Noon in view of McLean and further in view of Liebner does not explicitly teach the following limitation that Pendergast teaches: wherein to generate the anomaly classification, the processor is further programmed to: apply a first weight to the sub-classification score and a second weight to the contextual sub-classification score, wherein the anomaly classification is based on the weighted sub-classification score and the weighted contextual sub-classification score (Pendergast, ¶[0042], “As stated above, the deprecation factor can be per source or per IOC type.” ¶[0045], “The calculated weighted criticality score 330, according to Formula 2 for a given IOC per source. As stated above, IOCs that are a part of known good data feeds 322 may receive a score of zero. For each known bad data feed 320, the confidence may be deprecated as disclosed earlier based upon the deprecation parameters of the specific source for each IOC type as described with regard to computation of the deprecated confidence value. The threat rating and the deprecated confidence value may be used to determine the IOC's criticality score per source.
… Thus, the criticality score may be computed by mapping the threat rating and deprecated confidence through a nonlinear mathematical function. A weighted criticality score for each IOC may be computed as disclosed in Formula 2”). Ben-Noon in view of McLean and further in view of Liebner and Pendergast are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “cybersecurity systems.” It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ben-Noon in view of McLean and further in view of Liebner with Pendergast “wherein to generate the anomaly classification, the processor is further programmed to: apply a first weight to the sub-classification score and a second weight to the contextual sub-classification score, wherein the anomaly classification is based on the weighted sub-classification score and the weighted contextual sub-classification score” because data feeds may be received, the data feeds may provide information about one or more indicators of compromise (IOC), and for each IOC, a weighted criticality score may be determined (Pendergast, Abstract).

Regarding Claim 10, Ben-Noon in view of McLean and further in view of Liebner and Pendergast teaches: The system of claim 9, wherein the first weight and/or the second weight are each (Pendergast, ¶[0045], “The calculated weighted criticality score 330, according to Formula 2 for a given IOC per source. As stated above, IOCs that are a part of known good data feeds 322 may receive a score of zero. For each known bad data feed 320, the confidence may be deprecated as disclosed earlier based upon the deprecation parameters of the specific source for each IOC type as described with regard to computation of the deprecated confidence value. 
The threat rating and the deprecated confidence value may be used to determine the IOC's criticality score per source. … Thus, the criticality score may be computed by mapping the threat rating and deprecated confidence through a nonlinear mathematical function. A weighted criticality score for each IOC may be computed as disclosed in Formula 2”) initially predefined and then each updated based on retraining from new behavior data (McLean, ¶[0067], “The models may be a self-learning model trained on a normal behavior of each of these entities. The self-learning model of normal behavior is then continuously updated with actual behavior of that entity. The self-learning model of normal behavior is updated when new input data is received that is deemed within the limits of normal behavior.”).

Regarding Claim 18, Ben-Noon in view of McLean and further in view of Liebner and Pendergast teaches: The method of claim 17, wherein generating the anomaly classification comprises: applying a first weight to the sub-classification score and a second weight to the contextual sub-classification score, wherein the anomaly classification is based on the weighted sub-classification score and the weighted contextual sub-classification score (Pendergast, ¶[0042], “As stated above, the deprecation factor can be per source or per IOC type.” ¶[0045], “The calculated weighted criticality score 330, according to Formula 2 for a given IOC per source. As stated above, IOCs that are a part of known good data feeds 322 may receive a score of zero. For each known bad data feed 320, the confidence may be deprecated as disclosed earlier based upon the deprecation parameters of the specific source for each IOC type as described with regard to computation of the deprecated confidence value. The threat rating and the deprecated confidence value may be used to determine the IOC's criticality score per source. 
… Thus, the criticality score may be computed by mapping the threat rating and deprecated confidence through a nonlinear mathematical function. A weighted criticality score for each IOC may be computed as disclosed in Formula 2”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDGAR W XIE whose telephone number is (703)756-4777. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JEFFREY PWU, can be reached at (571)272-6798. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDGAR W XIE/
Examiner, Art Unit 2433

/WASIKA NIPA/
Primary Examiner, Art Unit 2433
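The limitations rejected above describe a generic scoring pipeline: a deviation ("vector value") between a monitored value and the behavior-profile expectation is transformed into a sub-classification score, a contextual sub-classification score is produced the same way, and the two are combined under weights to yield the anomaly classification. As a rough illustration only, the pipeline might look like the sketch below; the function names, the squashing transform, the weight values, and the threshold are all hypothetical choices for this sketch, not taken from the claims or from any cited reference.

```python
# Hypothetical sketch of the claimed scoring pipeline; illustration only.

def vector_value(monitored: float, expected: float) -> float:
    """Deviation between a monitored value and the behavior-profile
    expectation (cf. Liebner's absolute-value delta)."""
    return abs(monitored - expected)

def sub_classification_score(delta: float, scale: float = 1.0) -> float:
    """Squash a deviation into a bounded [0, 1) score.
    The squashing function is an arbitrary choice for illustration."""
    return delta / (delta + scale)

def anomaly_classification(behavior_score: float,
                           context_score: float,
                           w1: float = 0.7,
                           w2: float = 0.3,
                           threshold: float = 0.5) -> bool:
    """Weighted combination of the two sub-classification scores
    (cf. Pendergast's weighted criticality score); the weights could be
    predefined and later updated by retraining (cf. McLean)."""
    combined = w1 * behavior_score + w2 * context_score
    return combined >= threshold

delta = vector_value(monitored=120.0, expected=100.0)    # 20.0
score = sub_classification_score(delta, scale=20.0)      # 0.5
print(anomaly_classification(score, context_score=0.9))  # True
```

In this sketch the weighted combination (0.7 × 0.5 + 0.3 × 0.9 = 0.62) crosses the threshold, so the activity is classified anomalous; with both sub-scores low, it would not be.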

Prosecution Timeline

Sep 18, 2023
Application Filed
May 30, 2025
Non-Final Rejection — §103
Nov 14, 2025
Examiner Interview Summary
Nov 14, 2025
Applicant Interview (Telephonic)
Nov 20, 2025
Response Filed
Jan 29, 2026
Examiner Interview (Telephonic)
Mar 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602475
AGGREGATING INPUT/OUTPUT OPERATION FEATURES EXTRACTED FROM STORAGE DEVICES TO FORM A MACHINE LEARNING VECTOR TO CHECK FOR MALWARE
2y 5m to grant Granted Apr 14, 2026
Patent 12579267
Methods and Systems for Analyzing Environment-Sensitive Malware with Coverage-Guided Fuzzing
2y 5m to grant Granted Mar 17, 2026
Patent 12579281
Dynamic Prioritization of Vulnerability Risk Assessment Findings
2y 5m to grant Granted Mar 17, 2026
Patent 12566844
SYSTEM AND METHOD FOR COLLABORATIVE SMART EVIDENCE GATHERING AND INVESTIGATION FOR INCIDENT RESPONSE, ATTACK SURFACE MANAGEMENT, AND FORENSICS IN A COMPUTING ENVIRONMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12513001
BLOCKCHAIN VERIFICATION OF DIGITAL CONTENT ATTRIBUTIONS
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+37.5%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
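The note above says the grant probability is derived from the career allow rate, and that is consistent with the counts shown for this examiner (14 granted of 17 resolved). A one-line check, assuming the tool simply rounds the ratio to a whole percentage (the rounding convention is an assumption, not documented):

```python
granted, resolved = 14, 17                        # career counts shown above
grant_probability = round(granted / resolved * 100)
print(grant_probability)                          # 82 — matches the displayed 82%
```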
