Prosecution Insights
Last updated: April 19, 2026
Application No. 17/837,196

ARTIFICIAL INTELLIGENCE DETECTION OF RANSOMWARE ACTIVITY PATTERNS ON COMPUTER SYSTEMS

Status: Final Rejection (§103)
Filed: Jun 10, 2022
Examiner: VU, TAYLOR P
Art Unit: 2437
Tech Center: 2400 — Computer Networks
Assignee: BANK OF AMERICA CORPORATION
OA Round: 4 (Final)

Grant Probability: 81% (Favorable)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 81% (21 granted / 26 resolved; above average, +22.8% vs TC avg)
Interview Lift: +12.8% (moderate), measured across resolved cases with interview
Typical Timeline: 3y 3m avg prosecution
Career History: 56 total applications across all art units; 30 currently pending

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 72.0% (+32.0% vs TC avg)
§102: 2.2% (-37.8% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)

Tech Center averages are estimates; based on career data from 26 resolved cases.
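The headline figures above follow directly from the raw counts. A quick arithmetic check (the implied Tech Center average is derived from the reported delta, not stated in the source):

```python
# Reproducing the examiner statistics from the raw counts in the report.
granted = 21
resolved = 26

allow_rate = granted / resolved      # career allow rate, shown as 81%
tc_avg = allow_rate - 0.228          # implied TC average from "+22.8% vs TC avg"

print(f"Career allow rate: {allow_rate:.1%}")   # about 80.8%, rounds to 81%
print(f"Implied TC average: {tc_avg:.1%}")
```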

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The present Office action is responsive to communications filed on 06/06/2025. Claims 1, 11, and 16 have been amended. Claim 3 has been cancelled. Claims 1, 2, 6-12, 15-17, and 20 are currently pending.

Applicant's arguments filed on 06/06/2025 with respect to the rejection of claims 1, 3, 7, 9, and 10 under 35 U.S.C. 103 over Stepanek et al. (US PGPub No. 20190251259-A1) in view of Saxena et al. (US PGPub No. 20160117497-A1), Brown et al. (US PGPub No. 20190207969-A1), Annen et al. (US PGPub No. 20220247766-A1), and Burgess et al. (US PGPub No. 20130133026-A1), specifically as to the amended limitation wherein behaviors include events or configurations that occur prior to an encryption of files stored in the computing system by the ransomware software, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Challita et al. (US PGPub No. 20180248896-A1), Agrawal et al. (US PGPub No. 20220294715-A1), Yadav (US PGPub No. 20160359695-A1), and Ross et al. (US PGPub No. 20170230477-A1). The Office action has been updated to reflect the claims as currently presented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1, 7, 8, 9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Challita et al. (US PGPub No. 20180248896-A1) in view of Agrawal et al. (US PGPub No. 20220294715-A1).

With respect to claim 1, Challita teaches a system for detection and prevention of threats posed by malware software on computing system, the system comprising: (¶0014: An anti-ransomware system for a computer system has a deception component comprising a decoy module configured to place decoy segments within one or more file systems, a detection component comprising a behavioral analysis module configured to analyze the behavior of a suspected ransomware, and a response component.)

a first computing platform including a first memory and one or more first processing devices in communication with the first memory, wherein the first memory stores instructions that are executable by the one or more first processing devices and configured to: (¶0033-0036: In the below, "computer" is defined as any electronic, computational device including personal computers like laptops, one or more servers interconnected within the cloud, and smartphones and other personal devices, as well as IoT (Internet of Things) devices, individually or multiple, networked units.
With reference to Figure 2, in an embodiment, a centralized database for use by the system resides in the cloud 1, while the deception component 2, the detection component 4, and the response component 6 reside on securely connected devices.)

determine behaviors of a computing system that occur in a presence of malware software, wherein the malware software is a ransomware software, (¶0034-0036: With reference to Figure 1, the software agent comprises three major components, a deception component 2, a detection component 4, and a response component 6. The deception component contains a decoy component 10, which comprises files and/or folders that are placed strategically throughout the computer storage, and which may be periodically updated to update a time stamp or show recent activity. As soon as certain actions are taken on the decoys, such as encryption, detection, writing or editing, the detection component is notified. The goal of decoys is to detect ransomware encryption operations, and slow down the ransomware from achieving its objectives. The purposes of the decoys, without limitation, may comprise i) alerting about ransomware-like behavior, ii) alerting about "snooping" on the computer, iii) potentially storing anti-malware components disguised as decoys, iv) slowing down the encryption process, yielding additional response time, v) deterring attackers, vi) allowing additional opportunities to recover the key, or learn how to recover files.)
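The decoy mechanism Challita describes can be sketched in a few lines. This is an illustrative Python sketch, not the reference's implementation: the file names, the SHA-256 fingerprinting, and the polling-based check are assumptions (a real agent would hook file-system events in the kernel rather than rescan).

```python
import hashlib
import os
import tempfile

def plant_decoys(root: str, count: int = 3) -> dict:
    """Create decoy files under `root` and return {path: sha256 fingerprint}."""
    fingerprints = {}
    for i in range(count):
        path = os.path.join(root, f"passwords_{i}.txt")  # enticing name (hypothetical)
        with open(path, "wb") as f:
            f.write(b"decoy content %d" % i)
        with open(path, "rb") as f:
            fingerprints[path] = hashlib.sha256(f.read()).hexdigest()
    return fingerprints

def check_decoys(fingerprints: dict) -> list:
    """Return decoy paths whose contents changed (ransomware-like activity)."""
    tampered = []
    for path, digest in fingerprints.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                tampered.append(path)
    return tampered

root = tempfile.mkdtemp()
decoys = plant_decoys(root)
first_decoy = next(iter(decoys))
with open(first_decoy, "wb") as f:   # simulate a ransomware encrypting a decoy
    f.write(b"\x00ciphertext\x00")
alerts = check_decoys(decoys)
```

Any write to a decoy immediately signals encryption-like activity, which is why the reference treats decoys as both an alarm and a delaying tactic.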
wherein the behaviors include events or configurations that occur prior to an encryption of files stored in the computing system by the ransomware software in preparation for self-encryption of files including one or more of (i) disk input/output calls, (ii) memory utilization, (iii) processing unit utilization, (iv) types of calls made to operating system, (v) ports and protocols used for calls, and (vi) attempts to escalate access privileges, (¶0038: The kernel software 20 provides the ability to i) monitor and analyze all User-Mode applications and processes running, ii) monitor all operations on the file system on the machine, including read/write operations on the files, iii) having permissions and rights to respond to suspicious actions of any running process or application, and iv) perform all of the above at a fast pace (much faster than user-mode) to detect and contain suspicious attacks, before they encrypt files.)

train, one or more Artificial Intelligence (AI) algorithms, to (i) monitor for the behaviors within a specified computing system, and (ii) in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds an acceptable baseline level for the at least one of the one or more behaviors, perform one or more actions to mitigate or eliminate a threat posed by the malware software; and (¶0039-0040: The detection component also has a machine learning component 22 and a behavioral analysis component 24. The machine-learning component determines a baseline of machine behavior to be established for a particular machine. As a pattern of massive change of individual files is potentially indicative of ransomware, as these actions are similar to actions habitually taken by ransomware once it starts operating, if files are changed massively (beyond a predetermined threshold) within a short time, the machine-learning component 22 is consulted. The component 22 determines a baseline for different files in different locations, as to normal usage, to provide a baseline for benign, normal user activity. The system must learn to identify them to avoid taking action when these benign activities are undertaken. Through machine learning, the system determines normal use thresholds for file changes and stores these thresholds for future reference. The machine learning observes the normal processes of the machine, including behavior that results in large changes at one time to particular files, such as compressing or encrypting files within normal use of the computer, that weren't previously encrypted or representing user content. In an embodiment, once a file change activity exceeds a threshold, the system stops monitoring and takes action by notifying the response component 6.)
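The baseline idea quoted above reduces to: learn a normal file-change rate during a benign observation window, then flag any interval whose change count exceeds a learned threshold. A hedged sketch follows; the mean-plus-three-standard-deviations rule and the synthetic counts are illustrative choices, not taken from the patent.

```python
import statistics

def learn_threshold(changes_per_interval: list) -> float:
    """Learn a normal-use threshold from counts observed during a benign window."""
    mean = statistics.mean(changes_per_interval)
    stdev = statistics.pstdev(changes_per_interval)
    return mean + 3 * stdev   # illustrative rule; the patent's thresholds are ML-derived

def is_anomalous(count: int, threshold: float) -> bool:
    return count > threshold

# Hourly file-change counts under normal use (synthetic observation window).
baseline_window = [3, 5, 4, 6, 2, 5, 4, 3, 7, 5, 4, 6]
threshold = learn_threshold(baseline_window)

normal_hour = is_anomalous(6, threshold)        # within normal use
ransomware_hour = is_anomalous(450, threshold)  # mass-encryption burst
```

This mirrors the reference's point that the system must first learn benign bulk operations (backups, user-initiated compression) so they do not trip the alarm.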
a second computing platform including a second memory and one or more second processing devices in communication with the first memory, wherein the second memory stores the trained one or more AI algorithms that are executable by the one or more processing devices and configured to: monitor for the behaviors within the specified computing system, and in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, perform one or more actions to mitigate or eliminate the threat posed by the malware software, wherein the acceptable baseline level is dynamically assigned based at least one of current (i) malware threat levels and (ii) utilization of the specified computing system. (¶0042: Monitoring for clustering detects rapid file manipulation or conversion activity of a process. Rapid file activity generally means many file changes occur in short duration of time. The threshold is determined by the machine learning observing normal usage for a period of time (1 day or 1 week) (utilization of specified computing system) based on the fact of ransomware being unlikely to strike within that early learning period.)

Challita does not disclose: train, one or more Artificial Intelligence (AI) algorithms; a second computing platform including a second memory and one or more second processing devices in communication with the first memory, wherein the second memory stores the trained one or more AI algorithms that are executable by the one or more processing devices and configured to: monitor for the behaviors within the specified computing system, and in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, perform one or more actions to mitigate or eliminate the threat posed by the malware software.

However, Agrawal teaches train, one or more Artificial Intelligence (AI) algorithms, (¶0095: As seen in Figure 3, in various implementations, network communication anomaly manager 370 can update the network communication anomaly conditions and/or trained machine learning model 341 based on observed activity on device 305. In one example, network communication anomaly conditions 341 can be updated based on an observed increase in detected anomalies for device 305. In such instances, each detected anomaly can be stored with the communication data 340. As additional anomalies are detected, anomaly detection operation module 374 can monitor the total number of anomaly conditions detected over a period of time. Anomaly detection operation module 374 can then determine whether the number of detected anomalies satisfies a predetermined threshold.)

a second computing platform including a second memory and one or more second processing devices in communication with the first memory, wherein the second memory stores the trained one or more AI algorithms that are executable by the one or more processing devices and configured to: monitor for the behaviors within the specified computing system (¶0124-0125: As seen in Figure 7, at block 705 of method 700, an edge device or IoT device receives a trained machine learning model. At 710, the device stores the trained machine learning model in memory. The received trained machine learning model may be a retrained machine learning model, which may replace a previously received trained machine learning model in some instances. At block 715, processing logic monitors network traffic of the IoT/edge device. At block 720, processing logic inputs network communication data (network traffic data) into a trained machine learning model trained to process network communication data and output an indication as to whether or not an anomaly is detected. The machine learning model may also be trained to output a severity level of detected anomalous activity.)

and in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, (¶0090-0091: As seen in Figure 3, in various implementations, anomaly determiner 373 can make the severity determination by comparing the determined severity value for the detected anomaly to a threshold. If the severity value satisfies the threshold (e.g., meets or exceeds a threshold value associated with high severity anomalies), anomaly determiner 373 can determine that the anomaly is high severity and invoke anomaly detection operation module 374 to take appropriate action for high severity anomalies as described below.)
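The severity gating Agrawal describes is a simple comparison: a model-reported severity value is checked against a threshold, and only high-severity anomalies trigger the remedial path. A minimal sketch, with hypothetical level names and threshold values:

```python
HIGH_SEVERITY_THRESHOLD = 0.8   # illustrative value; Agrawal leaves it configurable

def classify_anomaly(severity: float) -> str:
    """Map a model-reported severity score in [0, 1] to a response tier."""
    if severity >= HIGH_SEVERITY_THRESHOLD:
        return "remediate"   # e.g., suspend process, quarantine device
    elif severity >= 0.5:
        return "alert"       # notify operators, keep monitoring
    return "log"             # record only

actions = [classify_anomaly(s) for s in (0.95, 0.6, 0.1)]
```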
perform one or more actions to mitigate or eliminate the threat posed by the malware software, (¶0126-0127: At block 730, processing logic determines whether an anomaly is detected. If so, the method continues to block 735. Otherwise, the method proceeds to block 740. At block 735, processing logic performs a remedial action in view of the detected anomalous activity. This may include performing an anomaly detection operation for the device, as discussed in detail above. In some embodiments, the action performed is based on a detected severity level of the anomalous activity.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Agrawal with regards to training one or more Artificial Intelligence algorithms and a second computing platform in the method of Challita in order to protect the device and its associated data against unauthorized access, and so the system can more readily identify anomalies in behaviors (Agrawal: ¶0014 & ¶0061).

With respect to claim 7, the combination of Challita in view of Agrawal teaches the method of claim 1 (see rejection of claim 1 above) wherein the instructions configured to determine the one or more behaviors of the computing system that occur in the presence of the malware software are further configured to determine, implementing Artificial Intelligence (AI) and Machine Learning (ML), the one or more behaviors of the computing system that occur in the presence of the malware software. (Challita: ¶0040-0041: As a pattern of massive change of individual files is potentially indicative of ransomware, as these actions are similar to actions habitually taken by ransomware once it starts operating, if files are changed massively (beyond a predetermined threshold) within a short time, the machine-learning component 22 is consulted. Clustering techniques allow the detection of large numbers of file changes in a short amount of time, in real time. Clustering algorithms that may be used, without limitation, include hierarchical clustering and centroid-based clustering. Along with the use of decoy files or data, clustering forms an additional line of defense that flags a process that is performing file changes quickly, early in its operation, in one embodiment determined by the timestamp of the event. In addition, certain operations occurring during the beginning stages of ransomware execution are monitored and used for detection.)

With respect to claim 8, the combination of Challita in view of Agrawal teaches the method of claim 1 (see rejection of claim 1 above) wherein the AI algorithms are further configured to determine the one or more actions by applying action rules to the detected behaviors. (Challita: ¶0040-0041: The machine learning observes the normal processes of the machine, including behavior that results in large changes at one time to particular files, such as compressing or encrypting files within normal use of the computer, that weren't previously encrypted or representing user content. In an embodiment, once a file change activity exceeds a threshold, the system stops monitoring and takes action by notifying the response component 6.)

With respect to claim 9, the combination of Challita in view of Agrawal teaches the method of claim 1 (see rejection of claim 1 above) wherein the first instructions are further configured to train, the one or more AI algorithms, to further monitor for one or more predetermined indicators that indicate the presence of the malware software and wherein the one or more actions are configured to be performed in further response to detection of at least one of the one or more predetermined indicators. (Challita: ¶0042-0048: Clustering monitoring works using two parameters: inter-cluster distance and critical cluster size.
The time stamps of file changes made by a process are recorded and compared; if they are close together in time (less than inter-cluster distance), then they may be designated as part of the same cluster. If a cluster reaches the critical cluster size, determined by the pre-determined criteria resulting in optimal parameters, the process is designated as effecting rapid activity. The two parameters are determined by the machine-learning component to reduce the number of false positives. To reduce false positives, however, secondary features are used. Such features include: i) measuring an increase of entropy of files, ii) observing changes in file extensions (magic numbers), and iii) observing dissimilarity of files before and after using a similarity-hash, such as sdhash or other implementations of similarity hashing known in the art. The response component 6 comprises a suspend/kill process module 30, a restore module 32 to restore files on demand, a capture encryption key module 34, and an eradicate/quarantine module 36.). 
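The two-parameter clustering check quoted above, plus the entropy secondary feature, translate almost directly into code. This is an illustrative sketch under stated assumptions: the parameter values are made up (the reference derives them via machine learning), and Shannon entropy stands in for feature (i) only.

```python
import math
from collections import Counter

def rapid_activity(timestamps: list,
                   inter_cluster_distance: float = 1.0,
                   critical_cluster_size: int = 5) -> bool:
    """Flag a process whose file-change timestamps cluster tightly in time."""
    cluster = 1
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev < inter_cluster_distance:
            cluster += 1                 # close in time: same cluster
            if cluster >= critical_cluster_size:
                return True              # cluster reached critical size
        else:
            cluster = 1                  # gap too large: start a new cluster
    return False

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted output approaches 8.0 (secondary feature i)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

burst = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # six changes in half a second
normal = [0.0, 5.0, 12.0, 30.0]          # ordinary editing cadence
flag_burst = rapid_activity(burst)
flag_normal = rapid_activity(normal)
low_entropy = shannon_entropy(b"aaaaaaaaaabbbbbbbbbb")  # two symbols: 1 bit/byte
```

Running the entropy check only on processes the clustering stage has flagged is what keeps false positives down, per the reference.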
With respect to claim 11, Challita teaches a computer-implemented method for detection and prevention of threats posed by malware software on computing system, the computer-implemented method is executable by one or more computing processor devices, (¶0017-0018: In an embodiment, an anti-ransomware method is disclosed and has the steps of operating a deception component, wherein a decoy module of the deception component places and monitors decoy segments within one or more file structures, operating a detection component wherein a machine learning module of the detection component determines a file system baseline for the computer file structure, and a behavioral analysis module analyzes a suspected ransomware, and operating a response component which responds to a suspected ransomware by an action selected from the group consisting of suspending the suspected ransomware process, restoring files from a backup, capturing an encryption key, and quarantining the suspected ransomware.)

the method comprising: determining behaviors of a computing system that occur in a presence of malware software, wherein the malware software is a ransomware software, (¶0034-0036: With reference to Figure 1, the software agent comprises three major components, a deception component 2, a detection component 4, and a response component 6. The deception component contains a decoy component 10, which comprises files and/or folders that are placed strategically throughout the computer storage, and which may be periodically updated to update a time stamp or show recent activity. As soon as certain actions are taken on the decoys, such as encryption, detection, writing or editing, the detection component is notified. The goal of decoys is to detect ransomware encryption operations, and slow down the ransomware from achieving its objectives. The purposes of the decoys, without limitation, may comprise i) alerting about ransomware-like behavior, ii) alerting about "snooping" on the computer, iii) potentially storing anti-malware components disguised as decoys, iv) slowing down the encryption process, yielding additional response time, v) deterring attackers, vi) allowing additional opportunities to recover the key, or learn how to recover files.)

wherein the behaviors include events or configurations that occur prior to an encryption of files stored in the computing system by the ransomware software in preparation for self-encryption of files including one or more of (i) disk input/output calls, (ii) memory utilization, (iii) processing unit utilization, (iv) types of calls made to operating system, (v) ports and protocols used for calls, and (vi) attempts to escalate access privileges; (¶0038: The kernel software 20 provides the ability to i) monitor and analyze all User-Mode applications and processes running, ii) monitor all operations on the file system on the machine, including read/write operations on the files, iii) having permissions and rights to respond to suspicious actions of any running process or application, and iv) perform all of the above at a fast pace (much faster than user-mode) to detect and contain suspicious attacks, before they encrypt files.)

training, one or more Artificial Intelligence (AI) algorithms, to (i) monitor for the behaviors within a specified computing system, and (ii) in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds an acceptable baseline level for the at least one of the one or more behaviors, perform one or more actions to mitigate or eliminate a threat posed by the malware software; and (¶0039-0040: The detection component also has a machine learning component 22 and a behavioral analysis component 24.
The machine-learning component determines a baseline of machine behavior to be established for a particular machine. As a pattern of massive change of individual files is potentially indicative of ransomware, as these actions are similar to actions habitually taken by ransomware once it starts operating, if files are changed massively (beyond a predetermined threshold) within a short time, the machine-learning component 22 is consulted. The component 22 determines a baseline for different files in different locations, as to normal usage, to provide a baseline for benign, normal user activity. The system must learn to identify them to avoid taking action when these benign activities are undertaken. Through machine learning, the system determines normal use thresholds for file changes and stores these thresholds for future reference. The machine learning observes the normal processes of the machine, including behavior that results in large changes at one time to particular files, such as compressing or encrypting files within normal use of the computer, that weren't previously encrypted or representing user content. In an embodiment, once a file change activity exceeds a threshold, the system stops monitoring and takes action by notifying the response component 6.)

monitoring, by the one or more AI algorithms, for the behaviors within the specified computing system, and in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, performing, by the one or more AI algorithms, one or more actions to mitigate or eliminate the threat posed by the malware software, wherein the acceptable baseline level is dynamically assigned based at least one of current (i) malware threat levels and (ii) utilization of the specified computing system. (¶0042: Monitoring for clustering detects rapid file manipulation or conversion activity of a process. Rapid file activity generally means many file changes occur in short duration of time. The threshold is determined by the machine learning observing normal usage for a period of time (1 day or 1 week) (utilization of specified computing system) based on the fact of ransomware being unlikely to strike within that early learning period.)

Challita does not disclose: training, one or more Artificial Intelligence (AI) algorithms; monitoring, by the one or more AI algorithms, for the behaviors within the specified computing system, and in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, performing, by the one or more AI algorithms, one or more actions to mitigate or eliminate the threat posed by the malware software.

However, Agrawal teaches training, one or more Artificial Intelligence (AI) algorithms, (¶0095: As seen in Figure 3, in various implementations, network communication anomaly manager 370 can update the network communication anomaly conditions and/or trained machine learning model 341 based on observed activity on device 305. In one example, network communication anomaly conditions 341 can be updated based on an observed increase in detected anomalies for device 305. In such instances, each detected anomaly can be stored with the communication data 340. As additional anomalies are detected, anomaly detection operation module 374 can monitor the total number of anomaly conditions detected over a period of time. Anomaly detection operation module 374 can then determine whether the number of detected anomalies satisfies a predetermined threshold.)

monitoring, by the one or more AI algorithms, for the behaviors within the specified computing system, and in response to detecting at least one of the one or more behaviors and (¶0124-0125: As seen in Figure 7, at block 705 of method 700, an edge device or IoT device receives a trained machine learning model. At 710, the device stores the trained machine learning model in memory. The received trained machine learning model may be a retrained machine learning model, which may replace a previously received trained machine learning model in some instances. At block 715, processing logic monitors network traffic of the IoT/edge device. At block 720, processing logic inputs network communication data (network traffic data) into a trained machine learning model trained to process network communication data and output an indication as to whether or not an anomaly is detected. The machine learning model may also be trained to output a severity level of detected anomalous activity.)

determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, (¶0090-0091: As seen in Figure 3, in various implementations, anomaly determiner 373 can make the severity determination by comparing the determined severity value for the detected anomaly to a threshold. If the severity value satisfies the threshold (e.g., meets or exceeds a threshold value associated with high severity anomalies), anomaly determiner 373 can determine that the anomaly is high severity and invoke anomaly detection operation module 374 to take appropriate action for high severity anomalies as described below.)

performing, by the one or more AI algorithms, one or more actions to mitigate or eliminate the threat posed by the malware software, (¶0126-0127: At block 730, processing logic determines whether an anomaly is detected. If so, the method continues to block 735. Otherwise, the method proceeds to block 740. At block 735, processing logic performs a remedial action in view of the detected anomalous activity. This may include performing an anomaly detection operation for the device, as discussed in detail above. In some embodiments, the action performed is based on a detected severity level of the anomalous activity.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Agrawal with regards to training one or more Artificial Intelligence algorithms and a second computing platform in the method of Challita in order to protect the device and its associated data against unauthorized access, and so the system can more readily identify anomalies in behaviors (Agrawal: ¶0014 & ¶0061).

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Challita et al. (US PGPub No. 20180248896-A1) in view of Agrawal et al. (US PGPub No. 20220294715-A1) and Rao et al. (US PGPub No. 20160203221-A1).

With respect to claim 2, the combination of Challita in view of Agrawal teaches the method of claim 1 (see rejection of claim 1 above) but does not disclose wherein the system is operating system-agnostic. However, Rao teaches wherein the system is operating system-agnostic. (¶0211: The machine learning model framework is kept system-agnostic to keep the document and query always representable in JSON, so the core library and its functions can be used. This way the feature extraction, scoring and explanation can be utilized with third party off-the-shelf commodity technology.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the known teachings of Rao with regards to the system being operating system-agnostic in the method of Challita in view of Agrawal in order to lessen the need to overhaul, because the system does not rely on third-party components (components that are outside the system) which may become prone to changes over time.

With respect to claim 12, the combination of Challita in view of Agrawal teaches the method of claim 11 (see rejection of claim 11 above) but does not disclose wherein the method is operating system-agnostic. However, Rao teaches wherein the method is operating system-agnostic. (¶0211: The machine learning model framework is kept system-agnostic to keep the document and query always representable in JSON, so the core library and its functions can be used. This way the feature extraction, scoring and explanation can be utilized with third party off-the-shelf commodity technology.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the known teachings of Rao with regards to the method being operating system-agnostic in the method of Challita in view of Agrawal in order to lessen the need to overhaul, because the system does not rely on third-party components (components that are outside the system) which may become prone to changes over time.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Challita et al. (US PGPub No. 20180248896-A1) in view of Agrawal et al. (US PGPub No. 20220294715-A1) and Yadav et al. (US PGPub No. 20160359695-A1).
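Rao's system-agnostic point is that a feature document kept as plain JSON round-trips identically on any operating system. A hedged illustration (the field names here are hypothetical, not from any of the cited references):

```python
import json

# A behavior-feature document for one suspect process (hypothetical fields).
features = {
    "process": "svc_host.tmp",
    "file_changes_per_min": 412,
    "entropy_delta": 3.7,
    "privilege_escalation": True,
}

# Any component, on any OS, can serialize and restore the same document.
wire = json.dumps(features, sort_keys=True)
restored = json.loads(wire)
```

Because the wire format carries no OS-specific types, extraction, scoring, and explanation stages can run on different platforms or third-party tooling without translation layers.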
With respect to claim 6, the combination of Challita in view of Agrawal teaches the method of claim 1 (see rejection of claim 1 above) but does not disclose wherein the instructions configured to determine the one or more behaviors of the computing system that occur in the presence of the malware software are further configured to determine a pattern of behaviors that occur in the presence of the malware software and instructions configured to train, the AI algorithms, to monitor for the behaviors are further configured to train, the one or more AI algorithms, to monitor for the pattern of behaviors.

However, Yadav teaches wherein the instructions configured to determine the one or more behaviors of the computing system that occur in the presence of the malware software are further configured to determine a pattern of behaviors that occur in the presence of the malware software and instructions configured to train, the AI algorithms, to monitor for the behaviors are further configured to train, the one or more AI algorithms, to monitor for the pattern of behaviors. (¶0048-0049: As seen in Figure 1, in certain embodiments, the analytics module 30 may use machine learning techniques to identify security threats to a network using the anomaly detection module 34. Since malware is constantly evolving and changing, machine learning may be used to dynamically update models that are used to identify malicious traffic patterns. Machine learning algorithms are used to provide for the identification of anomalies within the network traffic based on dynamic modeling of network behavior. The anomaly detection module 34 may be used to identify observations which differ from other examples in a dataset. For example, if a training set of example data with known outlier labels exists, supervised anomaly detection techniques may be used. Supervised anomaly detection techniques utilize data sets that have been labeled as "normal" and "abnormal" and train a classifier.)
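The supervised approach Yadav describes, training on examples labeled "normal" and "abnormal", can be sketched with a toy classifier. A nearest-centroid rule stands in here for the unspecified classifier, and the two features and their values are invented for illustration.

```python
import math

def centroid(points: list) -> tuple:
    """Mean point of a set of 2-D feature vectors."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(normal, abnormal) -> dict:
    """'Training' reduces to one centroid per labeled class."""
    return {"normal": centroid(normal), "abnormal": centroid(abnormal)}

def classify(model: dict, point: tuple) -> str:
    """Label a new observation by its nearest class centroid."""
    return min(model, key=lambda label: math.dist(model[label], point))

# Hypothetical features: (file changes per minute, mean entropy of written data)
normal_obs = [(4, 3.1), (6, 2.8), (5, 3.4)]
abnormal_obs = [(300, 7.9), (420, 7.8), (350, 8.0)]
model = train(normal_obs, abnormal_obs)
verdict = classify(model, (380, 7.7))   # ransomware-like observation
```

A production system would retrain as malware evolves, which is the reference's stated reason for preferring learned models over static signatures.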
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known teachings of Yadav regarding training AI algorithms to monitor for patterns of behaviors to the method of Challita in view of Agrawal in order to provide comprehensive and pervasive information about behavior over time and to identify activity potentially indicative of malicious behavior (Yadav ¶0017-0019).

With respect to claim 15, the combination of Challita in view of Agrawal teaches the method of claim 11 (see rejection of claim 11 above) but does not disclose wherein determining the one or more behaviors of the computing system that occur in the presence of the malware software further includes determining a pattern of behaviors that occur in the presence of the malware software, and wherein training the one or more AI algorithms to monitor for the behaviors further includes training the one or more AI algorithms to monitor for the pattern of behaviors. However, Yadav teaches these limitations. (¶0048-0049: As seen in Figure 1, in certain embodiments, the analytics module 30 may use machine learning techniques to identify security threats to a network using the anomaly detection module 34. Since malware is constantly evolving and changing, machine learning may be used to dynamically update models that are used to identify malicious traffic patterns. Machine learning algorithms are used to provide for the identification of anomalies within the network traffic based on dynamic modeling of network behavior.
The anomaly detection module 34 may be used to identify observations which differ from other examples in a dataset. For example, if a training set of example data with known outlier labels exists, supervised anomaly detection techniques may be used. Supervised anomaly detection techniques utilize data sets that have been labeled as “normal” and “abnormal” and train a classifier.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known teachings of Yadav regarding training AI algorithms to monitor for patterns of behaviors to the method of Challita in view of Agrawal in order to provide comprehensive and pervasive information about behavior over time and to identify activity potentially indicative of malicious behavior (Yadav ¶0017-0019).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Challita et al. (US PGPub No. 20180248896-A1) in view of Agrawal et al. (US PGPub No. 20220294715-A1) and Ross et al. (US PGPub No. 20170230477-A1).

With respect to claim 10, the combination of Challita in view of Agrawal teaches the method of claim 1 (see rejection of claim 1 above) but does not disclose wherein the instructions configured to determine one or more behaviors of a computing system that occur in a presence of malware software are further configured to analyze, using Machine Learning (ML), the one or more behaviors based on changes to at least one of (i) hardware and/or software configuration within the computing system, (ii) service packs installed on the computing system, and (iii) operating system revisions.
However, Ross teaches these limitations. (¶0019: Interpretation of statistical profiles and searching for correlation and temporal patterns in a series of network traffic events is implemented in some embodiments via machine learning methods. Using a recurrent neural network structure as a means of interpreting the network traffic can identify temporal patterns based on work shifts, corporate policies, scheduled activities, and other sources of regularity in the timing of network traffic. Using generative models such as Boltzmann machines to classify high level features of network traffic allows comparing observed patterns to the expected probability distribution generated by the Boltzmann machine. Examples of such high level concepts for explaining a new cause could include installation of new software (a change to the software configuration within the computing system), changes in user habits and behavior, reassignment of resources within an organization, an attempted attack of network attached resources, or any other cause which might be identified by identifying temporal and spatial patterns in network traffic.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known teachings of Ross regarding Machine Learning detecting such changes to the method of Challita in view of Agrawal in order to enable detecting anomalous behaviors (Ross ¶0004).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Challita et al. (US PGPub No. 20180248896-A1) in view of Antoine et al.
(US PGPub No. 20200356882-A1) and Agrawal et al. (US PGPub No. 20220294715-A1).

With respect to claim 16, Challita teaches a computer program product comprising a non-transitory computer-readable medium comprising: a first set of codes for causing a computer to determine behaviors of a computing system that occur in a presence of malware software, wherein the malware software is a ransomware software (Abstract: An anti-ransomware system for a computer system has a deception component comprising a decoy module configured to place decoy segments within one or more file systems, a detection component comprising a behavioral analysis module configured to analyze the behavior of a suspected ransomware, and a response component.), wherein the behaviors include events or configurations that occur prior to an encryption of files stored in the computing system by the ransomware software in preparation for self-encryption of files, including one or more of (i) disk input/output calls, (ii) memory utilization, (iii) processing unit utilization, (iv) types of calls made to the operating system, (v) ports and protocols used for calls, and (vi) attempts to escalate access privileges (¶0034-0036: With reference to Figure 1, the software agent comprises three major components, a deception component 2, a detection component 4, and a response component 6. The deception component contains a decoy component 10, which comprises files and/or folders that are placed strategically throughout the computer storage, and which may be periodically updated to update a time stamp or show recent activity. As soon as certain actions are taken on the decoys, such as encryption, detection, writing or editing, the detection component is notified. The goal of decoys is to detect ransomware encryption operations, and slow down the ransomware from achieving its objectives.
The purposes of the decoys, without limitation, may comprise i) alerting about ransomware-like behavior, ii) alerting about “snooping” on the computer, iii) potentially storing anti-malware components disguised as decoys, iv) slowing down the encryption process, yielding additional response time, v) deterring attackers, vi) allowing additional opportunities to recover the key, or learn how to recover files.); a second set of codes for causing a computer to train one or more Artificial Intelligence (AI) algorithms to (i) monitor for the behaviors within a specified computing system, and (ii) in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds an acceptable baseline level for the at least one of the one or more behaviors, perform one or more actions to mitigate or eliminate a threat posed by the malware software (¶0039-0040: The detection component also has a machine learning component 22 and a behavioral analysis component 24. The machine-learning component allows a baseline of machine behavior, for a particular machine, to be established. As a pattern of massive change of individual files is potentially indicative of ransomware, as these actions are similar to actions habitually taken by ransomware once it starts operating, if files are changed massively (beyond a predetermined threshold) within a short time, the machine-learning component 22 is consulted. The component 22 determines a baseline for different files in different locations, as to normal usage, to provide a baseline for benign, normal user activity. The system must learn to identify them to avoid taking action when these benign activities are undertaken. Through machine learning, the system determines normal use thresholds for file changes and stores these thresholds for future reference.
The machine learning observes the normal processes of the machine, including behavior that results in large changes at one time to particular files, such as compressing or encrypting files within normal use of the computer, that weren't previously encrypted or representing user content. In an embodiment, once a file change activity exceeds a threshold, the system stops monitoring and takes action by notifying the response component 6.); a third set of codes for causing a computer to monitor, by the one or more AI algorithms, for the behaviors within the specified computing system; and a fourth set of codes for causing a computer to, in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, perform, by the one or more AI algorithms, one or more actions to mitigate or eliminate the threat posed by the malware software, wherein the acceptable baseline level is dynamically assigned based on at least one of current (i) malware threat levels and (ii) utilization of the specified computing system (¶0042: Monitoring for clustering detects rapid file manipulation or conversion activity of a process. Rapid file activity generally means many file changes occur in a short duration of time. The threshold is determined by the machine learning observing normal usage for a period of time (1 day or 1 week) (utilization of the specified computing system), based on the fact of ransomware being unlikely to strike within that early learning period.)
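The learned normal-use threshold Challita describes (¶0039-0042 above), together with the claim's dynamically assigned baseline, can be sketched roughly as follows. The specific formulas (mean plus three standard deviations, linear threat/utilization scaling) are illustrative assumptions, not taken from the references:

```python
# Sketch: learn a normal-use file-change threshold during an observation
# period, then adjust the acceptable baseline for current threat level and
# system utilization before flagging activity that exceeds it.
from statistics import mean, stdev

def learn_threshold(observed_rates):
    """observed_rates: file changes per interval seen during the learning
    period (e.g., 1 day or 1 week of benign usage)."""
    return mean(observed_rates) + 3 * stdev(observed_rates)

def dynamic_baseline(learned, threat_level, utilization):
    """Tighten the acceptable baseline as the malware threat level rises;
    relax it as heavier legitimate utilization produces more activity.
    Both inputs are assumed normalized to [0, 1]."""
    return learned * (1.0 - 0.5 * threat_level) * (1.0 + 0.5 * utilization)

def exceeds_baseline(rate, learned, threat_level=0.0, utilization=0.0):
    """True when current activity exceeds the dynamically adjusted baseline."""
    return rate > dynamic_baseline(learned, threat_level, utilization)

normal_week = [2, 4, 3, 5, 4, 3, 2]     # benign file changes per interval
learned = learn_threshold(normal_week)  # roughly 6.6 for this sample data
```

A ransomware-like burst (say, 150 changes in one interval) then trips the detector, while ordinary activity does not; raising the threat level lowers the bar at which mitigation is triggered.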
Challita does not disclose: a computer program product comprising a non-transitory computer-readable medium comprising a first set of codes; a second set of codes for causing a computer; a third set of codes for causing a computer to monitor, by the one or more AI algorithms, for the behaviors within the specified computing system; and a fourth set of codes for causing a computer to, in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, perform, by the one or more AI algorithms, one or more actions to mitigate or eliminate the threat posed by the malware software. However, Antoine teaches a computer program product comprising a non-transitory computer-readable medium comprising a first set of codes (¶0018: A computer program product comprising a non-transitory computer-readable medium defines second embodiments of the invention. The computer-readable medium includes a first set of codes for causing a computer), a second set of codes for causing a computer (¶0018: The computer-readable medium additionally includes a second set of codes for causing a computer), a third set of codes for causing a computer (¶0018: In addition, the computer-readable medium includes a third set of codes for causing a computer), and a fourth set of codes for causing a computer (¶0018: a fourth set of codes for causing a computer to determine which of the one or more resource interaction). The prior art showed a method of monitoring and using machine learning to detect the presence of malware software within a system.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the sets of codes taught in Antoine for Challita's system for the predictable result of monitoring and using machine learning to detect the presence of malware software within a system.

Challita in view of Antoine does not disclose: a third set of codes for causing a computer to monitor, by the one or more AI algorithms, for the behaviors within the specified computing system; and a fourth set of codes for causing a computer to, in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, perform, by the one or more AI algorithms, one or more actions to mitigate or eliminate the threat posed by the malware software. However, Agrawal teaches a third set of codes for causing a computer to monitor, by the one or more AI algorithms, for the behaviors within the specified computing system, and a fourth set of codes for causing a computer to (¶0124-0125: As seen in Figure 7, at block 705 of method 700, an edge device or IoT device receives a trained machine learning model. At 710, the device stores the trained machine learning model in memory. The received trained machine learning model may be a retrained machine learning model, which may replace a previously received trained machine learning model in some instances. At block 715, processing logic monitors network traffic of the IoT/edge device. At block 720, processing logic inputs network communication data (network traffic data) into a trained machine learning model trained to process network communication data and output an indication as to whether or not an anomaly is detected. The machine learning model may also be trained to output a severity level of detected anomalous activity.),
in response to detecting at least one of the one or more behaviors and determining that the at least one of the one or more behaviors exceeds the acceptable baseline level for the at least one of the one or more behaviors, perform, by the one or more AI algorithms (¶0090-0091: As seen in Figure 3, in various implementations, anomaly determiner 373 can make the severity determination by comparing the determined severity value for the detected anomaly to a threshold. If the severity value satisfies the threshold (e.g., meets or exceeds a threshold value associated with high severity anomalies), anomaly determiner 373 can determine that the anomaly is high severity and invoke anomaly detection operation module 374 to take appropriate action for high severity anomalies as described below.), one or more actions to mitigate or eliminate the threat posed by the malware software (¶0126-0127: At block 730, processing logic determines whether an anomaly is detected. If so, the method continues to block 735. Otherwise, the method proceeds to block 740. At block 735, processing logic performs a remedial action in view of the detected anomalous activity. This may include performing an anomaly detection operation for the device, as discussed in detail above. In some embodiments, the action performed is based on a detected severity level of the anomalous activity.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Agrawal regarding training one or more Artificial Intelligence algorithms and a second computing platform to the method of Challita in view of Antoine in order to protect the device and its associated data against unauthorized access, so that the system can more readily identify anomalies in behaviors (Agrawal ¶0014 & ¶0061).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Challita et al. (US PGPub No. 20180248896-A1) in view of Antoine et al. (US PGPub No.
20200356882-A1), Agrawal et al. (US PGPub No. 20220294715-A1), and Rao (US PGPub No. 20160203221-A1).

With respect to claim 17, the combination of Challita in view of Antoine and Agrawal teaches the method of claim 16 (see rejection of claim 16 above) but does not disclose wherein the sets of codes are operating system-agnostic. However, Rao teaches wherein the sets of codes are operating system-agnostic. (¶0211: The machine learning model framework is kept system-agnostic to keep the document and query always representable in JSON so the core library and its functions can be used.)
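Rao's point in ¶0211 is that keeping documents and queries representable in JSON makes the feature-extraction step independent of any particular operating system: the same code consumes the same platform-neutral document everywhere. A minimal sketch, with invented field names (nothing here is taken from Rao beyond the JSON idea):

```python
import json

# Sketch: serialize a behavior event to JSON so feature extraction depends
# only on the document, not on the operating system that produced it.
# The field names below are illustrative assumptions.

def to_json(event):
    """Serialize a behavior event to a platform-neutral JSON document."""
    return json.dumps(event, sort_keys=True)

def extract_features(doc):
    """Feature extraction driven entirely by the JSON document."""
    event = json.loads(doc)
    return {
        "io_rate": event["disk_io_calls"] / max(event["interval_s"], 1),
        "privileged": event["privilege_escalations"] > 0,
    }

doc = to_json({"disk_io_calls": 900, "interval_s": 60, "privilege_escalations": 1})
```

Because the document is plain JSON, the same `extract_features` runs unchanged on any host, which is the OS-agnostic property the rejection relies on.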

Prosecution Timeline

Jun 10, 2022
Application Filed
May 15, 2024
Non-Final Rejection — §103
Aug 16, 2024
Response Filed
Sep 25, 2024
Final Rejection — §103
Dec 23, 2024
Request for Continued Examination
Jan 08, 2025
Response after Non-Final Action
Mar 04, 2025
Non-Final Rejection — §103
Jun 06, 2025
Response Filed
Aug 26, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506662
SERVICE PROVISION METHOD, DEVICE, AND STORAGE MEDIUM
2y 5m to grant; granted Dec 23, 2025
Patent 12505223
System & Method for Detecting Vulnerabilities in Cloud-Native Web Applications
2y 5m to grant; granted Dec 23, 2025
Patent 12491837
ELECTRONIC SIGNAL BASED AUTHENTICATION SYSTEM AND METHOD THEREOF
2y 5m to grant; granted Dec 09, 2025
Patent 12411931
FUEL DISPENSER AUTHORIZATION AND CONTROL
2y 5m to grant; granted Sep 09, 2025
Patent 12399979
PROVISIONING A SECURITY COMPONENT FROM A CLOUD HOST TO A GUEST VIRTUAL RESOURCE UNIT
2y 5m to grant; granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
81%
Grant Probability
94%
With Interview (+12.8%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.