Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
In the remarks filed on 02/17/2026, the applicant amended claims 1, 11, and 20. No claims were added.
With respect to claim objections:
Applicant’s claim amendments and remarks filed on 02/17/2026 have been fully considered. They do not overcome the claim objection regarding the “honeypot environment”/“honeypot trap environment” inconsistency, but they do overcome the other claim objection presented in the non-final office action filed 11/21/2025.
With respect to 35 U.S.C. §102 and 103 rejections:
Applicant's arguments filed on 02/17/2026 have been received and entered.
Applicant's arguments with respect to the newly amended independent claims (see Applicant's Arguments, pages 8-13) regarding the rejections of independent claims 1, 11, and 20 have been fully considered.
Applicant argues that Lin (US 11777988 B1) does not teach (i) identifying “specific combinations of attributes,” (ii) associating such combinations with a likelihood of an unauthorized intrusion attempt, and (iii) assigning a “risk score” reflecting that likelihood. Applicant further argues that Lin’s Poisson modeling and “level rankings” relate only to whether an individual connection event is unusually large/abnormal. Examiner understands the applicant’s perspective; however, the arguments are not persuasive. Under the broadest reasonable interpretation (BRI) and in light of the specification, the claimed specific combinations of attributes include patterns or relationships among honeypot activity features derived from captured activity data. The instant specification itself describes that intrusion analysis involves entities, events, and relationships between them, as well as patterns, sequences, and combinations of attributes representing intrusion behavior, [0033]-[0034]. The specification further explains that a security indication may include “any combination of attributes, any sequence of actions or events, any behavioral pattern,” [0033]. Thus, under BRI, the claimed “specific combinations of attributes” include detected patterns or conditions derived from honeypot activity data indicating suspicious behavior. Lin teaches analyzing honeypot activity with statistical modeling to identify anomalous activity patterns. Lin discloses modeling honeypot connection activity using a Poisson distribution to evaluate observed connection activity relative to expected behavior (see Col 4, lines 50-67 and Col 5, lines 1-3), and producing anomalous events and level rankings that rank connection events based on abnormality (see Col 5, lines 52-61). These rankings reflect degrees of abnormality of honeypot activity. In the context of honeypot intrusion monitoring, abnormal activity levels indicate a greater likelihood that the activity corresponds to malicious or unauthorized behavior.
Moreover, Lin’s anomaly ranking corresponds to the claimed risk score reflecting the likelihood of an unauthorized intrusion, because the ranking quantifies the degree to which observed honeypot activity deviates from normal behavior. The claims do not require the score to be labeled a “risk score” or to be computed using a particular formula. Under BRI, a ranking or anomaly score that reflects the likelihood of suspicious activity reasonably corresponds to the claimed risk score. Thus, Lin teaches the claimed analytics identifying suspicious activity patterns and assigning scores indicating the likelihood of intrusion.
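For illustration only (this sketch is not part of the cited record; the event names and expected rate are hypothetical), the Poisson-based level ranking described above can be modeled as scoring each observed connection count by how improbable it is under the expected rate, then ranking events from most to least abnormal:

```python
import math

def poisson_tail(k, lam):
    # P(X >= k) for X ~ Poisson(lam): the chance of observing k or more
    # connection events in a window where lam are expected on average
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def level_ranking(observed, expected_rate):
    # Score each event by -log of its tail probability, so unusually
    # large connection counts (low probability) receive high scores,
    # then rank events from most to least abnormal
    scored = {e: -math.log(max(poisson_tail(n, expected_rate), 1e-300))
              for e, n in observed.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical honeypot window: 4 connections expected on average;
# 30 connections to port 22 is far less probable than 5 to port 80
ranking = level_ranking({"port_22": 30, "port_80": 5, "app_db": 4}, 4.0)
```

A count far above the expected rate yields a tiny tail probability and therefore a high abnormality score, which is the sense in which such a ranking reflects the likelihood of unauthorized activity.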
Applicant further argues that Aleks (US 20210037040 A1) does not disclose identifying such combinations based on machine learning classification. This argument is not persuasive. Aleks teaches applying machine learning techniques to generate detection rules based on malicious data patterns. Aleks further discloses a machine learning testing module including a detection rule generation system, [0052]; machine learning algorithms that analyze malicious data to create detection rules, [0054]; and generating an alert if trigger conditions are met, [0054]. Thus, Aleks teaches applying machine learning to identify malicious patterns in security data and generate rules responsive to those patterns, which corresponds to the claimed machine learning classification of suspicious activity patterns. Combining Lin’s anomaly detection analytics with Aleks’s machine learning rule generation produces a system in which machine learning models operate on suspicious activity patterns derived from honeypot data, reasonably teaching the claimed limitation.
Applicant argues that Aleks (US 20210037040 A1) only generates detection rules for attack detection and does not disclose rules that map combinations of attributes to corresponding actions. This argument is not persuasive. Aleks explicitly teaches rules that trigger when certain conditions are satisfied and perform an action in response, namely generating an alert when the rule conditions are met, [0054]. A rule that includes conditions describing detected malicious patterns and a resulting response (an alert) corresponds to a rule mapping detected attribute patterns to actions. Moreover, the instant specification states that a rule includes a conditional part defining trigger conditions and an action part defining actions performed when the rule is triggered, [see instant application spec 0016]. The specification explicitly lists issuing an alert as one example of such an action, [0017]. Therefore, Aleks’s disclosure that detection rules generate alerts when trigger conditions are met corresponds directly to the claimed mapping of detected attribute combinations to actions. Moreover, the claims recite first and second actions but do not require those actions to be different from each other. Multiple rules triggering the same action (i.e., alerting) still fall within the scope of the claims.
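As a purely illustrative sketch (not drawn from either reference; the attribute names and actions are hypothetical), the conditional-part/action-part rule structure described in spec [0016]-[0017] can be modeled as a rule whose conditions are a specific combination of attributes and whose action is performed when that combination is detected:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SecurityRule:
    # Conditional part: the specific combination of attributes that
    # triggers the rule (spec [0016])
    conditions: Dict[str, object]
    # Action part: the response performed when the rule is triggered;
    # issuing an alert is one example of such an action (spec [0017])
    action: Callable[[Dict[str, object]], str]

    def matches(self, event: Dict[str, object]) -> bool:
        # The rule fires only when every attribute in the combination
        # is present in the observed event
        return all(event.get(k) == v for k, v in self.conditions.items())

def issue_alert(event):
    return "ALERT: port=%s" % event.get("port")

def block_source(event):
    return "BLOCK: src=%s" % event.get("src_ip")

def apply_rules(rules: List[SecurityRule], event: Dict[str, object]) -> List[str]:
    # Perform each rule's action when its attribute combination is detected
    return [rule.action(event) for rule in rules if rule.matches(event)]

# A first rule mapping one attribute combination to an alerting action,
# and a second rule mapping a different combination to a blocking action
rules = [SecurityRule({"port": 22, "auth_failures": "high"}, issue_alert),
         SecurityRule({"port": 445, "protocol": "smb"}, block_source)]
actions = apply_rules(rules, {"port": 22, "auth_failures": "high", "src_ip": "10.0.0.5"})
```

Nothing in this structure requires the first and second actions to differ; two rules whose action part is the same alert still map distinct attribute combinations to actions.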
Applicant argues that the combination of Lin (US 11777988 B1) and Aleks (US 20210037040 A1) presented in the office action lacks motivation and does not explain how Aleks would be applied to Lin’s honeypot data structures or how Aleks’s detection rules would modify or replace Lin’s anomaly scoring. The argument is not persuasive. Both references address security analytics and automated detection of malicious activity. Lin analyzes honeypot activity to identify anomalous patterns that may indicate attacks. Aleks applies machine learning to generate detection rules for identifying malicious behavior based on observed data patterns. It would have been obvious to a POSITA that combining Lin’s honeypot-based anomaly detection with Aleks’s machine-learning-based rule generation would improve security monitoring by automatically generating detection rules based on suspicious honeypot activity patterns. Thus, the combination of Lin and Aleks addresses the same problem addressed by the instant application (i.e., automated identification of intrusion behavior and generation of responsive security rules). Hence, the rejection of claims 1, 11, and 20 under 35 U.S.C. § 103 over Lin in view of Aleks is maintained.
Claim Objections
Claims 1, 11, and 20 are objected to because of the following informalities: in lines 10, 13, and 12, respectively, “said honeypot environment” should read “said honeypot trap environment”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lin (US 11777988 B1) in further view of Aleks (US 20210037040 A1).
Regarding claim 1, Lin teaches A computer-implemented method comprising:
automatically monitoring a honeypot trap environment, to capture activity data within said honeypot trap environment, wherein said honeypot trap environment comprises a plurality of software and hardware resources that are intended to attract attempts at unauthorized use of said honeypot trap environment (Lin, Fig 1A, an anomaly detection and remediation (ADR) server 125 monitors the honeypots of honeypots 110(1)-(N) in honeypot cloud network 105 receiving potentially malicious connections 120(1)-(N) from potentially malicious computing devices 115(1)-(N) (i.e., honeypot trap environment), (see Col 3, lines 37-45) Honeypots are physical or virtual computing systems implemented in a network as decoys to lure malicious actors (e.g., hackers) in an attempt to detect, deflect, and/or study hacking attempts, (see Col 1, lines 11-14) and Fig 1B, Honeypot data 130 (i.e., the captured activity data) includes data that is: (a) associated with one or more honeypots 110(1)-(N) and (b) indicative of the interaction between one or more malicious computing devices 115(1)-(N) (i.e., a plurality of software and hardware resources that are intended to attract attempts at unauthorized use) and one or more honeypots 110(1)-(N), over a period of time. Honeypot data 130 includes at least port connection data 160, application connection data 165, and connection timestamp data 170 (e.g., a timestamp for each port connection event and/or application connection event, for each honeypot), (see Col 4, lines 30-39)) [Examiner interprets that monitoring the honeypots that interact with malicious computing devices to capture activity data (i.e., Honeypot data 130) consisting of port connection data, application connection data, etc.,
as automatically monitoring a honeypot trap environment, to capture activity data within said honeypot trap environment, wherein said honeypot trap environment comprises a plurality of software and hardware resources that are intended to attract attempts at unauthorized use of said honeypot trap environment];
automatically extracting, from said captured activity data, a plurality of attributes representing entities, events, and relations between said entities and events (Lin, Fig 1C, interaction manager 135 prepares and processes honeypot data for probabilistic statistical processing by extracting at least application events 175(1)-(N) and port events 180(1)-(N) (i.e., Entities and Events) from application connection data 165 and port connection data 160 (i.e. a plurality of attributes), respectively, based on the timestamps that are part of connection timestamp data 170. Interaction manager 135 maintains an association between ports and/or applications connected to and the duration and number of those connections to determine the average number of connections to a port and/or an application in a honeypot over a defined period of time (i.e. relations), (see Col 4, lines 45-59)) [Examiner interprets that extracting application events and port events from application connection data and port connection data (i.e., captured activity data) for further analysis as automatically extracting, from said captured activity data, a plurality of attributes representing entities, events, and relations between said entities and events];
automatically applying an analytics suite to identify specific combinations of said attributes of the plurality of attributes as representing a likelihood of being associated with an unauthorized intrusion attempt into said honeypot environment, (Lin, FIG. 2, discrete distribution engine 140 identifies and remediates anomalies in honeypots by applying one or more discrete probability functions (i.e., an analytics suite) that inform the probability of particular honeypot connection, interaction, and/or activity events (i.e., specific combinations of said attributes). A Poisson distribution is used to model the foregoing concerns related to connections to honeypots (e.g., connection data) to identify and remediate anomalous levels of honeypot activity, (see Col 4, lines 50-67 and Col 5, lines 1-3)) [Examiner interprets that applying the discrete probability functions to identify the probability of particular honeypot connection, interaction, and/or activity events being anomalous activity as automatically applying an analytics suite to identify specific combinations of said attributes as representing a likelihood of being associated with an unauthorized intrusion attempt into said honeypot environment];
automatically assigning a risk score to respective ones of said specific combinations, wherein said risk score reflects said likelihood of being associated with the unauthorized intrusion attempt into said honeypot trap environment (Lin, Discrete distribution engine 140 generates one or more anomalous events 145(1)-(N) which include level rankings 225(1)-(N) (i.e., a risk score) of particular connection events of concern—e.g., connection events (i.e., specific combinations) that have been deemed anomalous or abnormal because of a low probability (of their likelihood) output by probability distribution function 205. Level rankings 225(1)-(N) rank connection events based on whether the connection events are unusually large and thus abnormal (e.g., high or low), (see Col 5, lines 52-61)) [Examiner interprets that assigning a level ranking, which ranks connection events of connection data based on the likelihood of those events being abnormal or anomalous, as automatically assigning a risk score to each of said specific combinations, wherein said risk score reflects said likelihood of being associated with an unauthorized intrusion attempt into said honeypot environment]; and
Although Lin conceptually teaches assigning or computing probabilities, identifying abnormality, ranking events by level and scale, and generating or modifying security operations/workflows/rules in response to detected anomalous honeypot activity, including modifying intrusion detection system rules and triggering alerts, Lin does not explicitly teach:
wherein the specific combinations are identified as representing the likelihood of being associated with the unauthorized intrusion attempt based on a machine learning classification; automatically generating, by a machine learning model, a first security rule mapping a first specific combination of attributes to a first action and a second security rule mapping a second specific combination of attributes to a second action, for an intrusion detection and prevention system, wherein the intrusion detection and prevention system performs the first action in response to the first specific combination of attributes being detected and performs the second action in response to the second specific combination of attributes being detected.
However, Aleks teaches:
wherein the specific combinations are identified as representing the likelihood of being associated with the unauthorized intrusion attempt based on a machine learning classification (Aleks, the machine learning testing module 105 can automatically generate new attack configurations and security configurations performing security tests. The machine learning testing module 105 uses machine learning to automate the generation of attacks and the rules for detecting such attacks. For example, the machine learning testing module 105 can use the generative capabilities of generative adversarial networks (GANs) to produce artificially generated attacks to evade detection and as a result, enhance the detection of the security system once additional policies are created to detect the artificial attacks, [0049] The detection rule generation system 320 of the machine learning testing module 105 can use a similar approach to generate security policies and controls to detect and prevent attacks (including new artificial attacks). In some implementations, the detection rule generation system 320 utilizes machine learning algorithms that parse malicious data 322 (for example, data received or collected by a target endpoint 120 as part of an attack) as well as detection methods (such as those described by frameworks such as MITRE and OPENC2) to create detection rules. The detection rule network 324 can include machine learning models using algorithms such as GANs, Ensemble Methods, and Regularization to transform malicious data into a rule for detecting the malicious data, [0054] attack data can be input into machine learning models of the detection rule generation system 320 to generate security rules for detecting attack techniques, [0055]) [Examiner interprets that system using machine learning models (GANs, ensemble methods) to analyze combination of malicious data features (i.e. 
attributes) collected from telemetry and endpoint logs, parsing malicious data, determining what patterns represent attacks, and generating rules to detect that combinations of telemetry attributes are likely associated with attacks, as the limitation above];
automatically generating, by a machine learning model, a first security rule mapping a first specific combination of attributes to a first action and a second security rule mapping a second specific combination of attributes to a second action, for an intrusion detection and prevention system, wherein the intrusion detection and prevention system performs the first action in response to the first specific combination of attributes being detected and performs the second action in response to the second specific combination of attributes being detected (Aleks, The security stack 130 can include facilities for the prevention, detection, and mitigation of attacks, [0017] A SIEM is a security information and event management system (i.e., the intrusion detection and prevention system) and may include a set of configuration rules to prevent and detect malicious actions at an endpoint, [0050] The SIEM may be a component of the security testing platform 100 or may be a separate system or platform that receives endpoint and network data and applies security controls (e.g., security rules) thereto. The particular rules implemented by the security control may be one aspect tested by the security testing platform 100, and the automated attacks may be used to test and attempt to exploit the current security controls. When exploited, the security controls may be modified to account for the successful attack, [0050] the detection rule generation system 320 (ML classifier) utilizes machine learning algorithms that parse malicious data 322 (for example, data received or collected by a target endpoint 120 as part of an attack) as well as detection methods (such as those described by frameworks such as MITRE and OPENC2) to create detection rules (i.e., first and second security rules).
The detection rule network 324 can include machine learning models using algorithms such as GANs, Ensemble Methods (i.e., classifiers), and Regularization to transform malicious data into a rule for detecting the malicious data. Malicious data 322 that bypasses detection by any means (for example, malicious data from an artificially generated attack, an attack simulation, or a real attack) can be used as training data for the detection rule network 324 to generate more finely tuned detection rules 326. For example, the gathered malicious data 322 can be used to create detection rules that evaluate data available to the security stack 130 (or a target endpoint 120) and generate an alert if trigger conditions are met, [0054] Then new security rules for attack generation are generated 430 based on the gathered telemetry data. For example, attack data can be input into machine learning models of the detection rule generation system 320 to generate security rules for detecting attack techniques. Similarly, the gathered telemetry data or output of the security controls in detecting the attack can be used to generate 440 new attack variations. For example, attack data can be input into machine learning models of the attack generation system 310 to generate new attack permutations. The new attacks from the security testing platform 100 can be run 450 on endpoints 120 (and the security stack 130 with updated detection rules).
If the new attack evaded detection 460, the security rule generation model (for example, of the detection rule generation system 320) can be retrained 470 to generate updated detection rules, [0055]) [Examiner interprets that the detection rule generation system generates multiple detection rules using machine learning such as ensemble methods (i.e., classifiers) to take telemetry/malicious data (i.e., specific attribute combinations) and automatically generate detection rules (i.e., security rules) based on those data patterns, where each ML-generated rule corresponds to a particular attack technique/malicious data pattern (attack techniques, malicious data 322) and defines what action is taken when that pattern occurs, as the limitation above].
Therefore, it would have been obvious to PHOSITA before the effective filing date to modify the teaching of Lin to include a concept of wherein the specific combinations are identified as representing the likelihood of being associated with the unauthorized intrusion attempt based on a machine learning classification; automatically generating, by a machine learning model, a first security rule mapping a first specific combination of attributes to a first action and a second security rule mapping a second specific combination of attributes to a second action, for an intrusion detection and prevention system, wherein the intrusion detection and prevention system performs the first action in response to the first specific combination of attributes being detected and performs the second action in response to the second specific combination of attributes being detected as taught by Aleks for the purpose of the detection rule generation system 320 utilizing machine learning algorithms that parse malicious data 322 (for example, data received or collected by a target endpoint 120 as part of an attack) as well as detection methods to create detection rules and evaluating data available to the security stack 130 (or a target endpoint 120) and generating an alert if trigger conditions are met [Aleks:0054].
Regarding Claim 2, Lin and Aleks teach the computer-implemented method of claim 1, wherein said honeypot trap environment comprises a plurality of software and hardware resources that are intended to attract attempts at unauthorized use of said honeypot trap environment (Lin, Honeypots are physical or virtual computing systems implemented in a network as decoys to lure malicious actors (e.g., hackers) in an attempt to detect, deflect, and/or study hacking attempts, (see Col 1, lines 11-14) Fig 1A, an anomaly detection and remediation (ADR) server 125 monitors the honeypots of honeypots 110(1)-(N) in honeypot cloud network 105 receiving potentially malicious connections 120(1)-(N) from potentially malicious computing devices 115(1)-(N) (i.e., honeypot trap environment), (see Col 3, lines 37-45) and Fig 1B, Honeypot data 130 (i.e., the captured activity data) includes data that is: (a) associated with one or more honeypots 110(1)-(N) and (b) indicative of the interaction between one or more malicious computing devices 115(1)-(N) (i.e., a plurality of software and hardware resources that are intended to attract attempts at unauthorized use) and one or more honeypots 110(1)-(N), over a period of time, (see Col 4, lines 30-35)) [Examiner interprets a honeypot network having physical or virtual computing systems and monitoring malicious computing devices as a honeypot trap environment comprising a plurality of software and hardware resources that are intended to attract attempts at unauthorized use of said honeypot trap environment].
Regarding Claim 3, Lin and Aleks teach the computer-implemented method of claim 1, wherein said entities are selected from the group consisting of processes, objects, artifacts, files, directories, database servers, database tables, database collections, registries, sockets, and network resources (Lin, Fig 1B, Honeypot data 130 (i.e., the captured activity data) includes data that is: (a) associated with one or more honeypots 110(1)-(N) and (b) indicative of the interaction between one or more malicious computing devices 115(1)-(N) and one or more honeypots 110(1)-(N), over a period of time. Honeypot data 130 includes at least port connection data 160, application connection data 165, and connection timestamp data 170 (e.g., a timestamp for each port connection event and/or application connection event, for each honeypot), (see Col 4, lines 30-39)) [Examiner interprets collecting data from ports and applications as entities comprising network resources].
Regarding Claim 4, Lin and Aleks teach the computer-implemented method of claim 1, wherein said events are selected from the group consisting of a system level, and application-level action that can be associated with one or more of said entities (Lin, Fig 1C, interaction manager 135 prepares and processes honeypot data for probabilistic statistical processing by extracting at least application events 175(1)-(N) and port events 180(1)-(N) (i.e., Entities and Events) from application connection data 165 and port connection data 160 (i.e., a plurality of attributes), respectively, based on the timestamps that are part of connection timestamp data 170. Interaction manager 135 maintains an association between ports and/or applications connected to and the duration and number of those connections to determine the average number of connections to a port and/or an application in a honeypot over a defined period of time (i.e., relations), (see Col 4, lines 45-59)) [Examiner interprets that connections to applications and ports, and extracting application- and port-level events from the connection data, constitute application-level actions].
Regarding Claim 5, Lin and Aleks teach the computer-implemented method of claim 4, wherein said events are selected from the group consisting of: create directory, open file, read ('SELECT') from a database table, delete from a database table, stored procedure, modify data in a file, delete a file, copy data in a file, execute process, connect on a socket, accept connection on a socket, fork process, create thread, execute thread, start/stop thread, and send/receive data through socket or device (Lin, Fig 1B, Honeypot data 130 (i.e., the captured activity data) includes data that is: (a) associated with one or more honeypots 110(1)-(N) and (b) indicative of the interaction between one or more malicious computing devices 115(1)-(N) (i.e., a plurality of software and hardware resources that are intended to attract attempts at unauthorized use) and one or more honeypots 110(1)-(N), over a period of time. Honeypot data 130 includes at least port connection data 160, application connection data 165, and connection timestamp data 170 (e.g., a timestamp for each port connection event and/or application connection event, for each honeypot), (see Col 4, lines 30-39), Fig 1C, interaction manager 135 prepares and processes honeypot data for probabilistic statistical processing by extracting at least application events 175(1)-(N) and port events 180(1)-(N) (i.e., Entities and Events) from application connection data 165 and port connection data 160 (i.e., a plurality of attributes), respectively, based on the timestamps that are part of connection timestamp data 170.
Interaction manager 135 maintains an association between ports and/or applications connected to and the duration and number of those connections to determine the average number of connections to a port and/or an application in a honeypot over a defined period of time (i.e., relations), (see Col 4, lines 45-59)) [Examiner interprets connections to applications and ports, and extracting application- and port-level events from the connection data, as connect on a socket and accept connection on a socket].
Regarding Claim 6, Lin and Aleks teach the computer-implemented method of claim 1, wherein said attributes comprise connection attributes selected from the group consisting of: User ID, source program, client Internet Protocol (IP) address, server IP, domain name, Uniform Resource Locater (URL), Uniform Resource Identifier (URI), Unique Identifier (UID), Media Access Control (MAC) address, DB (database) User, service name, client host, client operating system, user ID, port numbers and ranges, and protocol used (Lin, Fig 1B, Honeypot data 130 (i.e., the captured activity data) includes data that is: (a) associated with one or more honeypots 110(1)-(N) and (b) indicative of the interaction between one or more malicious computing devices 115(1)-(N) (i.e., a plurality of software and hardware resources that are intended to attract attempts at unauthorized use) and one or more honeypots 110(1)-(N), over a period of time. Honeypot data 130 includes at least port connection data 160, application connection data 165, and connection timestamp data 170 (e.g., a timestamp for each port connection event and/or application connection event, for each honeypot). Honeypot data 130 includes a history of connections to a given honeypot. Other types of event, activity, and/or interaction data items other than port connection data 160, application connection data 165, and timestamp data 170 are contemplated (e.g., exfiltration history data, authentication and credential data, and the like), (see Col 4, lines 30-44)) [Examiner interprets that it is well known in the art that port connection data, application connection data, and connection timestamp data contain port numbers and application names, and that credential and authentication data contain a User ID and Unique Identifier (UID)].
Regarding Claim 7, Lin and Aleks teach the computer-implemented method of claim 1, wherein said attributes comprise activity attributes selected from the group consisting of: commands, SQL commands, objects accessed, number and frequency of probe requests within a specified time period, time of day of probe requests, data patterns, unique strings, Regex, keywords, specific syntax, login failures, authentication failures, and errors (Lin, ADR server 125 receives a honeypot dataset (e.g., honeypot data 130) associated with a honeypot network. Then, Interaction manager 135 determines a representative usage value from the honeypot dataset (e.g., the average number of connections to an application or to a port over a defined time period that can be scaled up or down to account for a sudden increase in connections), (see Col 6, lines 24-31)) [Examiner interprets determining the average number of connections over a defined period of time as number and frequency of probe requests within a specified time period].
Regarding Claim 10, Lin and Aleks teach the computer-implemented method of claim 1, wherein first and second actions are distinct and are selected from the group consisting of: halting an involved process, issuing an alert, moving an involved process to a sandbox for further evaluation, dropping an on-going network session, halting an on-going disk operation, blocking one or more users or activities, quarantining one or more nodes or sections of a network, and adding users and other entities to a blocklist (Lin, security and remediation operations (i.e., first and second actions) include modifying intrusion detection system rules, configuring and/or updating the type of detection messages that trigger alerts (e.g., in a security information and event management (SIEM) system), and preventing access or exfiltration (i.e., distinct first and second actions), (see Col 6, lines 16-21) and orchestrating security workflows for vulnerability assessment, vulnerability validation, vulnerability remediation, alert generation upon incident detection, and modification of alerts that fire for particular detection messages), (see Col 6, lines 38-49)) [Examiner interprets triggering alerts and preventing access or exfiltration as issuing an alert and preventing access].
Regarding Claims 11 and 20, Claims 11 and 20 recite commensurate subject matter as claim 1. Therefore, they are rejected for the same reasons, except for the following additional elements:
Lin and Aleks:
A system (Lin, Fig 1A, an anomaly detection and remediation (ADR) server 125) comprising:
at least one hardware processor (Lin, fig 5, Processor 555); and
a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor (Lin, Fig 5, Memory 560, storage devices or mediums capable of storing data and/or other computer-readable instructions, (see Col 10, lines 66-67, Col 11, lines 1-9)) to:
Regarding Claims 12-17 and 19, Claims 12-17 and 19 recite commensurate subject matter as claims 2-7 and 10. Therefore, they are rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210067553 A1: “relates to computer security, and more particularly, to techniques for using honeypots to lure attackers and gather data about attack patterns on Infrastructure-as-a-Service (IaaS) instances. The gathered data may then be analyzed and used to proactively prevent such attacks”
US 20200186569 A1: “relates to cognitive computing system of the security rules management system ingests natural language content, from one or more corpora, describing features of security attacks, and ingests security event log data from a monitored computing environment”
US 20230370439 A1: “relates to the field of cybersecurity and observability, and is particularly pertinent to the use of a network of widely distributed sensor nodes to classify traffic and actions from both human and artificial agents and identify potential threats, broader health and utilization information and trends from network, security, observability, and application telemetry”
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIKSHYA POUDEL whose telephone number is (703)756-1540. The examiner can normally be reached 7:30 AM - 5:00 PM, Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY can be reached at (571)272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.N.P./Examiner, Art Unit 2436
/SHEWAYE GELAGAY/Supervisory Patent Examiner, Art Unit 2436