Prosecution Insights
Last updated: April 19, 2026
Application No. 18/361,376

TECHNIQUES FOR DETECTING CYBERSECURITY EVENTS BASED ON MULTIPLE SOURCES

Final Rejection — §101, §103
Filed: Jul 28, 2023
Examiner: ABYANEH, ALI S
Art Unit: 2437
Tech Center: 2400 — Computer Networks
Assignee: Wiz Inc.
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% — above average (485 granted / 623 resolved; +19.8% vs TC avg)
Interview Lift: +55.6% — strong (resolved cases with interview)
Typical Timeline: 3y 3m average prosecution; 23 currently pending
Career History: 646 total applications across all art units

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 623 resolved cases
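A quick consistency check on the figures above: each displayed "vs TC avg" delta, subtracted from its statute-specific rate, implies the same 40.0% Tech Center baseline. The sketch below verifies that arithmetic; the implied baseline is an inference from the displayed numbers, not a figure stated on the page.

```python
# Statute-specific allow rates and their displayed deltas vs the Tech Center
# average, as shown above. The implied baseline (rate - delta) is an
# inference from the displayed numbers, not a figure stated on the page.
rates = {"101": 17.2, "103": 49.1, "102": 9.5, "112": 13.9}
deltas = {"101": -22.8, "103": +9.1, "102": -30.5, "112": -26.1}

implied_baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_baselines)  # every statute implies the same 40.0% TC baseline
```

All four statutes reduce to the same baseline, which is consistent with the page's note that the black line is a single Tech Center average estimate.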

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-19 are pending.

Information Disclosure Statement PTO-1449

The Information Disclosure Statements submitted by applicant on 10/17/2025, 12/31/2025 and 01/27/2026 have been considered. Please see attached PTO-1449.

Response to Argument

Applicant's amendments/arguments filed on 12-31-2025 have been fully considered.

With respect to the rejection of claims under 35 USC 101 as being directed to an abstract idea, applicant argues that for claims 2-9 and 12-19 “no analysis is actually provide. For at least this reason, the office action is incomplete”. In response, the dependent claims have been reviewed and determined to be ineligible for the same reasons set forth with respect to the independent claims. The additional limitations of the dependent claims, such as receiving data, storing data, applying a policy, detecting an event and a cybersecurity object, generating an alert, and initiating inspection, refine or further describe the abstract idea and do not integrate the judicial exception into a practical application, nor do they improve the functioning of a computer.

Applicant argues that “Here, the claim recites initiating a mitigation action in a computing environment, which is something that is not performed in the human mind.
This is something performed specifically in computing environments, and which the human mind is incapable of performing at all”. Examiner respectfully disagrees. Initiating a mitigation action in a computing environment is reasonably analogous to an action that can be performed by a human. For example, a human network administrator, upon reviewing a log on a piece of paper, can identify unauthorized malicious access to a system or computer and take corrective action such as blocking access to the malicious source or notifying affected entities.

Applicant argues that “Claim 7 goes on to articulate types of mitigation actions, such as sandboxing the resource on which the cybersecurity event is detected, revoking network access to the resource, revoking network access from the resource, etc., all of which cannot be practically performed in the human mind”. Examiner respectfully disagrees. Clearly, for example, “generating an alert” or “generating a notification” as recited in claim 7 could be performed in the human mind. As stated above, a human administrator could review a log, identify malicious access to a computer, and notify (generate an alert or notification for) affected entities.

With respect to the rejections of claims under 35 USC 103, applicant argues that “Mazumder teaches a first protectable entity generating a first log, and a second protectable entity generating a second log. This is not what the claim recites. The claim recites a first cybersecurity source and a second cybersecurity source, which both generate data based on the same resource. There are no two resources in the claim. In fact, as best understood, Mazumder has a single protectable entity, generating information only about itself. The second protectible entity of Mazumder does not generate any data about the first protectible entity”. Examiner respectfully disagrees. It is noted that in the process of claim examination, claims are given their broadest reasonable interpretation.
Also, while the claims are examined in light of the specification, limitations are not read into the claims from the specification. Here, applicant’s claims do not place any specific limitation on the claimed “resource”. Therefore the limitation has been interpreted broadly. Mazumder discloses a system 100 comprising multiple entities (first, second, Mth protectable entities) operating within a common computing environment (paragraph [0022]). Each of the entities generates data (logs and events) reflecting operations occurring within the same system 100 (paragraph [0023]). Accordingly, although data is collected from different protectable entities, the collected data is generated based on operation of the system as a whole. Therefore, the system 100 constitutes the claimed resource, and the first and second log events generated by the first and second protectable entities are based on the same resource within the computing environment.

Applicant argues that “It is clear that what Ferragut teaches is that a first anomaly score is generated based on the first data source, and a second anomaly score is generated based on the second data source. The data sources are not used together, i.e., detection is not performed based on both, as recited in the claim, but rather each of them individually entirely detects the anomaly, and therefore they are redundant to each other. By contrast, the instant claim recites that the cybersecurity event is detected based on data from both the first cybersecurity source and the second cybersecurity source”. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Ferragut was applied for the limitation of “…the second cyber security source has a source type which is different from a source type of the first cybersecurity source”. Ferragut (paragraph [0027]) discloses receiving first and second log files from a first and second data source, respectively, wherein the second data source is of a different type than the first data source, which clearly reads on the limitation of the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims, when analyzed under the 2019 Revised Patent Subject Matter Eligibility Guidance, are directed to an abstract idea. Claim 1, for example, recites a method and, therefore, is a process. The claim recites the limitations of “receiving data from a first cyber security source…[data generated] based on a resource…receiving data from a second cybersecurity source…[data generated] based on the resource…detecting a cybersecurity event on the resource based on data received…and initiating action in the computing environment…in response to detecting the cybersecurity event”. These limitations, under the broadest reasonable interpretation, are directed to performance of the limitations in a human mind. That is, nothing in the claim elements precludes the steps from practically being performed in the mind.
For example, the claim encompasses a human (network administrator) simply receiving on a piece of paper data generated based on resources deployed in a computing environment, from a first and second source, identifying/detecting an anomaly event/cybersecurity event by analyzing or looking at the received data on the piece of paper, and initiating a mitigation in the computing system or environment in response to detecting cybersecurity events. Thus, the claim recites a mental process when analyzed under Step 2A, Prong 1.

The claim is further analyzed under Step 2A, Prong 2, to evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by identifying whether there are any additional elements recited in the claim beyond the judicial exception, and evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. However, each of the remaining limitations appears to be a generic computer function which does not constitute a meaningful limitation that would amount to significantly more than the abstract idea. The combination of these additional elements is no more than generic computer functions. Thus, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea.

The claim is additionally analyzed under Step 2B to evaluate whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. When the claims are evaluated under Step 2B, they amount to no more than what is well-understood, routine, conventional activity in the field. The specification does not provide any indication of anything other than generic computer components.
The mere “receiving data from a first cyber security source…receiving data from a second cybersecurity source…detecting a cybersecurity event on the resource based on data received…and initiating action…” is a well-understood, routine and conventional function when it is claimed in a merely generic manner, as it is here. Independent claims 10 and 11 include limitations similar to the limitations of claim 1 and are rejected under 35 U.S.C. 101 as being directed to an abstract idea for the same reasons discussed above with respect to claim 1. Dependent claims 2-9 and 12-19 do not cure the deficiency of the independent claims and are directed to an abstract idea when analyzed under the 2019 Revised Patent Subject Matter Eligibility Guidance.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-7, 9-13, 15-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mazumder et al. (US Publication No. 2023/0102103), hereinafter Mazumder, in view of Ferragut et al. (US Publication No. 2015/0161394), hereinafter Ferragut, further in view of Johnson et al. (US Publication No. 2022/0124111), hereinafter Johnson.
As per claims 1, 10 and 11, Mazumder discloses a method for detecting a cybersecurity event based on multiple cybersecurity data sources, comprising: receiving data from a first cybersecurity source (paragraph [0024], computing system 106 is configured to receive the logs and events 104A from the protectable entity 102), the first cybersecurity source configured to generate data based on a resource deployed in a computing environment (paragraph [0023], the first protectable entity 102A generates first logs and events 104A; a log entry may indicate an action that is performed on the protectable entity or by the protectable entity. For example, the log entry may indicate a request that is received by the protectable entity, data accessed by the protectable entity in response to the request, and/or an operation performed on the data by the protectable entity. An event that is generated by a protectable entity indicates an occurrence that is encountered by the protectable entity); receiving data from a second cybersecurity source (paragraph [0024], computing system 106 is configured to receive the logs and events 104A-104M (104B)), the second cybersecurity source configured to generate data based on the resource deployed in the computing environment (paragraph [0023], a second protectable entity 102B is shown to generate second logs and events 104B. For example, the log entry may indicate a request that is received by the protectable entity, data accessed by the protectable entity in response to the request, and/or an operation performed on the data by the protectable entity.
An event that is generated by a protectable entity indicates an occurrence that is encountered by the protectable entity); detecting a cybersecurity event on the resource based on data received from the first cybersecurity source and data received from the second cybersecurity source (paragraph [0025], “The automatic graph-based detection logic 108 analyzes the logs and events 104A-104M, which are obtained directly or indirectly from the protectable entities 102A-102M, to detect the security threats”); and initiating a mitigation action for the resource in response to detecting the cybersecurity event (paragraph [0020], “techniques may prevent the negative impacts of the potential security threats from occurring in which case the amount of time and/or assets consumed to respond to the negative impacts may be avoided”).

Mazumder does not explicitly disclose wherein the second cybersecurity source has a source type which is different from a source type of the first cybersecurity source. However, in an analogous art, Ferragut discloses receiving first and second log files from a first and second data source, respectively, wherein the second data source is of a different type than the first data source (paragraph [0027]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mazumder with Ferragut. This would have been obvious because one of ordinary skill in the art would have been motivated to provide comparability of disparate sources of data and to detect atypical traffic patterns.

While Mazumder discloses initiating a mitigation action (as shown above), Mazumder does not explicitly disclose initiating the mitigation action in the computing environment.
However, in an analogous art, Johnson discloses initiating a mitigation action in the computing environment when a cybersecurity event is detected (paragraphs [0031], [0060], “Mitigation decision engine 210 can also decide what form one or more mitigation actions should take. For example, mitigation decision engine 210 might suspend access to an electronic or physical resource, implement an IP-blocking or traffic throttling scheme, increase logging for a particular system or group of systems, generate an alert for human review, or take any other number of actions”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mazumder and Ferragut with Johnson. This would have been obvious because one of ordinary skill in the art would have been motivated to do so in order to achieve the predictable result of reducing the likelihood of impact of an anomaly or a threat on a computer system.

As per claims 2 and 12, Ferragut furthermore discloses receiving data from a plurality of cybersecurity sources, wherein at least a first cybersecurity source is of a first type, and a second cybersecurity source is of a second type which is different from the first type (paragraph [0027], the second data source is of a different type than the first data source); the motivation is similar to the motivation provided in claim 1.

As per claims 3 and 13, Mazumder furthermore discloses storing the received data in a graph database, the graph database including a security graph having stored therein a representation of the computing environment (paragraph [0054], “the association graph logic 312 generates the association graph based on computer network information 326, which indicates the requests that are received in the computer network, the data that are accessed in response to the requests, and the operations that are performed on the data.
In accordance with this implementation, the association graph logic 312 may generate the association graph information 328 to describe the association graph. For instance, the association graph information 328 may indicate the graph nodes that are included in the association graph and correlations among the graph nodes”).

As per claims 5 and 15, Mazumder furthermore discloses detecting an event in the data received from the first cybersecurity source; and detecting a cybersecurity object in the data received from the second cybersecurity source (paragraph [0023], a first protectable entity 102A generates first logs and events 104A, and a second protectable entity 102B generates second logs and events 104B; the log includes multiple log entries such that each log entry indicates an action that is performed with regard to the protectable entity, and an event indicates an occurrence that is encountered by the protectable entity; paragraph [0024], system 106 receives the logs and events 104A from the first protectable entity 102A and, as such, receives and detects an event in the logs received from protectable entity 102A. System 106 also receives second logs and events (cybersecurity object) from protectable entity 102B and, as such, receives and detects a cybersecurity object in the event).

As per claims 6 and 16, Mazumder furthermore discloses wherein the cybersecurity object is any one of: a file, a file system, a hash, a password, a certificate, an encryption key, a malware object, and a combination thereof (paragraph [0024], “an event may be in the form of a security alert”, corresponding to a malware object).
As per claims 7 and 17, Mazumder furthermore discloses wherein the mitigation action includes any one of: sandboxing the resource on which the cybersecurity event is detected, revoking network access to the resource, revoking network access from the resource, generating an alert, generating a severity score for an alert, generating a notification, initiating inspection of a resource, and any combination thereof (paragraph [0048], each score indicates a likelihood of the respective pattern to indicate a security threat, corresponding to a severity score).

As per claims 9 and 19, Mazumder furthermore discloses wherein the received data includes any one of: an event record, a file, a hash, an identifier of a resource, an identifier of a principal, a timestamp, and any combination thereof (paragraph [0023], event).

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Mazumder in view of Ferragut and Johnson, further in view of Pitts (US Publication No. 2004/0054681), hereinafter Pitts.

As per claims 4 and 14, Mazumder discloses applying a policy to data received from the first cybersecurity source (paragraph [0025], “The automatic graph-based detection logic 108 analyzes the logs and events 104A-104M, which are obtained directly or indirectly from the protectable entities 102A-102M, to detect the security threats”).
Mazumder as modified does not explicitly disclose, but in an analogous art Pitts discloses, generating an instruction to receive data from the second cybersecurity source in response to triggering a rule of the policy (claim 5, “the domain manager that the request from the first digital computer for access to the file at the second digital computer must traverse has received from the second digital computer policy data specifying how access to files stored in the local domain tree of the second digital computer is to be administered; …domain manager forwarding onto the second digital computer the request by the first digital computer for access to the file stored at the second digital computer only if the policy data received by the domain manager permits access by the first digital computer to the file stored at the second digital computer”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the modified Mazumder with Pitts. This would have been obvious because one of ordinary skill in the art would have been motivated to facilitate one networked digital computer’s access to a file that is stored at another networked digital computer.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mazumder, Ferragut and Johnson, further in view of Kramer et al. (US Publication No. 2007/0006304), hereinafter Kramer.
As per claims 8 and 18, Mazumder as modified does not explicitly disclose, but in an analogous art Kramer discloses, initiating inspection of the resource in response to detecting the cybersecurity event (paragraph [0039], “should the receiving device determine that any of its resources may have been compromised as a result of the reported event, then the method 400B continues at process block 424 to initiate a targeted scan of the resources that were determined to be at risk of having been compromised”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the modified Mazumder with Kramer. This would have been obvious because one of ordinary skill in the art would have been motivated to optimize malware recovery processes.

References Cited, Not Used

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Vashisht et al. (US Publication No. 2019/0207966) discloses a method that, depending on the embodiment, parses, formats, stores, manages, updates, analyzes, retrieves, and/or distributes cybersecurity intelligence maintained within a global data store to enhance cyber-attack detection and response. Crabtree et al. (US Publication No. 2017/0126712) discloses a system and method for: monitoring cybersecurity related data from multiple sources and traffic on a client network, analyzing the retrieved data for baseline pattern determination and the data for anomalous occurrences, performing predictive simulation transformations on data provided by other modules of the platform and providing results as needed, and formatting data to maximize the impact of included information and data.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ali Abyaneh, whose telephone number is (571) 272-7961. The examiner can normally be reached Monday-Friday from 8:00-5:00. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor, can be reached at (571) 270-5143. can be reached on (571) 272-4063. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/ALI S ABYANEH/
Primary Examiner, Art Unit 2437

Prosecution Timeline

Jul 28, 2023
Application Filed
Sep 29, 2025
Non-Final Rejection — §101, §103
Dec 31, 2025
Response Filed
Mar 23, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603868 — Endpoint Data Loss Prevention — 2y 5m to grant; granted Apr 14, 2026
Patent 12579259 — SYSTEMS AND METHODS FOR INTELLIGENT CYBERSECURITY ALERT SIMILARITY DETECTION AND CYBERSECURITY ALERT HANDLING — 2y 5m to grant; granted Mar 17, 2026
Patent 12574374 — PROVIDING ACCESS CONTROL AND IDENTITY VERIFICATION FOR COMMUNICATIONS WHEN INITIATING A COMMUNICATION TO AN ENTITY TO BE VERIFIED — 2y 5m to grant; granted Mar 10, 2026
Patent 12561465 — VIRTUAL REPRESENTATION OF INDIVIDUAL IN COMPUTING ENVIRONMENT — 2y 5m to grant; granted Feb 24, 2026
Patent 12556553 — NETWORK SECURITY AND RELATED APPARATUSES, METHODS, AND SECURITY SYSTEMS — 2y 5m to grant; granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+55.6%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 623 resolved cases by this examiner. Grant probability derived from career allow rate.
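The note above says the grant probability is derived from the career allow rate (485 granted / 623 resolved). A minimal sketch, assuming the simplest possible derivation (granted over resolved, rounded to a whole percent), reproduces the displayed 78%; the actual dashboard model may of course weight additional factors.

```python
# Hypothetical sketch: assumes the displayed grant probability is simply
# granted / resolved from the examiner's career record, rounded to a
# whole percent. The real model behind the dashboard is not stated.
granted, resolved = 485, 623

allow_rate = granted / resolved               # ~0.7785
grant_probability = round(allow_rate * 100)   # displayed as a whole percent
print(grant_probability)  # -> 78
```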
