Prosecution Insights
Last updated: April 19, 2026
Application No. 18/324,219

DATA SECURITY RISK POSTURE

Final Rejection: §101, §103

Filed: May 26, 2023
Examiner: VU, TAYLOR P
Art Unit: 2437
Tech Center: 2400 (Computer Networks)
Assignee: Palo Alto Networks Inc.
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 81% (21 granted / 26 resolved), above average (+22.8% vs TC avg)
Interview Lift: +12.8% (moderate), based on resolved cases with vs. without interview
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 56 across all art units, 30 currently pending
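The headline figures above can be cross-checked from the raw counts the report itself provides. This is a sketch only: the assumption that the tool computes the rate as granted divided by resolved, and derives the Tech Center average by subtracting the reported delta, is mine, not stated in the report.

```python
# Recompute the examiner statistics from the report's raw counts.
granted, resolved = 21, 26
allow_rate = granted / resolved            # career allowance rate
print(f"{allow_rate:.1%}")                 # -> 80.8%, displayed rounded as 81%

tc_delta = 22.8                            # reported "+22.8% vs TC avg"
tc_avg = allow_rate * 100 - tc_delta       # implied Tech Center average
print(f"{tc_avg:.1f}%")                    # -> 58.0%
```

The implied ~58% Tech Center baseline is consistent with the "above average" characterization in the panel.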

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 72.0% (+32.0% vs TC avg)
§102: 2.2% (-37.8% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Tech Center average is an estimate. Based on career data from 26 resolved cases.
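A quick consistency check on the statute panel above (a sketch; reading the "vs TC avg" figures as simple percentage-point deltas is my assumption, not stated in the report):

```python
# Each statute's rate minus its delta should recover the Tech Center baseline.
panel = {
    "101": (12.3, -27.7),
    "103": (72.0, +32.0),
    "102": (2.2, -37.8),
    "112": (12.5, -27.5),
}
for statute, (rate, delta) in panel.items():
    baseline = round(rate - delta, 1)      # implied TC average for this statute
    print(f"§{statute}: implied baseline {baseline}%")
```

Under that reading, every row implies the same 40.0% Tech Center baseline, which suggests the four deltas were computed against a single shared estimate.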

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 07/28/2025 have been fully considered but are not fully persuasive. Claims 1, 11, and 19 have been amended. Claims 4, 14, and 22 have been cancelled. Claims 1-3, 5-13, and 15-21 are pending.

On page 9 of the Remarks, the applicant amended the claims to address the objections raised in the current office action. The examiner respectfully agrees that the amendments resolve the objections; the claim objections are therefore withdrawn.

On pages 9-15 of the Remarks, the applicant contends that claims 1, 11, and 19 and their dependents are directed to eligible subject matter under 35 U.S.C. 101. The examiner respectfully disagrees.

On pages 10-11 of the Remarks, the applicant argues that the claims are directed to a particular, concrete solution to a problem rather than an abstract solution, as in Electric Power Group. In response, the claims recite generic data processing steps and conventional computer components (e.g., processors, machine-readable media) that do not provide the required "inventive concept" under Alice/Mayo and the USPTO's 2019 PEG. The examiner maintains that Electric Power Group is appropriately relied upon here because the claims do no more than collect, analyze, and display information in general terms rather than claim a specific technical improvement.

Further, on pages 10-12 of the Remarks, the applicant contends that the claims address the problem of a data loss prevention (DLP) system being rendered ineffective by the generation of hundreds of thousands of DLP incidents for sensitive assets in cloud infrastructures, which overwhelm personnel. The examiner maintains that the claims, even as amended, are directed to an abstract idea. In SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161 (Fed. Cir. 2018), InvestPic's patent described and claimed systems and methods for performing statistical analyses of investment information. The court ruled that the complexity and computational intensity of a statistical analysis method did not render it patent-eligible if it was directed to an abstract idea. Applying the Alice framework, the court found the claims directed to the abstract idea of performing statistical analysis and lacking an inventive concept beyond the abstract mathematics and generic computer implementation.

On pages 12-15 of the Remarks, the applicant contends that the claims should be found eligible when analyzed under the USPTO's implementation of the Alice/Mayo two-part test. The examiner respectfully disagrees. The examiner acknowledges that the claims fall within one of the four statutory categories of invention (Step 1 of the Subject Matter Eligibility Test). The applicant contends that the claims are not directed to a judicial exception, and states under Step 2A, Prong One, that the Office's characterization of the claims as directed to an abstract mental process is incorrect and lacks supporting analysis. In Step 2A, Prong One, examiners evaluate whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. The claims' core limitations are: identifying sensitive assets; determining a data loss risk score for each asset based on multiple risk assessments (user-based, cloud infrastructure, system configuration, policy compliance); aggregating/combining scores (including an in-transit score combining asset and requestor assessments); and surfacing/selecting incidents that meet criteria. These are processes of collecting information, performing calculations or scoring, and making a selection/notification decision.
These operations fall squarely within the abstract idea groupings identified in Alice/Mayo and the PEG: (i) mathematical concepts/algorithms (scoring and combining numerical risk assessments); (ii) mental processes (evaluating risk and deciding which incidents to surface can be conceptualized as human cognitive steps); and (iii) certain business/organizational practices (prioritizing/triaging items).

The representative's characterization of the claim as a "particular concrete solution" reads limitations from the specification into the claim. The claim language itself does not recite any unconventional technique for obtaining measurements, a specific novel algorithm described at a technical level, or a particular technical configuration that departs from conventional computing or networking. For example, Berkheimer v. HP Inc., No. 17-1437 (Fed. Cir. 2018), involved digitally processing and archiving files in a digital asset management system that parses files into multiple objects and tags the objects to create relationships between them; the objects are analyzed and compared, manually or automatically, to archived objects to determine whether variations exist based on predetermined standards and rules (e.g., scoring according to assessments), in order to eliminate redundant storage of common text and graphical elements, improve system operating efficiency, and reduce storage costs. The court ruled the claims ineligible under 35 U.S.C. 101, concluding that parsing and comparing data, or collecting, organizing, comparing, and presenting data, were directed to an abstract idea.

The applicant contends, under Step 2A, Prong Two, that the claims integrate any alleged mental process into a practical application in the area of data loss prevention. As stated previously, the examiner maintains that the claims, even as amended, are directed to an abstract idea.
The claim does not recite any improvement to the underlying computer, network, or cloud infrastructure itself. There is no asserted improvement such as novel data structures, low-level protocols, specialized hardware, or a specific technical method of detecting or measuring events in the cloud that materially improves performance, security, scalability, latency, or accuracy in a claim-specified manner beyond conventional design. The specification's assertion that the solution improves DLP workflows is not reflected in the claim language as a technical improvement to computer functionality; the claims merely use a computer to perform routine information processing tasks. The recited "aggregating" or "generating an in-transit data loss risk score" is described at a high level; no claim limitation identifies a concrete algorithmic or technical method for scoring or combining that is more than a routine mathematical operation. Merely adding that the score is based on multiple "views" of risk does not impose a specific technical implementation or limitation that integrates the abstract idea into a practical application under the PEG. As in SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161 (Fed. Cir. 2018), discussed above, the complexity and computational intensity of a statistical analysis method does not render it patent-eligible when it is directed to an abstract idea and lacks an inventive concept beyond abstract mathematics and generic computer implementation. Therefore, based at least on the above paragraphs, the examiner respectfully maintains the 35 U.S.C. 101 rejection.
On pages 15-17 of the Remarks, the applicant's arguments with regard to 35 U.S.C. 112(a) and 112(b) have been fully considered and are persuasive. Therefore, those rejections have been withdrawn. On pages 17-21 of the Remarks, the applicant's arguments and amendments with respect to the rejection of independent claims 1, 11, and 19 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, that rejection has been withdrawn. However, upon further consideration, new grounds of rejection are made in view of newly found prior art. The office action has been updated to reflect the claims as currently presented.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3 and 5-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method, which is a "process," one of the four statutory categories of invention (Step 1 of the Subject Matter Eligibility Test). However, the claim as a whole does not qualify for a streamlined analysis, so a full eligibility analysis is necessary (Steps 2A and 2B of the Subject Matter Eligibility Test). In Step 2A, Prong One, examiners evaluate whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim.
The claims recite the steps of:
"…identifying a plurality of sensitive assets…"
"…determining data loss risk score for each plurality of sensitive assets…"
"…determining which of the plurality of sensitive assets have a data loss risk score that satisfies a set of one or more criteria…"
"…surfacing those of the DLP incidents corresponding subset of the plurality of sensitive assets having data loss risk scores that satisfy the set of one or more criteria…"
"…generating in-transit data loss risk score…"

The steps of identifying, determining, and surfacing amount to an abstract idea, which falls under a judicial exception (Step 2A, Prong One, of the Subject Matter Eligibility Test). The abstract idea falls in the category of a mental process, for example, evaluations, judgments, and opinions (see MPEP 2106.06). For example, the courts have found that claims to "collecting information, analyzing it, and displaying certain results of collection and analysis," where the data analysis steps are recited at a high level of generality, could practically be performed in the human mind. Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016).

In Step 2A, Prong Two, examiners determine whether the claim as a whole integrates the judicial exception into a practical application. Here, the judicial exception found in claim 1 is not integrated into a practical application because the generically recited computer elements do not add any meaningful limitation to the abstract idea; they amount to simply implementing the abstract idea on a computer. The implementation of the identifying, determining, and surfacing steps amounts to enabling human decision-making without any meaningful improvement to the functioning of a computer or another technology beyond what is well-understood, routine, and conventional activity.
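For illustration of the recited scoring-and-surfacing steps only, the claim language could be sketched as below. The averaging, the 0-100 scale, the threshold of 75, and all names are hypothetical assumptions introduced here; none of them is taken from the application or the cited art.

```python
from statistics import mean

def data_loss_risk_score(asset):
    # Combine the four recited risk assessments (each assumed to be 0-100).
    views = ("user_risk", "infra_risk", "config_risk", "policy_risk")
    return mean(asset[v] for v in views)

def surface_incidents(assets, threshold=75):
    # Surface DLP incidents only for assets whose score satisfies the criteria.
    return [a["id"] for a in assets if data_loss_risk_score(a) >= threshold]

assets = [
    {"id": "s3://payroll", "user_risk": 90, "infra_risk": 80,
     "config_risk": 70, "policy_risk": 85},
    {"id": "s3://readme", "user_risk": 10, "infra_risk": 20,
     "config_risk": 15, "policy_risk": 5},
]
print(surface_incidents(assets))   # -> ['s3://payroll']
```

The sketch shows why the examiner characterizes these steps at a high level of generality: each recited step reduces to collecting values, averaging them, and comparing against a threshold.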
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because they simply append well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984. Thus, the analysis concludes that claim 1 is ineligible under 35 U.S.C. § 101 as directed to a judicial exception.

Regarding claims 2-3 and 5-10: these claims do not add additional elements beyond those already disclosed in claim 1, and merely add further abstract ideas. Furthermore, none of the claims integrates the judicial exception into a practical application.

Claims 11-13 and 15-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a non-transitory machine-readable medium, which falls within the "machine" category, one of the four statutory categories of invention (Step 1 of the Subject Matter Eligibility Test). However, the claim as a whole does not qualify for a streamlined analysis, so a full eligibility analysis is necessary (Steps 2A and 2B of the Subject Matter Eligibility Test). In Step 2A, Prong One, examiners evaluate whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim.
The claims recite the steps of:
"…determine a set of one or more cloud infrastructures hosting sensitive assets…"
"…quantify holistic data loss risk for sensitive assets of an organization…"
"…obtain a risk assessment of the cloud infrastructure…"
"…obtain system configuration risk assessments for sensitive assets…"
"…obtain user-based risk assessments for sensitive assets…"
"…determine a baseline data loss risk score for each sensitive asset…"
"…surface, to a security operation center…"
"…generate an in-transit data loss risk score…"

The steps of determining, quantifying, obtaining, and surfacing amount to an abstract idea, which falls under a judicial exception (Step 2A, Prong One, of the Subject Matter Eligibility Test). The abstract idea falls in the category of a mental process, for example, evaluations, judgments, and opinions (see MPEP 2106.06). For example, the courts have found that claims to "collecting information, analyzing it, and displaying certain results of collection and analysis," where the data analysis steps are recited at a high level of generality, could practically be performed in the human mind. Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016).

In Step 2A, Prong Two, examiners determine whether the claim as a whole integrates the judicial exception into a practical application. Here, the judicial exception found in claim 11 is not integrated into a practical application because the generically recited computer elements do not add any meaningful limitation to the abstract idea; they amount to simply implementing the abstract idea on a computer.
The implementation of the determining, obtaining, and surfacing steps amounts to enabling human decision-making without any meaningful improvement to the functioning of a computer or another technology beyond what is well-understood, routine, and conventional activity. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because they simply append well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984. Thus, the analysis concludes that claim 11 is ineligible under 35 U.S.C. § 101 as directed to a judicial exception.

Regarding claims 12-13 and 15-18: these claims do not add additional elements beyond those already disclosed in claim 11, and merely add further abstract ideas. Furthermore, none of the claims integrates the judicial exception into a practical application.

Claims 19-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite an apparatus, which falls within the "machine" category, one of the four statutory categories of invention (Step 1 of the Subject Matter Eligibility Test). However, the claim as a whole does not qualify for a streamlined analysis, so a full eligibility analysis is necessary (Steps 2A and 2B of the Subject Matter Eligibility Test). In Step 2A, Prong One, examiners evaluate whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim.
The claims recite the steps of:
"…determine a set of one or more cloud infrastructures hosting sensitive assets…"
"…quantify holistic data loss risk for sensitive assets of an organization…"
"…obtain a risk assessment of the cloud infrastructure…"
"…obtain system configuration risk assessments for sensitive assets…"
"…obtain user-based risk assessments for sensitive assets…"
"…determine a baseline data loss risk score for each sensitive asset…"
"…surface, to a security operation center…"
"…generate an in-transit data loss risk score…"

The steps of determining, quantifying, obtaining, and surfacing amount to an abstract idea, which falls under a judicial exception (Step 2A, Prong One, of the Subject Matter Eligibility Test). The abstract idea falls in the category of a mental process, for example, evaluations, judgments, and opinions (see MPEP 2106.06). For example, the courts have found that claims to "collecting information, analyzing it, and displaying certain results of collection and analysis," where the data analysis steps are recited at a high level of generality, could practically be performed in the human mind. Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016).

In Step 2A, Prong Two, examiners determine whether the claim as a whole integrates the judicial exception into a practical application. Here, the judicial exception found in claim 19 is not integrated into a practical application because the generically recited computer elements do not add any meaningful limitation to the abstract idea; they amount to simply implementing the abstract idea on a computer.
The implementation of the determining, obtaining, and surfacing steps amounts to enabling human decision-making without any meaningful improvement to the functioning of a computer or another technology beyond what is well-understood, routine, and conventional activity. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because they simply append well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984. Thus, the analysis concludes that claim 19 is ineligible under 35 U.S.C. § 101 as directed to a judicial exception.

Regarding claims 20-21: these claims do not add additional elements beyond those already disclosed in claim 19, and merely add further abstract ideas. Furthermore, none of the claims integrates the judicial exception into a practical application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1, 5, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Newman et al. (US PGPub No. 2018/0191771 A1) in view of Crabtree et al. (US PGPub No. 2022/0377093 A1), Liu et al. (US 9,807,094 B1), and Watson et al. (US PGPub No. 2018/0248895 A1).

With respect to claim 1, Newman teaches a method comprising: identifying a plurality of sensitive assets corresponding to data loss prevention (DLP) incidents (¶0037: As shown in Figure 4, a system according to an embodiment may receive communication and document metadata 402, 404 to correlate stored communications, documents, and non-document content in a multi-stage, correlated storage 442. Further inputs to the system may include audit activities 412, click traces 414, and data loss prevention (DLP) hits 416. ¶0016: As used herein, contextual correlation (corresponding) refers to multi-stage evaluation and correlation of data such as communications, documents, and non-document content in light of associated metadata and activities. For example, deletion of documents in a particular location may be assessed for potential threat based on sensitive information contents of the documents, the deleting person or entity, the location of the deleting person or entity, etc.); wherein the plurality of sensitive assets is hosted in a set of one or more cloud infrastructures (¶0030: As seen in Figure 2A, in some examples, data to be analyzed, categorized, protected, and handled according to policies may come from a variety of sources such as a communications data store 202, a collaboration data store 204, and cloud storage 206.);
determining a data loss risk score for each of the plurality of sensitive assets based, at least partly, on user-based risk assessment, cloud infrastructure risk assessment, system configuration risk assessment, and policy compliance risk assessment; determining which of the plurality of sensitive assets have a data loss risk score that satisfies a set of one or more criteria; surfacing those of the DLP incidents corresponding to a subset of the plurality of sensitive assets having data loss risk scores that satisfy the set of one or more criteria; based on detecting an access request for a first of the plurality of sensitive assets, determining a risk assessment of a requestor; generating an in-transit data loss risk score based, at least in part, on the data loss risk score of the first sensitive asset and the risk assessment of the requestor; and determining whether to surface a notification corresponding to the first sensitive asset based, at least in part, on the in-transit data loss risk score.

Newman does not disclose: determining a data loss risk score for each of the plurality of sensitive assets based, at least partly, on user-based risk assessment, cloud infrastructure risk assessment, system configuration risk assessment, and policy compliance risk assessment; determining which of the plurality of sensitive assets have a data loss risk score that satisfies a set of one or more criteria; surfacing those of the DLP incidents corresponding to a subset of the plurality of sensitive assets having data loss risk scores that satisfy the set of one or more criteria;
based on detecting an access request for a first of the plurality of sensitive assets, determining a risk assessment of a requestor; generating an in-transit data loss risk score based, at least in part, on the data loss risk score of the first sensitive asset and the risk assessment of the requestor; and determining whether to surface a notification corresponding to the first sensitive asset based, at least in part, on the in-transit data loss risk score.

However, Crabtree teaches determining a data loss risk score for each of the plurality of sensitive assets based, at least partly, on user-based risk assessment, cloud infrastructure risk assessment, system configuration risk assessment, and policy compliance risk assessment (¶0095: In Figure 18, for comprehensive data loss prevention and compliance management, according to a preferred embodiment, a risk analysis and scoring engine 1810 may be used to collect and analyze data from a plurality of input engines, each of which collects data from a variety of sources and processes it to identify anomalies between observed and predicted behavior, indicating possible risk that may then be collated and scored to form an overall security risk analysis. A human activity monitoring engine 1801 may be used to monitor user behavior, a device activity monitoring engine 1803 may be used to monitor device-based behavior, a system activity monitoring engine 1802 may be used to monitor activity in any of a number of software or network systems, and an organization activity monitoring engine 1804 may be used to monitor broader interactions and behaviors within an organization. Each of these monitoring engines may collect data from a variety of device, network, and user behaviors while employing statistical and machine learning algorithms (assessing) to identify anomalies or ongoing changes of interest.
¶0100-0101: Figure 23 is a more detailed illustration of the operation of a risk analysis and scoring engine 1810. Anomalies gathered by monitoring engines 1801-1804 may be received by a risk analyzer 2310, which utilizes a number of analysis components to determine the relative risk level of each identified anomaly.); determining which of the plurality of sensitive assets have a data loss risk score that satisfies a set of one or more criteria (¶0101: Analysis results may then be provided to a scoring engine 2320 that assigns risk score values to data points based on a number of criteria, for example including but not limited to domain risk, malware alerts (for example, if a particular anomaly is similar to a known malware signature), forged "golden tickets" that may be used for access privileges and circumventing protections, abnormal connections to or from virtual private network (VPN) servers or clients, service accounts being used to access sensitive assets (as may indicate a compromised account or device), group memberships, or access rights for users or groups.); and surfacing those of the DLP incidents corresponding to a subset of the plurality of sensitive assets having data loss risk scores that satisfy the set of one or more criteria (¶0101: This scoring may then be used to produce graphs, reports, visualizations, or other output for review, or for producing alerts such as if threshold values for individual events, users, devices, groups, or other criteria are met.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Crabtree with regard to the data loss score in the method of Newman in order to identify attacks before data loss occurs (Crabtree ¶0016-0018).
The combination of Newman and Crabtree does not disclose: based on detecting an access request for a first of the plurality of sensitive assets, determining a risk assessment of a requestor; generating an in-transit data loss risk score based, at least in part, on the data loss risk score of the first sensitive asset and the risk assessment of the requestor; and determining whether to surface a notification corresponding to the first sensitive asset based, at least in part, on the in-transit data loss risk score.

However, Liu teaches, based on detecting an access request for a first of the plurality of sensitive assets, determining a risk assessment of a requestor (¶0004: As will be described in greater detail below, the instant disclosure describes various systems and methods for dynamic access control over shared resources (a plurality of sensitive assets; as seen in ¶0001-¶0002, these resources can be sensitive). In some examples, a computer-implemented method for dynamic access control over shared resources (1) detects an attempt by a user to access a resource via a computing environment and (2) identifies a risk level of the user attempting to access the resource via the computing environment. The limitation is further exemplified in ¶0013: In some examples, a system for implementing the above-described method may include (1) an access-detection module, stored in memory, that detects an attempt by a user to access a resource via a computing environment, and (2) a risk-assessment module, stored in memory, that (A) identifies a risk level of the user attempting to access the resource via the computing environment.); and generating an in-transit data loss risk score (¶0064: As an example, the access control system may either generate risk scores for the actor and/or the context and a sensitivity score for the resource, or identify existing risk and sensitivity scores generated by an external risk profiler.
These scores may depend on various factors (such as Data Loss Prevention (DLP) considerations, machine learning techniques, file types, attributes, owner groups, content, metadata, data flow, colocation based on structural similarity, etc.). Accordingly, these scores may be updated regularly and/or dynamically calculated.); based, at least in part, on the data loss risk score of the first sensitive asset and the risk assessment of the requestor (¶0013: (2) a risk-assessment module, stored in memory, that (A) identifies a risk level of the user attempting to access the resource via the computing environment, (B) identifies a sensitivity level of the resource that the user is attempting to access via the computing environment, (C) identifies a risk level of the computing environment through which the user is attempting to access the resource, and then (D) determines an overall risk level for the attempt by the user to access the resource based at least in part on (I) the risk level of the user, (II) the sensitivity level of the resource, and (III) the risk level of the computing environment; (3) an access-control module, stored in memory, that determines, based at least in part on the overall risk level for the attempt by the user, whether to grant the user access to the resource via the computing environment; and (4) at least one physical processor configured to execute the access-detection module, the risk-assessment module, and the access-control module.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Liu with regard to the access request in the method of Newman in view of Crabtree in order to ensure that all users of an organization have sufficient access to resources while maintaining security against threats such as data theft (Liu ¶0001-0003).
The combination of Newman, Crabtree, and Liu does not disclose: determining whether to surface a notification corresponding to the first sensitive asset based, at least in part, on the in-transit data loss risk score. Liu does disclose, in ¶0098, that one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive data to be transformed into risk and/or sensitivity scores, transform the data into risk and/or sensitivity scores, output a result of the transformation to determine whether to grant access to a resource, use the result of the transformation to improve dynamic access control over shared resources, and store the result of the transformation for future reference and/or use. However, Liu does not explicitly disclose surfacing a notification based on the in-transit data loss risk score.

However, Watson teaches determining whether to surface a notification corresponding to the first sensitive asset based, at least in part, on the in-transit data loss risk score (¶0029: A service such as an activity monitor 204 can monitor the access of the various documents and data by various users (such as the first sensitive asset), and store the information to a location such as an activity log 206 or other such repository. A security manager 202 can work with the access manager 208 and/or activity monitor 204 to determine the presence of potentially suspicious behavior, which can then be reported to the customer console 102 or otherwise provided as an alert or notification. ¶0046: As further seen in Figure 6, an example process 600 for identifying anomalous activity can be utilized in accordance with various embodiments. In this example, the activity of a user can be monitored 602 with respect to organizational documents, data, and other such objects.
If, however, it is determined that the activity is anomalous, then the risk scores for the anomalous access (and other such factors) can be determined 614, which can be compared against various rules, criteria, or thresholds for performing specific actions. If it is determined 616 that the risk scores for the anomalous behavior warrant an alert, such as by the risk score being above a specified threshold, then an alert can be generated for the security team.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Watson with regard to surfacing a notification based on a risk score in the method of Newman in view of Crabtree, Liu, and Watson in order to prevent data loss for a corpus of documents and other data objects stored for an entity and to better manage data objects while maintaining a secure environment, such as by detecting anomalous behavior (Watson ¶0012 & ¶0021). With respect to claim 5, the combination of Newman in view of Crabtree, Liu, and Watson teaches the method of claim 1 (see rejection of claim 1 above), further comprising tracking access activity of the plurality of sensitive assets (Crabtree ¶0106-0108: As seen in Figure 28, a risk analyzer 2608 and a scoring engine 2704 are shown. Anomalies gathered by monitoring engines 1801-1805 (tracking activity of the plurality of sensitive assets) may be received by a risk analyzer 2703, which utilizes a number of analysis components 2311-2314 to determine the relative risk level of each identified anomaly.) and quantifying the tracked access activity by sensitive asset as historical scoring components for the plurality of sensitive assets; (Crabtree ¶0108-0110: The analyzer 2311 (historical scoring component) may be able to view typical category-level connections per session, per day, per month, per year, and compare those to expected values unique to the user, group, office location, and other relevant metrics.
Contextualizing individual actions or behaviors may be used to ensure generated alerts or signals are accurate and useful for analysts and incident response personnel. A clustering-based analyzer 2312 (historical component) may be used to assign individual periods of activity into bins for an n-dimensional histogram. This may be used to enable review of many available datasets and models that may forecast individual or aggregate metrics over time on a user- or group-specific basis. A continuous metrics analyzer 2313 (historical component) may be used to apply statistical methods and time series modeling to constant streams of metric data, while a change-over-time analyzer 2314 (historical component) allows the system to compare expected and actual behavioral metrics for variables over time and combine continuous metrics with category-based detection to increase detection capabilities while reducing false positives. Analysis results may then be provided to a scoring engine 2704 that assigns risk score values to data points based on a number of criteria, for example including but not limited to abnormal behavior patterns and developing or ongoing risk action pathways … ) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Crabtree with regard to the tracking of access activity in the method of Newman in view of Liu and Watson in order to reduce false positives when detecting threats while ensuring data compliance and protection (Crabtree ¶0105 & ¶0108).
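One way to picture the historical scoring component discussed above (tracking an asset's access activity over time, comparing observed activity against the asset's own expected baseline, and alerting only when the deviation crosses a threshold) is a simple deviation score. The function names, the z-score statistic, and the threshold are hypothetical illustrations, not drawn from Crabtree or Watson.

```python
from statistics import mean, stdev


def historical_risk_component(history: list[int], observed: int) -> float:
    """Score how far today's access count deviates from the asset's history.

    Returns a z-score-like value; larger means more anomalous. Real
    systems would use richer models (histograms, time series), per the
    analyzers described in the cited reference.
    """
    if len(history) < 2:
        return 0.0  # not enough history to judge normal behavior
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma


def should_alert(history: list[int], observed: int, threshold: float = 3.0) -> bool:
    """Surface an alert only when the deviation crosses the threshold,
    which is how thresholding reduces false positives."""
    return historical_risk_component(history, observed) > threshold


accesses_per_day = [4, 5, 6, 5, 4, 6, 5]   # typical tracked activity for one asset
print(should_alert(accesses_per_day, 5))    # ordinary day, within baseline: False
print(should_alert(accesses_per_day, 40))   # sudden spike well past baseline: True
```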
With respect to claim 8, the combination of Newman in view of Crabtree, Liu, and Watson teaches the method of claim 5 (see rejection of claim 5 above), further comprising, for each of the plurality of sensitive assets with tracked access activity, periodically updating the data loss risk score of the sensitive asset with the historical scoring component or maintaining the historical scoring component of the sensitive asset distinct from the data loss risk score of the sensitive asset. (Crabtree ¶0108: A continuous metrics analyzer 2313 may be used to apply statistical methods and time series modeling to constant streams of metric data, while a change-over-time analyzer 2314 allows the system to compare expected and actual behavioral metrics for variables over time and combine continuous metrics with category-based detection to increase detection capabilities while reducing false positives.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Crabtree with regard to the tracking of access activity in the method of Newman in view of Liu and Watson in order to reduce false positives when detecting threats while ensuring data compliance and protection (Crabtree ¶0105 & ¶0108). Claims 2, 11, 15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Newman et al. (US PGPub No. 20180191771-A1) in view of Crabtree et al. (US PGPub No. 20220377093-A1), Liu et al. (US-9807094-B1), Watson et al. (US PGPub No. 20180248895-A1), and Basavapatna et al. (US PGPub No. 20130097709-A1).
With respect to claim 2, the combination of Newman in view of Crabtree, Liu, and Watson teaches the method of claim 1 (see rejection of claim 1 above), wherein determining the data loss risk score for each of the plurality of sensitive assets comprises: [obtaining the policy compliance risk assessment and] the user-based risk assessment for an organization responsible for the plurality of sensitive assets; (Crabtree ¶0096: Figure 19 shows the operation of a human activity monitoring engine 1801. A human activity monitoring engine 1801 may collect data from a variety of sources 1901, including (but not limited to) communications via various channels (such as email, web-based chat, text messages, or phone calls), access logs for accounts or services, software installation or utilization statistics, work times or locations, or account login and logout records. These behavior data points may then be analyzed by comparing observed behavioral data 1903 against expected behavior 1902 according to an established behavioral model developed using statistical and machine learning techniques. This model may be based on initial expectations and then refined over time, applying techniques such as curve fitting to improve predictions and more accurately reflect observed “normal” behavior. An anomaly detector 1904 may then be used to identify mismatches between anticipated and actual behavior, which may then be provided as output to the risk analysis and scoring engine 1810.) obtaining the cloud infrastructure risk assessment for each cloud infrastructure; and (Crabtree ¶0097: Figure 22 shows the operation of an organization activity monitoring engine 1804.
An organization activity monitoring engine 1804 may collect data from a variety of sources 2201, including (but not limited to) organization charts, titles, interpersonal interactions, intra- or inter-departmental interactions, internal entity interactions (such as interactions between teams within an enterprise), or external entity interactions (such as interactions with clients or service providers). These behavior data points may then be analyzed by comparing observed behavioral data 2203 against expected behavior 2202 according to an established behavioral model developed using statistical and machine learning techniques. This model may be based on initial expectations and then refined over time, applying techniques such as curve fitting to improve predictions and more accurately reflect observed “normal” behavior. An anomaly detector 2204 may then be used to identify mismatches between anticipated and actual behavior, which may then be provided as output to the risk analysis and scoring engine 1810.) obtaining system configuration risk assessments for the plurality of sensitive assets; (Crabtree ¶0098: Figure 21 shows the operation of a system activity monitoring engine 1802. A system activity monitoring engine 1802 may collect data from a variety of sources 2101, including (but not limited to) system endpoints (for example, system monitor “sysmon” data, event logs, or error reports), network data collectors such as various infrastructure servers (for example, email, print, or file servers), perimeter security devices such as a firewall or intrusion detection system (IDS), network security monitoring tools such as packet inspection software, or endpoint agents such as, for example, OSQuery™ or Tanium™ services.
These behavior data points may then be analyzed by comparing observed behavioral data 2103 against expected behavior 2102 according to an established behavioral model developed using statistical and machine learning techniques. This model may be based on initial expectations and then refined over time, applying techniques such as curve fitting to improve predictions and more accurately reflect observed “normal” behavior. An anomaly detector 2104 may then be used to identify mismatches between anticipated and actual behavior, which may then be provided as output to the risk analysis and scoring engine 1810.) wherein determining the data loss risk score for each sensitive asset of the plurality of sensitive assets comprises aggregating [the policy compliance risk assessment,] the user-based risk assessment, the cloud infrastructure risk assessment of the cloud infrastructure hosting the sensitive asset, and the system configuration risk assessment for the system configurations of the sensitive asset into the data loss risk score for the sensitive asset. (¶0100-0101: Figure 23 is a more detailed illustration of the operation of a risk analysis and scoring engine 1810. Anomalies gathered (aggregating) by monitoring engines 1801-1804 may be received by a risk analyzer 2310, which utilizes a number of analysis components to determine the relative risk level of each identified anomaly.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Crabtree with regard to the data loss score in the method of Newman, Liu, and Watson in order to identify attacks before data loss occurs (Crabtree ¶0016-0018).
Newman in view of Crabtree, Liu, and Watson does not disclose: obtaining the policy compliance risk assessment; aggregating the policy compliance risk assessment. However, Basavapatna teaches obtaining the policy compliance risk assessment (¶0030: As seen in Figure 2, a behavioral risk profile for a user can include categorical risk profiles or be based on separate risk profiles that are developed characterizing types or categories of user behavior within the system 240, such as email behavior, network usage, access and use of enterprise-owned resources (such as confidential content or data of the enterprise), internet usage using system-affiliated devices, password protection, policy compliance, among others.) and aggregating the policy compliance risk assessment (¶0041: Further, a user's risk score or reputation can be categorized, with distinct risk scores being generated in each of a variety of categories, from the events detected at the device 230, 235 using behavioral risk agents 215, 220, such as separate scores communicating the user's behavioral reputation in email use, internet use, policy compliance, authentication efforts (e.g., password strength), and so on.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Basavapatna with regard to obtaining the policy compliance risk assessment in the method of Newman in view of Crabtree, Liu, and Watson in order to protect and maintain stable computers and systems by addressing types of weaknesses and risks within the system (Basavapatna ¶0003). With respect to claim 11, Newman teaches a non-transitory, machine-readable medium having program code stored thereon, the program code comprising instructions to: (¶0018-0019: Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types.
Moreover, those skilled in the art will appreciate that embodiments may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and comparable computing devices. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.) determine a set of one or more cloud infrastructures hosting sensitive assets for an organization; (¶0015: As briefly described above, embodiments are directed to threat intelligence management in a security and compliance environment. In some examples, a threat explorer module of a security and compliance service may detect, investigate, manage, and provide actionable insights for threats at an organizational level. ¶0030: As seen in Figure 2A, in some examples, data to be analyzed, categorized, protected, and handled according to policies may come from a variety of sources such as a communications data store 202, a collaboration data store 204, and cloud storage 206.) quantify holistic data loss risk for sensitive assets of the organization hosted in the set of one or more cloud infrastructures; (¶0037: As shown in Figure 4, a system according to embodiments may receive communication and document metadata 402, 404 to correlate stored communications, documents, and non-document content in a multi-stage, correlated storage 442. Further inputs to the system may include audit activities 412, click traces 414, and data loss prevention (DLP) hits 416. ¶0016: As used herein, contextual correlation (corresponding) refers to multi-stage evaluation and correlation of data such as communications, documents, and non-document content in light of associated metadata and activities.
For example, deletion of documents in a particular location may be assessed for potential threat based on sensitive information contents of the documents, the deleting person or entity, the location of the deleting person or entity, etc. Thus, a more granular approach to threat assessment and management (quantify holistic data loss risk) may be achieved, reducing false positives and allowing early detection of actual threats.) [wherein the instructions to determine the holistic data loss risk comprise instructions to, for each cloud infrastructure, obtain a risk assessment of the cloud infrastructure; obtain system configuration risk assessments for the sensitive assets hosted in the cloud infrastructure; obtain a risk assessment of policy compliance corresponding to the cloud infrastructure and the organization; obtain user-based risk assessments for the sensitive assets hosted in the cloud infrastructure; determine a baseline data loss risk score for each sensitive asset hosted in the cloud infrastructure based on a combination of the risk assessments for the sensitive asset; and] surface, to a security operations center, data loss prevention (DLP) incidents of the sensitive assets based[, at least in part, on the baseline data loss risk scores;] (¶0031-0035: As seen in Figure 2A, user experiences such as a threat intelligence user interface 222 may be provided as part of a security and compliance center (security operations center) 220 to present actionable visualization associated with various aspects of the service and receive user/administration input to be provided to various modules. As seen in Figure 3, in an example configuration of diagram 300, a threat explorer 304 may receive as input threat feeds 308, which may include internal data, external threat data, and user profiles 326.
Analyzing the threat data in light of contextual factors such as user profiles, activities, affected data types, etc., the threat explorer module 304 may generate threat alerts 306, remediation actions 310, and manage a threat intelligence dashboard 302.). track access activity corresponding to the sensitive assets; based on detection of an access request for one of the sensitive assets, determine a risk assessment of a requestor corresponding to the access request; for the sensitive asset corresponding to the detected access request, generate an in-transit data loss risk score based, at least in part, on the baseline data loss risk score of the sensitive asset and the risk assessment of the requestor; and determine whether to surface a DLP incident corresponding to the access request and the sensitive asset based, at least in part, on the in-transit data loss risk score. Newman does not disclose: wherein the instructions to determine the holistic data loss risk comprise instructions to, for each cloud infrastructure, obtain a risk assessment of the cloud infrastructure; obtain system configuration risk assessments for the sensitive assets hosted in the cloud infrastructure; obtain a risk assessment of policy compliance corresponding to the cloud infrastructure and the organization; obtain user-based risk assessments for the sensitive assets hosted in the cloud infrastructure; determine a baseline data loss risk score for each sensitive asset hos

Prosecution Timeline

May 26, 2023
Application Filed
Mar 17, 2025
Non-Final Rejection — §101, §103
Jun 17, 2025
Interview Requested
Jun 24, 2025
Examiner Interview Summary
Jun 24, 2025
Applicant Interview (Telephonic)
Jul 28, 2025
Response Filed
Oct 30, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506662
SERVICE PROVISION METHOD, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Dec 23, 2025
Patent 12505223
System & Method for Detecting Vulnerabilities in Cloud-Native Web Applications
2y 5m to grant Granted Dec 23, 2025
Patent 12491837
ELECTRONIC SIGNAL BASED AUTHENTICATION SYSTEM AND METHOD THEREOF
2y 5m to grant Granted Dec 09, 2025
Patent 12411931
FUEL DISPENSER AUTHORIZATION AND CONTROL
2y 5m to grant Granted Sep 09, 2025
Patent 12399979
PROVISIONING A SECURITY COMPONENT FROM A CLOUD HOST TO A GUEST VIRTUAL RESOURCE UNIT
2y 5m to grant Granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
94%
With Interview (+12.8%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
