Prosecution Insights
Last updated: April 19, 2026
Application No. 18/425,625

CONTEXT-BASED POLICY MAPPING FOR SECURITY COMPLIANCE

Final Rejection — §101, §103, §112
Filed: Jan 29, 2024
Examiner: WYSZYNSKI, AUBREY H
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: Palo Alto Networks Inc.
OA Round: 2 (Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (635 granted / 710 resolved) — +31.4% vs TC avg, above average
Interview Lift: +12.6% (moderate) among resolved cases with interview
Avg Prosecution: 2y 10m typical timeline; 26 applications currently pending
Total Applications: 736 across all art units (career history)

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 24.9% (-15.1% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 710 resolved cases.

Office Action

§101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Response to Arguments

Applicant's arguments and amendments, filed 12/23/25, with respect to claim 2, have been fully considered and are persuasive. The rejection of claim 2 under 35 U.S.C. 112 has been withdrawn. Applicant's amendments and arguments with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection. Applicant argues the combination of Lin and Hatch. However, a new ground of rejection under 35 U.S.C. 103 in view of Lin and further in view of Guy is made.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed towards an abstract idea.

Claims 1, 6 and 12:

Step 1: Statutory Category: Yes. As per claim 1, the claim recites a "method," qualifying it as a statutory "process". As per claim 6, the claim recites "non-transitory machine-readable media," and qualifies as a statutory "manufacture". As per claim 12, the claim explicitly recites a "system" comprising hardware ("a first processor," "a second processor," "machine-readable medium"); this qualifies as a statutory "machine" or "system".

Step 2A: Is the Claim Directed to a Judicial Exception?

Prong 1: Does the claim recite an abstract idea? Yes. The claims are directed to the collection, analysis, and display of information, which are generally considered abstract ideas.
The claims recite: gathering information (identifying categories and attributes), organizing that information (generating a policy map), distributing the information (communicating it to other devices), collecting new data (probing and receiving real-time reports), analyzing it against a framework (detecting changes against a policy map), comparing data to a rule (identifying a deviation from a security policy), and acting on that rule (performing security policy enforcement). This aligns closely with the abstract concepts of "methods of organizing human activity" (administering a set of rules or policies) and "mental processes" (comparing newly collected data against a baseline map to spot deviations).

Prong 2: Is the abstract idea integrated into a practical application? No. To integrate an abstract idea into a practical application, the claim must recite a specific technical improvement to the functioning of a computer or another technology. The instructions rely on generic, high-level computer operations: "identify," "generate," "configure," "probe," "detect," and "perform enforcement." There is no specific, non-conventional technical mechanism described for how the probing is executed or how the enforcement is technically achieved at the hardware or operating system level. It reads as a generic computer executing an abstract security protocol, which is insufficient to integrate the abstract idea into a practical application. The claim essentially says "apply an administrative security policy over a network." It uses computers as a generic tool to enforce a rule, rather than offering a specific, technical solution to a uniquely computing-centric problem. Therefore, it does not integrate the abstract idea into a practical application.

Step 2B: Inventive Concept (Significantly More): No. Because the claim is directed to an abstract idea, we must look for an "inventive concept," i.e., elements that transform the nature of the claim into a patent-eligible application.
The additional elements here consist of generic computer components performing routine, conventional activities: "detecting login events" is a basic OS/network function; "communicating a policy map" or "communicating a real-time report" is routine data transmission; "probing... for data" is conventional data gathering. Using a "first processor" to communicate with a "second processor" over a network to enforce a policy is well-understood, routine, and conventional computer activity. Even the final step, "performing security policy enforcement," is stated at a high level of functional abstraction. It does not specify a non-conventional or innovative technical method of enforcement. Simply appending generic computer functionality to an abstract idea does not provide the "significantly more" required in Step 2B.

Claims 2, 7, 13: These claims introduce a timeout feature. If a "first time period" elapses before probing is complete, the system falls back on sending a "cached report" from an earlier time period. While caching and timeouts are useful, they are examples of well-understood, routine, and conventional computer functions. Relying on a cache when a real-time process times out does not amount to an "inventive concept" (Step 2B). It simply applies standard computer behavior to the abstract idea of reporting compliance data.

Claims 3, 4, 8, 9, 14, 16, 17: These claims specify how or where the data is gathered. They state that the categories are identified by "receiving... indications" of changes (3, 8, 14), that the probing is done by checking "event data logged on the medium" (4, 9, 16), or that probing is triggered by login indications (17). Gathering data by reading an event log, receiving a data transmission, or triggering a scan based on a login event are basic, generic computer operations. Simply appending conventional data gathering steps to an abstract idea does not make it patent-eligible.
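For reference, the probe-with-timeout behavior attributed to claims 2, 7 and 13 is a conventional fallback pattern. A minimal sketch (all names are hypothetical illustrations, not language from the claims or prior art):

```python
import time

# Hypothetical sketch: probe a device's compliance categories within a
# time budget; if probing does not finish in time, fall back to a cached
# report from an earlier period (the claimed "cached report" behavior).
def compliance_report(probe, cache, timeout_s=1.0):
    start = time.monotonic()
    results = {}
    for category, check in probe.items():
        if time.monotonic() - start > timeout_s:
            # Timeout: return the cached report from a prior time period.
            return {"cached": True, "results": cache}
        results[category] = check()  # e.g., read an event log entry
    return {"cached": False, "results": results}

# Toy usage with instantaneous checks, so probing completes in time.
probe = {"antivirus": lambda: "enabled", "firewall": lambda: "disabled"}
cache = {"antivirus": "enabled", "firewall": "enabled"}
report = compliance_report(probe, cache)
```

As the rejection notes, nothing in this pattern is technically novel; the cache-on-timeout fallback is standard defensive programming.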
Claims 5, 10, 11, 15, 18, 19, 20: These claims specify the types of data being analyzed or the environment in which the system operates. They limit the categories to "cybersecurity software and system software" (5, 11, 19), specify that attributes indicate "manufacturers and product versions" (10, 15, 18), or define the "context" as including IP addresses, zones, or devices (20). Limiting an abstract idea to a specific technological environment or a specific field of use does not confer patent eligibility. Analyzing software versions or IP addresses is still just collecting and analyzing data. Adding these specific data types does not provide a technical solution to a technical problem.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

As per claim 1, the claim recites "identifying a deviation in behavior from a security policy for the first device." However, the term "security policy" has not been introduced previously in the claim. The claim earlier discusses "categories of security compliance" and a "policy map."
If the "security policy" is meant to be the "policy map," it needs to be explicitly claimed as such (e.g., "...deviation in behavior from the policy map"). If it is a separate element, it lacks proper introduction, rendering the claim indefinite.

As per claim 6: the claim states to perform enforcement based on "a real-time report indicating the changes." However, this claim never actually introduces or generates a real-time report in the preceding steps; it suddenly appears at the end and lacks antecedent basis. As in claim 1, this claim recites "a security policy for the one or more... media" at the very end without ever introducing it or explicitly linking it to the "policy map." The phrase "generate a policy map based indicating the at least one..." is grammatically incorrect and renders the limitation unclear; it is assumed to read "indicating" or "based on." The phrase "context of the one or more non-transitory machine-readable media" is also unclear: a device or a network might have a "context" (e.g., location, user role), but it is very difficult to define the "context" of a physical storage medium. Further, it is unclear from a technical standpoint what "configure the policy map on the medium" entails.

As per claim 12: in "detecting a login event at a second machine-readable medium," a user or system typically logs into a device, a network, or an application, not a "machine-readable medium" (which is just a storage component like a hard drive or RAM). Claiming a login event at a storage medium is technologically unclear. As in claim 6, assigning a "context" to a physical storage medium ("context of the first machine-readable medium") is unclear. Finally, the claim ends with the phrase "...to the first machine-readable medium for security policy enforcement." This is a statement of intended use rather than a positive, structural limitation or an action performed by the claimed system.
Furthermore, "security policy" lacks proper antecedent basis, as the claim previously only discussed a "policy map" and "security compliance."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al., US 2023/0388282, and further in view of Guy et al., US 2022/0329604.

Regarding claim 1, Lin discloses a method comprising: at a first device (Fig. 3, 226: authorization service), identifying at least one of categories and category attributes of security compliance for a context of the first device (Policy Engines 242. Paragraph 0063: FIG. 3, the policies engines 242 may include a logic policy engine 302, a human policy engine (e.g., a human verification policy engine) 304, and an AI policy engine 306 to determine the permissions and access levels that may be granted to a user based on the particular user attributes of the user.)
wherein the categories comprise cybersecurity categories, and wherein the category attributes indicate states of enablement and characteristics related to corresponding cybersecurity categories (0069: the AI policy engine 306 may communicate with a security system of the enterprise subsystems 312 to modify authorization threshold levels depending on a threat level. In this case, the authorization threshold levels may be modified, for example, by limiting the authorization policies of some of the lower authorization tiers 308 from being executed, or by increasing the authentication requirements for at least some of the authorization tiers 308. For example, the app functions 244, the authorization policies generated by the logic policy engine 302, the user attributes associated thereto, the level of authentication required by the authorization policies, and/or the like may be categorized by the AI policy engine 306 into various privileges, and those categorized as higher privileges may be prevented from being used until normal operations resume.);

generating a policy map indicating the at least one of categories and category attributes (0058: The policy engines 242 may generate various authorization policies according to the app functions 244 and the app requirements 245 defined by the application providers 122, and may generate a corresponding application profile for each of the applications App 1 to App N to associate (e.g., to tie or to map) each of the authorization policies to various expected user attributes that, when present, triggers the execution of a corresponding authorization policy to generate the authorization token.);

and based on detecting login events at one or more devices communicatively coupled to the first device, communicating the policy map to the one or more devices (0058: Accordingly, when a user requests login to a particular application, the authentication service 224 may use the application profile of the particular application to filter the user attributes retrieved from the contacts MDM service 222 to generate the app user profile for the user, and the authorization service 226 may use the app user profile to determine which of the authorization policies to execute in order to generate an appropriate authorization token for the particular application.);

and at each second device of the one or more devices, probing the second device for data that indicates changes to the at least one of categories and category attributes indicated by the policy map (Fig. 4, block 420. Paragraph 0078: the application profile for an application may be generated by the authorization service 226 to map the authorization tier to certain user attributes defined by the application requirements, and the application profile may be used (e.g., by the authentication service 224) to filter all of the user attributes of the user stored in the contacts MDM 222 to include only the user attribute set relevant to the application in order to provide the suitable authorizations to the user. Fig. 5, 520-525);

and based on completion of the probing during a first time period, communicating a real-time report indicating results of the probing to the first device (0070: AI policy engine 306 may communicate with a global change request system of the enterprise subsystems 312 to identify production deployment events during which time-limited authorizations may be granted to some users having appropriate user attributes.).

Lin lacks or does not expressly disclose a deviation in behavior from a security policy based on changes to the at least one of categories and category attributes, and performing security policy enforcement based on the identified deviation.
However, Guy teaches at the first device, identifying a deviation in behavior from a security policy for the first device based on the real-time report, wherein identifying the deviation in behavior is based, at least in part, on changes to the at least one of categories and category attributes indicated by the results of the probing; and performing security policy enforcement based on the identified deviation (0027: the computer system and the operator portal can execute Blocks of the method S100 to: generate a manifest of endpoint devices and their security technology configurations; ingest a security policy for the computer network; detect deviations from the security policy in configurations of a subset of these endpoint devices; and selectively prompt security personnel to investigate (e.g., reconfiguration, quarantine) this subset of endpoint devices. 0116: the computer system: accesses a security policy for the computer network in Block S170; and generates a prompt to selectively investigate endpoint devices in Block S172 based on deviation from the security policy. More specifically, the computer system can: access the security policy that defines rules for combinations and configurations of security technologies (and non-security tools) deployed on endpoint devices connected to the computer network; compare these rules to endpoint device configurations recorded in the current manifest; detect differences between these rules and configurations of individual endpoint devices (or groups, clusters of endpoint devices); and selectively prompt security personnel to investigate these endpoint devices.
For example, in response to identifying a particular endpoint device that deviates from a security technology configuration rule contained in the security policy, the computer system can prompt security personnel to: quarantine the endpoint device; push a systems or security technology update to the endpoint device; or limit account or user access at the endpoint device until the endpoint device is properly reconfigured.).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Lin with Guy to include a deviation in behavior from a security policy based on changes to the at least one of categories and category attributes, and performing security policy enforcement based on the identified deviation, in order to properly reconfigure an endpoint, as taught by Guy (0116).

Regarding claim 2, Lin lacks or fails to expressly disclose a cached report. However, Guy further teaches, based on timing out of the first time period prior to completion of the probing, communicating a cached report indicating results of probing from a second time period prior to the first time period to the first device (0128: the computer system can compare a prior manifest to a current manifest in order to generate a real-time or near real-time representation of any change events occurring for a selected set of devices on the computer network over a corresponding time period. Alternatively, the computer system can compare sets or groups of current and prior manifests to detect and/or determine large scale patterns of security policy compliance or non-compliance for the selected device or set of devices. 0132: correlate and compare the periodic manifests to detect endpoint devices that have changed status in the implementation of required encryption technologies).
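Guy's manifest-versus-policy comparison, as characterized above, amounts to a straightforward configuration diff. A minimal sketch (hypothetical names, not drawn from either reference):

```python
# Hypothetical sketch of the manifest-vs-policy comparison attributed to
# Guy above: compare each endpoint's recorded configuration against the
# policy rules and collect deviating (device, technology) pairs so that
# security personnel can be prompted to investigate them.
def find_deviations(policy, manifest):
    deviations = []
    for device, config in manifest.items():
        for technology, required_state in policy.items():
            if config.get(technology) != required_state:
                deviations.append((device, technology))
    return deviations

policy = {"disk_encryption": "enabled", "edr_agent": "enabled"}
manifest = {
    "laptop-01": {"disk_encryption": "enabled", "edr_agent": "enabled"},
    "laptop-02": {"disk_encryption": "disabled", "edr_agent": "enabled"},
}
# laptop-02 deviates on disk_encryption and would be flagged.
```

Comparing successive manifests over time (Guy, 0128) is the same diff applied with a prior manifest standing in for the policy.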
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Lin with Guy to include a cached report in order to compare any change of events over a corresponding time period, as taught by Guy (0128).

Regarding claim 3, Lin, as modified above, further teaches the method of claim 1, wherein identifying the at least one of categories and category attributes of security compliance for the context of the first device comprises identifying the at least one of categories and category attributes based, at least in part, on receiving, at the first device, indications of one or more changes to security compliance for the context of the first device (0028: the IAM system may be communicably connected to a global change request system to identify change requests corresponding to production deployment times that may affect the particular application, and may modify the user's access levels for the particular application for a suitable duration at the production deployment times. As another example, the access levels granted to a user for a particular application may be dynamically modified based on a security event, for example, such as when the organization is under a current threat or attack. In this case, the IAM system may limit access rights to applications and systems for all or some of the users in order to reduce risks to the organization until the threat or attack has dissipated.).

Regarding claim 4, Lin, as modified above, further teaches the method of claim 1, wherein probing the second device comprises probing event data logged on the second device for changes to the at least one of categories and category attributes (0070: the AI policy engine 306 may communicate with a global change request system of the enterprise subsystems 312 to identify production deployment events during which time-limited authorizations may be granted to some users having appropriate user attributes.
For example, in response to a production deployment event, the AI policy engine 306 may identify users having a production deployment tag attribute to grant enhanced permissions or access levels to the identified users for the affected applications for a duration of the production deployment event.).

Regarding claim 5, Lin, as modified above, further teaches the method of claim 1, wherein the cybersecurity categories comprise categories of cybersecurity software and system software, and wherein the state of enablement and characteristics related to corresponding cybersecurity categories comprise states of enablement and characteristics of the cybersecurity software and system software (0069: the app functions 244, the authorization policies generated by the logic policy engine 302, the user attributes associated thereto, the level of authentication required by the authorization policies, and/or the like may be categorized by the AI policy engine 306 into various privileges, and those categorized as higher privileges may be prevented from being used until normal operations resume.).

As per claims 6-9, 11-14, 16-17 and 19, these are medium and system versions of the claimed method discussed above in claims 1-5, wherein all claimed limitations have been addressed and/or cited as set forth above.

Regarding claims 10, 15 and 18, Lin, as modified above, further teaches wherein the at least one of categories and category attributes of security compliance indicate at least one of manufacturers and product versions (0028: a user may be tagged with an attribute (e.g., a role attribute) indicating that the user is a product deployment engineer, and thus, may need super user permissions for a particular application during a production deployment event, but not as much permissions at other times.
In this example, the IAM system may be communicably connected to a global change request system to identify change requests corresponding to production deployment times that may affect the particular application, and may modify the user's access levels for the particular application for a suitable duration at the production deployment times.).

Regarding claim 20, Lin, as modified above, further discloses the system of claim 12, wherein the context of the first machine-readable medium comprises at least one of a source zone, a destination zone, a source Internet Protocol (IP) address, a destination IP address, a source device, and a destination device (0040: the interfaces 212 and 214 may support various protocols (e.g., TCP/IP, User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Internet Message Access Protocol (IMAP), Simple Mail Transfer Protocol (SMTP), and/or the like) and/or data communication interfaces (e.g., APIs, Web Services, and/or the like) for facilitating data communications with the application providers 122 and the identity providers 124.).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AUBREY H WYSZYNSKI, whose telephone number is (571) 272-8155. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ALI SHAYANFAR, can be reached at 571-270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUBREY H WYSZYNSKI/
Examiner, Art Unit 2434

Prosecution Timeline

Jan 29, 2024
Application Filed
Sep 18, 2025
Non-Final Rejection — §101, §103, §112
Dec 09, 2025
Interview Requested
Dec 23, 2025
Response Filed
Mar 16, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598211 — CYBERATTACK SCORING METHOD, CYBERATTACK SCORING APPARATUS, AND COMPUTER READABLE STORAGE MEDIUM STORING INSTRUCTIONS TO PERFORM CYBERATTACK SCORING METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592932 — METHOD AND SYSTEM FOR AN INTEGRATED PROCESS TO STREAMLINE PRIVILEGED ACCESS MANAGEMENT
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12580964 — OPTIMIZATION FOR ACCESS POLICIES IN COMPUTER SYSTEMS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12580887 — SCALABLE FLOW DIFFERENTIATION FOR NETWORKS WITH OVERLAPPING IP ADDRESSES
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12580967 — CONTEXTUAL SECURITY POLICY ENGINE FOR COMPUTE NODE CLUSTERS
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89%
With Interview: 99% (+12.6%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 710 resolved cases by this examiner. Grant probability derived from career allow rate.
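The headline figures reduce to simple ratios of the examiner's stated career counts; a quick check (the 99% cap on the with-interview figure is an assumption about how the dashboard rounds, not a stated formula):

```python
# Recompute the dashboard's headline figures from the stated raw counts.
granted, resolved = 635, 710
allow_rate = granted / resolved       # career allow rate (shown as 89%)
interview_lift = 0.126                # stated lift among interviewed cases

# Assumption: the "with interview" figure is rate + lift, capped at 99%.
with_interview = min(allow_rate + interview_lift, 0.99)

print(round(allow_rate * 100, 1))     # 89.4, displayed rounded to 89%
```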
