Prosecution Insights
Last updated: April 19, 2026
Application No. 18/080,535

AUTOMATED POLICY COMPLIANCE

Final Rejection: §101 and §103
Filed: Dec 13, 2022
Examiner: CRANDALL, RICHARD W.
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Red Hat Inc.
OA Round: 4 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 64%

Examiner Intelligence

Career Allow Rate: 30% (90 granted / 301 resolved; -22.1% vs TC avg)
Interview Lift: +33.8% for resolved cases with interview
Avg Prosecution: 3y 1m
Total Applications: 343 across all art units (42 currently pending)

Statute-Specific Performance

§101: 34.6% (-5.4% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Tech Center average is an estimate • Based on career data from 301 resolved cases

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to correspondence received January 23, 2026. Claims 1, 9, and 17 are amended. Claims 1-20 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s):

Claims 1 and 17: A method comprising:
receiving a body of training data comprising a policy document and policy action codes that correspond with the policy document, wherein the policy action codes comprise machine-readable computer code to implement policies contained in the policy document;
processing the policy document to generate a structured dataset of policy actions;
generating a trained model to create a mapping between the policy actions and the policy action codes;
receiving a new policy document;
generating new policy action codes from the new policy document using the trained model, wherein the new policy document is compiled to generate a new structured dataset of policy actions, wherein the new structured dataset of policy actions includes at least policy subjects identified based on an analysis of labeled text within the new policy document and policy variables configured to provide boundaries for new policy actions;
updating the training data based on the new policy document;
testing the trained model using a portion of the training data updated based on the new policy document, wherein an error value is computed for each test sample and a distribution of test sample errors is computed; and
sending the new policy action codes

Claim 9:
receive a body of training data comprising a policy document and policy action codes that correspond with the policy document, wherein the policy action codes comprise machine-readable computer code to implement policies contained in the policy document;
process the policy document to generate a structured dataset of policy actions;
generate a trained model to create a mapping between the policy actions and the policy action codes;
receive a new policy document;
generate new policy action codes from the new policy document using the trained model, wherein the new policy document is compiled to generate a new structured dataset of policy actions, wherein the new structured dataset of policy actions includes at least policy subjects identified based on an analysis of labeled text within the new policy document and policy variables configured to provide boundaries for new policy actions;
update the training data based on the new policy document;
test the trained model using a portion of the training data updated based on the new policy document, wherein an error value is computed for each test sample and a distribution of test sample errors is computed;
send the new policy action codes;
audit the computing environment using the new policy action codes to generate a human-readable compliance validated document; and
store the human-readable compliance validated document

The abstract idea in claims 1, 9, and 17, which are similar in scope (claim 9 has two more limitations than claims 1 and 17), is a mental process, as these steps are those that can be performed mentally or with the aid of pen and paper; the claims additionally recite mathematical relationships. First, training data can be received mentally through observation. This includes machine-readable code which, as described in pars 052, 059, could be text (“policy documents and…policy actions”).
Then, one could mentally process the received information and generate a structured dataset, which could be a table written on paper. Then, one could generate a trained model mentally by taking the dataset and mapping between actions and codes. This could be done on paper. Then, one could receive a new policy document by looking at it, and the new policy document is “compiled” (assembled) “to generate” (for the purpose of generating) a new structured dataset (a dataset is information; structured means organized by rules), identified based on an analysis of labeled text (a mental step of looking at labels and analyzing them). Then, one could generate new action codes by using the mapping previously created. Then, one could update the training data based on the new policy document by changing values that are different from the new document to the old document. Finally, the new policy action codes can be mentally sent by writing them out with pen and paper.

Per claim 9, a computing environment can be audited using the new action codes to generate a document that is human readable, i.e., the previously determined new action codes can be compared to a computing environment through observation and a document generated, i.e., written. Then a document can be stored, i.e., written with pen and paper, which is storage of a document, as a document is defined by its contents.

Further, identifying policy subjects based on the new policy document and configuring policy variables to provide boundaries for new policy actions are mental judgments, because one can identify a subject and then configure (set, determine) variables to provide boundaries (for example upper or lower limits, like a budget) for new policy actions. These steps are therefore mental process steps.
Then, per testing the trained model using training data updated based on the new policy document, one can compute an error value and make a distribution of test sample errors by using mathematical relationships such as a Boolean compare and counting (counting FALSE), then summing the values and plotting the values as a distribution.

This judicial exception is not integrated into a practical application because the elements, both individually and in combination, amount to instructions to apply the abstract idea to a generic computer. The processing, memory, “system comprising…memory, processor,” and to/from a client device amount to no more than instructions that the abstract idea will be performed on a computer, in any way, with information sent to and from a client device. This is similar to examples such as TLI Communications and FairWarning in MPEP 2106.05(f)(2) and therefore amounts to no more than “apply it” steps. For this reason, there is not a practical application of the abstract ideas claimed in the independent claims.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, for the same reasons that there is not a practical application, the combination of additional elements is not significantly more than the abstract idea. The reasoning carries over because, as the combination of elements claimed is merely instructions to apply a generic (or ordinary) computer to the abstract idea, “apply it” limitations are not significantly more than the abstract idea. Therefore, Applicant has not recited significantly more than the abstract idea.

Claims 2 and 10, which are similar in scope, further define the abstract idea by taking the new policy document through a mental process (observation, judgment) to generate a new structured dataset and then inputting it into a trained model.
Inputting into a trained model is, under a broadest reasonable interpretation in light of the specification, a mental process, as a trained model can have inputs (variables) which can receive information, which is the extent of the claim.

Claims 3 and 11, which are similar in scope, are similar to the final limitation in claim 1; as the codes were previously sent, sending new policy actions is a further mental process step, and sending to a client device is a mere “apply it” instruction.

Claims 4, 12, and 18, which are similar in scope, have an “apply it” limitation in “using a natural language processor,” and the limitation to process the policy document…to generate labeled text is a further mental process step. As claimed, using an NLP is an “apply it” step because it is equivalent to applying an NLP to the abstract idea: the NLP is claimed only in terms of its functional result and not with detail as to the steps which the NLP performs.

Claims 5, 13, and 19, which are similar in scope, have the additional element of a classifier, but it is an applied element, as it is described in terms of its functional result and moreover is described in the specification as “any suitable type of classifier,” therefore limited only by its functional result. Par 034. The steps are further mental process steps because they identify information from labeled text, which could be done mentally by observing labeled text and making judgments about the information within.

Claims 6, 14, and 20, which are similar in scope, further describe the mental process of the independent claims. A human-readable natural language text document is text, which one can mentally process by reading it.

Claims 7 and 15, which are similar in scope, further describe the policy document as a standard issued by a standards organization.
This is a part of the mental process identified in the independent claims because one could read this kind of policy document (and in fact it would be typical or normal to do so), and reading is the mental process of observation.

Claims 8 and 16, which are similar in scope, recite the additional element of defining the trained model. This is an “apply it” element because, though the trained models are kinds of AI, there is no detail as to how they are implemented or trained; they are therefore recited as instructions to apply the AI model to the abstract idea. Therefore, claims 8 and 16 do not recite a practical application or significantly more.

Therefore claims 1-20 are rejected under 35 USC 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Bulut et al., US PGPUB 20210055933 A1 ("Bulut"), in view of Hernandez et al., US PGPUB 20210120041 A1 (“Hernandez”), further in view of Durvasula et al., US PGPUB 20230123077 A1 (“Durvasula”).

Per claims 1, 9, and 17, which are similar in scope, Bulut teaches receiving a body of training data comprising a policy document and policy action codes that correspond with the policy document, wherein the policy action codes comprise machine-readable computer code to implement policies contained in the policy document, in par 020: “Given the above problem with current compliance policy management and/or scheduling technologies not discovering dependencies between compliance policies and/or scheduling parallel execution plan(s) to execute two or more compliance policies simultaneously, the present disclosure can be implemented to produce a solution to this problem in the form of systems, computer-implemented methods, and/or computer program products that can: a) use previous history of compliance policy dependencies to train a machine learning model to determine whether a given set of policies (e.g., two policies (P.sub.1, P.sub.2)).” See also par 089: “Input: previously submitted compliance policy description and corresponding code pairs {{D.sub.i, D.sub.j,} {C.sub.i, C.sub.j}} and their associated labels L.sub.ij {1, 0}, where the label designations 1 and 0 can represent dependency or not and direction of dependency.”

For implementing policies contained in the policy document, see par 020: “c) add compliance policy P.sub.i to a Directed Acyclic Graph (DAG) and repeat the process until all unknown dependencies are determined; d) schedule a compliance policy execution based on the number of parallelism (e.g. number of available central processing units (CPU)) and DAG; and/or e) use operational data (e.g., service tickets, request tickets, incidents, etc.) and expert feedback (e.g., subject matter expert (SME) feedback to update the dependencies.”

Bulut then teaches processing the policy document to generate a structured dataset of policy actions in par 063: “In some embodiments, compliance policy management and scheduling system 102, policy analyzer component 108, and/or trainer component 202 can compile such historical data into a historical data index (e.g., a log) that can be stored on a memory device such as, for instance, memory 104 and/or a remote memory device (e.g., a memory device of a remote server).”

See pars 092-093 for the structured dataset: “Store {T.sub.i,j, C.sub.i,j} as a feature vector in a list X, where list X can be stored on a memory device such as, for instance, memory 104 and/or a remote memory device (e.g., a memory device of a remote server). Store L.sub.ij in a list Y, where list Y can be stored on a memory device such as, for instance, memory 104 and/or a remote memory device (e.g., a memory device of a remote server).”

See also par 059: “The policy analyzer component 108 can employ a model and/or a natural language extraction process to identify such similarities defined above based on a cosine similarity between vector representations of text extracted from, for instance, descriptions of compliance policies, codes of compliance policies, and/or a common weakness enumeration (CWE) system corresponding to respective compliance policies. For example, policy analyzer component 108 can employ a natural language extraction process defined above and/or a model that has been trained (e.g., via trainer component 202 as described below with reference to FIG. 2) to identify such similarities based on such a cosine similarity.
In some embodiments, policy analyzer component 108 can employ a model defined above and/or a natural language extraction process to identify such similarities between a certain new compliance policy and existing compliance policies (e.g., historical compliance policies), where dependency relationships and/or corresponding directions of such dependency relationships between such existing compliance policies are known (e.g., dependency relationships and/or corresponding dependency relationship direction(s) that have been verified by, for instance, an expert entity, operational data, etc.). In some embodiments, for example, where policy analyzer component 108 identifies one or more such similarities defined above between a certain new compliance policy and a certain existing compliance policy that has one or more dependency relationships with one or more other existing compliance policies, policy analyzer component 108 can determine that the new compliance policy has the same one or more dependency relationships and/or corresponding dependency relationship direction(s) with such other existing compliance policies.”

For the structured dataset, which takes from the policy analyzer component, see par 061: “In some embodiments, compliance policy management and scheduling system 102 can present to such an expert entity defined above one or more dependency relationships and/or corresponding dependency relationship direction(s) identified by policy analyzer component 108 and/or receive feedback data from such an expert entity corresponding to the dependency relationship(s) and/or corresponding direction(s) identified by policy analyzer component 108. For example, compliance policy management and scheduling system 102 can comprise an interface component including, but not limited to, an application programming interface (API), a graphical user interface (GUI), and/or another interface component that can present (e.g., via a computer monitor, a display, a screen, etc.) to such an expert entity defined above the dependency relationship(s) and/or corresponding direction(s) identified by policy analyzer component 108 and/or receive feedback data from the expert entity corresponding to the dependency relationship(s) and/or corresponding direction(s) identified by policy analyzer component 108.” As the dependency relationships are identified as pairs (see par 020), they are a structured dataset.

Bulut then teaches generating, by a processing device, a trained model to create a mapping between the policy actions and the policy action codes in par 063: “In some embodiments, such historical data can comprise training data that policy analyzer component 108 can use to learn (e.g., via active learning, explicit learning, implicit learning, etc.) to identify one or more dependency relationships and/or corresponding dependency relationship direction(s) between compliance policies.” See also par 094: “Train a model (e.g., a support vector machine (SVM) model) using list X as input and list Y as output. Calculate text similarity T.sub.i,j and code similarity C.sub.i,j scores for description D and code C. For example, calculate text similarity by calculating cosine similarity using equation (1) below.” See also par 091.

Bulut then teaches receiving a new policy document in par 0106: “In some embodiments, based on receiving a new compliance policy P.sub.7, policy analyzer component 108 (e.g., via policy correlation ML model 302) can identify (e.g., as described above with reference to FIGS. 1, 2, and 3) one or more dependency relationships (denoted as correlated to in FIG. 4) and/or corresponding dependency relationship direction(s) between new compliance policy P.sub.7 and existing compliance policies P.sub.2 and P.sub.5 as illustrated in FIG. 4” See also par 097: “In some embodiments, policy analyzer component 108 can receive a new compliance policy (denoted in FIG. 3 as New policy (P.sub.k).
In some embodiments, policy analyzer component 108 can employ policy correlation ML model 302 to identify one or more dependency relationships and/or corresponding dependency relationship direction(s) between such a new compliance policy P.sub.k and one or more existing compliance policies denoted as P.sub.i, P.sub.j in FIG. 3 (e.g., the one or more compliance policies defined above).”

Bulut then teaches generating new policy action codes from the new policy document using the trained model in par 097: “In some embodiments, policy analyzer component 108 can employ policy correlation ML model 302 to identify one or more dependency relationships and/or corresponding dependency relationship direction(s) between such a new compliance policy P.sub.k and one or more existing compliance policies denoted as P.sub.i, P.sub.j in FIG. 3 (e.g., the one or more compliance policies defined above).” See also par 0107 for another teaching of generating new policy action codes from the new policy document.

Bulut then teaches wherein the new policy document is compiled to generate a new structured dataset of policy actions in par 098: “In some embodiments, policy correlation ML model 302 can comprise a model defined above that has been trained (e.g., via trainer component 202 as described above with reference to FIG. 2) to identify one or more dependency relationships between compliance policies and corresponding dependency relationship direction(s) based on one or more features such as, for example, compliance policy description similarity (denoted as Description Similarity in FIG. 3), compliance policy code similarity (denoted as Code Similarity in FIG. 3), and/or compliance policy weakness similarity (denoted as Weakness Similarity in FIG. 3) as described above with reference to FIG. 1.”

See also par 099: “policy analyzer component 108 can employ expert entity 306 to validate and/or invalidate one or more dependency relationships between compliance policies and/or corresponding dependency relationship direction(s) that have been identified by policy analyzer component 108 and/or policy correlation ML model 302. For example, policy analyzer component 108 can employ expert entity 306 to validate and/or invalidate one or more dependency relationships and/or corresponding dependency relationship direction(s) between new compliance policy P.sub.k and the one or more existing compliance policies P.sub.i, P.sub.j that have been identified by policy analyzer component 108 and/or policy correlation ML model 302.” Note that expert entity includes a machine learning model; see par 099.

Bulut then teaches wherein the new structured dataset of policy actions includes at least policies identified based on an analysis of labeled text within the new policy document and policy variables configured to provide boundaries for new policy actions, in pars 0110-0118: “In some embodiments, system 500 can comprise an illustration of how compliance policy management and scheduling system 102 (e.g., via policy analyzer component 108, policy correlation ML model 302, expert entity 306, etc.) can actively learn to classify one or more compliance policy dependency relationships and/or corresponding dependency relationship direction(s) between compliance policies in accordance with one or more embodiments of the subject disclosure described herein. For example, system 500 can comprise an illustration of how compliance policy management and scheduling system 102 (e.g., via policy analyzer component 108, policy correlation ML model 302, expert entity 306, etc.) can actively learn to classify such one or more compliance policy dependency relationships and/or corresponding dependency relationship direction(s) between compliance policies using, for instance, algorithm (3) defined below.

[0111] Algorithm (3)
[0112] Inputs: Labeled set D.sub.l, submitted policy P.sub.i.
[0113] Train a classifier f.sub.l based on training data D.sub.l.
[0114] while True
[0115] Predict a label for a given code C based on code and description similarity.
[0116] Use feedback/operational data to validate that code C worked or not {l.sub.1, l.sub.2}.
[0117] Update training data (D.sub.l) with policy P.sub.i and its label.
[0118] Retrain a classifier f.sub.i using D.sub.l.”

These paragraphs describe, according to par 0109, the compliance policy management and scheduling in accordance with the embodiments described herein; therefore the embodiment described in pars 098-099, which describes receiving a new compliance policy document, is actively classified and labeled based on the teachings of pars 0109-0118. That labeled text is being analyzed is taught in par 0112, because the claim recites “labeled text” and the training set D.sub.l. The policy P.sub.i is the policy; see par 097 for examples: compliance policies are “Policy_linux_password, policy_linux_ssh and policy_linux_file_permissions”. The training set D.sub.l that is labeled is labeled text because the training set includes text; see par 085: “historical data defined above (e.g., dependency data corresponding to historical compliance policies, expert feedback, and/or operational data feedback)”.

Bulut then teaches updating the training data based on the new policy document, and training data updated based on the new policy document, in par 0102: “In some embodiments, policy analyzer component 108 can collect (e.g., from database 304) and/or use such operational data as training data to actively learn (e.g., as described above with reference to FIGS. 1 and 2) to identify one or more dependency relationships and/or corresponding dependency relationship direction(s) between a subsequently received new compliance policy and existing compliance policies.” See also par 0108.

This refers back to teachings in par 098: “In some embodiments, policy correlation ML model 302 can comprise a model defined above that has been trained (e.g., via trainer component 202 as described above with reference to FIG. 2) to identify one or more dependency relationships between compliance policies and corresponding dependency relationship direction(s) based on a history of verified policy dependencies that can be stored in database 304 as illustrated in FIG. 3. In some embodiments, such a history of verified policy dependencies can comprise dependency data corresponding to historical compliance policies (e.g., dependency relationship(s) and/or corresponding dependency relationship direction(s) between historical compliance policies that have been validated or invalidated). In some embodiments, policy correlation ML model 302 can comprise a model defined above that has been trained (e.g., via trainer component 202 as described above with reference to FIG. 2) to identify one or more dependency relationships between compliance policies and corresponding dependency relationship direction(s) based on one or more features such as, for example, compliance policy description similarity (denoted as Description Similarity in FIG. 3), compliance policy code similarity (denoted as Code Similarity in FIG. 3), and/or compliance policy weakness similarity (denoted as Weakness Similarity in FIG. 3) as described above with reference to FIG. 1.” See also par 087: “In some embodiments, trainer component 202 can train a model defined above to learn one or more dependency relationships and/or corresponding dependency relationship direction(s) between compliance policies based on previously obtained historical data defined above (e.g., dependency data corresponding to historical compliance policies, expert feedback, and/or operational data feedback) using, for instance, algorithm (2) defined below.”

Bulut then teaches sending the new policy action codes in par 0100: “scheduler component 110 can generate one or more policy execution plans that can be executed by one or more computing resources of cloud computing environment 308. For example, scheduler component 110 can generate a parallel policy execution plan comprising Run List 1, Run List 2, and Run List 3 as illustrated in FIG. 3, where compliance policies of each of such run lists can have dependency relationships with one another but not with compliance policies of the other run lists. For instance, as illustrated in FIG. 3, scheduler component 110 can generate a parallel policy execution plan comprising Run List 1, Run List 2, and Run List 3, where compliance policies P.sub.1, P.sub.7, P.sub.9 of Run List 1 can have dependency relationships with one another but not with compliance policies P.sub.2, P.sub.3, P.sub.10 or compliance policies P.sub.4, P.sub.5, P.sub.8 of Run List 2 and Run List 3, respectively.” See also par 0121: “For example, at a time t.sub.2, scheduler component 110 can schedule simultaneous execution of compliance policies of execution plans E.sub.1 and E.sub.2 illustrated in FIG. 6B.
In this example, execution plan E.sub.1 can comprise compliance policy P.sub.2 that can be executed by a first computing resource (e.g., a first computing resource of cloud computing environment 308) and execution plan E.sub.2 can comprise compliance policy P.sub.3 that can be executed simultaneously with compliance policy P.sub.2 by a second computing resource (e.g., a second computing resource of cloud computing environment 308).” This teaches sending.

Per claim 9, Bulut teaches audit the computing environment using the new policy action codes to generate a human-readable compliance validated document in par 0127: “In another example, by generating such an updated directed acyclic graph and/or such a policy execution plan based on the updated directed acyclic graph as described above, compliance policy management and scheduling system 102 can facilitate reduced processing cycles performed by a processing unit (e.g., processor 106) to accurately execute the compliance policies of such a policy execution plan. For instance, compliance policy management and scheduling system 102 can facilitate reduced processing cycles performed by a processing unit (e.g., processor 106) associated with compliance policy management and scheduling system 102 and/or a computing device that executes one or more compliance policies of the policy execution plan, thereby facilitating at least one of improved accuracy, efficiency, or performance of such a processing unit (e.g., processor 106), as well as reduced computation cost of such a processing unit.” See also par 067, where the DAG is generated visually and is therefore a human-readable compliance validated document.

Bulut then teaches store the human-readable compliance validated document in a non-transitory computer-readable storage medium in par 0102, where the compliance policy is stored in a database.
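The Bulut pipeline cited above (pars 059, 092-094) pairs cosine-similarity feature extraction with a model trained on feature list X and label list Y. The following is a minimal sketch, with bag-of-words vectors standing in for Bulut's text representations and a tiny perceptron standing in for the SVM of par 094; all policy texts and feature values are invented for illustration:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts
    (a stand-in for the vector representations of par 059)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def predict(w, b, x):
    """Linear decision rule: 1 = dependent policies, 0 = independent."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(X, Y, epochs=50, lr=0.1):
    """Perceptron standing in for the SVM of par 094: X holds
    {T_ij, C_ij} feature vectors, Y holds dependency labels {1, 0}."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            pred = predict(w, b, x)
            if pred != y:                       # update only on mistakes
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
                b += lr * (y - pred)
    return w, b

# Invented {T_ij, C_ij} pairs: description similarity, code similarity
t = cosine_similarity("rotate user passwords every 90 days",
                      "user passwords rotate every 90 days")
X = [[0.9, 0.8], [round(t, 2), 0.9], [0.1, 0.2], [0.2, 0.1]]
Y = [1, 1, 0, 0]
w, b = train(X, Y)
```

Nothing here reproduces Bulut's actual featurization or equation (1); it only shows the shape of the similarity-features-in, dependency-label-out training loop the quoted paragraphs describe.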
Bulut does not teach wherein policy subjects are identified based on the new policy document and policy variables configured to provide boundaries for new policy actions.

Hernandez teaches assessing policies that govern operations of enterprise components of an enterprise computing system. See abstract. Hernandez teaches wherein policy subjects are identified based on the new policy document and policy variables configured to provide boundaries for new policy actions in par 027, where a coverage component identifies the policy type and determines whether a policy is in scope or out of scope of other regulations (a policy variable). See par 027: “The coverage component 114 can receive regulation data associated with various regulations and standards (e.g., HIPAA, FFIEC, PCI-DSS, etc.). The coverage component 114, which provides a first level analysis, can identify multiple set policies. For example, the coverage component 114 can identify a customer policy, which is in scope of a regulation and a standard, a customer policy, which is above and beyond the regulation and the standard, and a policy in the regulation and the standard which is not covered by the customer policy. If policies are in scope of the regulation and the standard, then other components (e.g., the compliance component 104, the optimization component 108, etc.) can be utilized. The policy database 106 can store the regulation data, in addition to various customer policies, for future use. Customer policies can include, but are not limited to: logging requirements, password requirements, credit card number requirements, etc. The coverage component 114 can also determine whether a customer policy is in scope or goes above and beyond a targeted regulation.
For example, the customer logging mechanism can require that every log entry for any activity performed by the end user is prefixed with the end user's email address in order to determine any activity at a system and/or application level. This policy is not a part of any of the requirements for any of the existing regulations and standards (e.g., HIPAA, FFIEC, PCI-DSS, etc.). Thus, the coverage component 114 can determine that the policy is out of scope with the existing regulations and standards.” For policy type see par 036: “Additionally, some terms can have a higher relevance to specific types of polices. Therefore, an assessment of the terms of the electronic document can provide an indication of what type of policy is or should be associated with the electronic document.” Policy type teaches policy subject. In par 039 the deviation component teaches a risk score and a deviation. The risk score in par 039 and the in-scope/out-of-scope determination in par 027 teach policy variables that provide boundaries: the boundary in par 039 is that a certain deviation has to be performed to lower a risk score, and pars 027-030 teach that policies are enforced (a boundary) based on various variables, such as the type of organization (see par 028) or other types (type of server). It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the compliance teaching of Bulut with the policy variables, subject, and boundaries teaching of Hernandez because Hernandez teaches that, with multiple policies to follow and different time factors (e.g., immediately after a risk is assessed), a server device could be in noncompliance if policies are not properly analyzed and implemented. Because Hernandez teaches a way to manage enforcement of policies based on these different and multiple inputs, one would be motivated to modify Bulut with Hernandez to comply with policy more effectively. 
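Hernandez's first-level coverage analysis (classifying a customer policy as in scope of a regulation or above and beyond it) could look roughly like the following sketch. The regulation table and topic-matching rule are hypothetical simplifications of what coverage component 114 actually does.

```python
# Hypothetical regulation data: which policy topics each regulation requires.
REGULATION_REQUIREMENTS = {
    "HIPAA": {"logging", "password"},
    "PCI-DSS": {"credit_card_number", "logging"},
}

def classify_policy(policy_topics, regulations=REGULATION_REQUIREMENTS):
    """First-level coverage analysis: a customer policy is 'in scope'
    if any targeted regulation requires at least one of its topics;
    otherwise it goes above and beyond (out of scope)."""
    covered_by = [name for name, reqs in regulations.items()
                  if policy_topics & reqs]
    return ("in scope", covered_by) if covered_by else ("out of scope", [])

# A password policy is in scope of HIPAA; prefixing every log entry with
# the end user's email address is not required by any listed regulation.
print(classify_policy({"password"}))          # -> ('in scope', ['HIPAA'])
print(classify_policy({"log_prefix_email"}))  # -> ('out of scope', [])
```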
Bulut does not teach receiving…from a client device…sending to a client device; testing the trained model using a portion of training data updated, wherein an error value is computed for each test sample and a distribution of test sample errors is computed; Durvasula teaches automated cloud data and technology solution delivery; see par 002. Durvasula teaches receiving…from a client device…sending to a client device in par 092: “At block 414, for example, the I/O module 146 may transmit one or more configuration inquiries to the user via the network 110. For example, the customer may be using the client device 102 to receive the transmitted inquiries. The memory of the client device 102 may include a set of computer-executable instructions that receive the inquiries and display them in a graphical user interface, for example. The set of instructions in the client device 102 may collect the user's responses via an input device (e.g., a touchpad) and transmit the responses to each respective inquiry to the I/O module 146 via the network 110.” Durvasula then teaches testing the trained model using a portion of training data updated, wherein an error value is computed for each test sample and a distribution of test sample errors is computed in par 054: “In some aspects, training may be performed by successive evaluation (e.g., looping) of the network, using training labeled training samples. The process of training the ANN may cause weights, or parameters, of the ANN to be created. The weights may be initialized to random values. The weights may be adjusted as the network is successively trained, by using one of several gradient descent algorithms, to reduce loss and to cause the values output by the network to converge to expected, or “learned”, values. In an aspect, a regression may be used which has no activation function. 
Therein, input data may be normalized by mean centering, and a mean squared error loss function may be used, in addition to mean absolute error, to determine the appropriate loss as well as to quantify the accuracy of the outputs.” Then see par 0104: “Once a detailed objective of the future data and architecture state are approved by the customer, a validation ML validates the future data and architecture state for accuracy and completeness to generate a detailed future data and architecture state input template. If the machine learning model validation check fails, the customer may be is directed back to the detailed questionnaire to re-explain their objective in the context of the failure error/s. If the data and architecture landscape is not complete, the ML model may identify gaps and provide recommendations contingent on the customer's confirmation.” Then see par 0105: “Continuing the example, the NLP module 148 may, via the I/O module 146, transmit a message (e.g., an HTTP POST message) to the client computing device comprising a JavaScript Object Notation (JSON) payload including each identified objective and score. The client device 102 may parse and display the JSON to the user via a web page or other graphical user interface (not depicted).” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the compliance policy code and ML teaching of Bulut with the client device teaching of Durvasula because Durvasula teaches “conventional static visualization techniques that are shared across many organization are inefficient, because among other things, users are not able to apply filters and such visualizations do not update to keep pace with changes in data over time” in par 003. 
The solution Durvasula teaches is “modularizing and codifying processes for performing environmental discovery/scanning, environmental validation, and automated knowledge engine generation using machine learning (ML) and/or artificial intelligence (AI), including those existing processes on premises involving legacy technologies” in par 035. Because Durvasula teaches further efficiencies in implementing [blank] as a service, which is related to the software policy compliance teaching of Bulut, one would be motivated to modify Bulut with Durvasula. Per claims 2 and 10, which are similar in scope, Bulut, Hernandez, and Durvasula teach the limitations of claims 1 and 9, above. Bulut further teaches generating the new policy action codes comprises: processing the new policy document to generate a new structured dataset of new policy actions in par 0106: “In some embodiments, based on receiving a new compliance policy P.sub.7, policy analyzer component 108 (e.g., via policy correlation ML model 302) can identify (e.g., as described above with reference to FIGS. 1, 2, and 3) one or more dependency relationships (denoted as correlated to in FIG. 4) and/or corresponding dependency relationship direction(s) between new compliance policy P.sub.7 and existing compliance policies P.sub.2 and P.sub.5 as illustrated in FIG. 4. In some embodiments, expert entity can validate or invalidate such one or more dependency relationships and/or corresponding dependency relationship direction(s) identified by policy analyzer component 108 (e.g., via policy correlation ML model 302) between new compliance policy P.sub.1 and existing compliance policies P.sub.2 and P.sub.5 as illustrated in FIG. 
4.” Bulut then teaches and inputting the new policy actions to the trained model in par 0108: “In some embodiments, policy analyzer component 108 can collect (e.g., from database 304) and/or use such operational data as training data to actively learn (e.g., as described above with reference to FIGS. 1, 2, and 3) to identify one or more dependency relationships and/or corresponding dependency relationship direction(s) between a subsequently received new compliance policy and existing compliance policies.” Per claims 3 and 11, which are similar in scope, Bulut, Hernandez, and Durvasula teach the limitations of claims 2 and 10, above. Bulut further teaches sending new policy actions in par 0120: “For example, at a time scheduler component 110 can schedule simultaneous execution of compliance policies of execution plans E.sub.1 and E.sub.2 illustrated in FIG. 6A. In this example, execution plan E.sub.1 can comprise compliance policy P.sub.1 that can be executed by a first computing resource (e.g., a first computing resource of cloud computing environment 308) and execution plan E.sub.2 can comprise no compliance policies, as compliance policies P.sub.2 and P.sub.3 are dependent on compliance policy P.sub.1, and therefore, compliance policy P.sub.1 must be executed prior to execution of compliance policies P.sub.2 and/or P.sub.3.” Bulut does not teach to the client device. Durvasula teaches to the client device in par 092: “At block 414, for example, the I/O module 146 may transmit one or more configuration inquiries to the user via the network 110. For example, the customer may be using the client device 102 to receive the transmitted inquiries. The memory of the client device 102 may include a set of computer-executable instructions that receive the inquiries and display them in a graphical user interface, for example. 
The set of instructions in the client device 102 may collect the user's responses via an input device (e.g., a touchpad) and transmit the responses to each respective inquiry to the I/O module 146 via the network 110.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the compliance policy code and ML teaching of Bulut with the client device teaching of Durvasula because Durvasula teaches “conventional static visualization techniques that are shared across many organization are inefficient, because among other things, users are not able to apply filters and such visualizations do not update to keep pace with changes in data over time” in par 003. The solution Durvasula teaches is “modularizing and codifying processes for performing environmental discovery/scanning, environmental validation, and automated knowledge engine generation using machine learning (ML) and/or artificial intelligence (AI), including those existing processes on premises involving legacy technologies” in par 035. Because Durvasula teaches further efficiencies in implementing [blank] as a service, which is related to the software policy compliance teaching of Bulut, one would be motivated to modify Bulut with Durvasula. Per claims 4, 12, and 18, Bulut, Hernandez, and Durvasula teach the limitations of claims 1, 9, and 17 above. 
Bulut further teaches processing the policy document to generate the structured dataset of policy actions comprises processing the policy document using a natural language processor to generate the labeled text in par 065 where the policy analyzer component 108 classifies the text being input: “For instance, policy analyzer component 108 can employ an automatic classification system and/or an automatic classification process to learn such one or more dependency relationships and/or corresponding dependency relationship direction(s) based on feedback data (e.g., expert entity feedback, operational data feedback, etc.).” This is done with NLP process as taught in pars 058-059: “To facilitate identification of such one or more dependency relationships and/or corresponding dependency relationship direction(s) based on such one or more similarities defined above, policy analyzer component 108 can employ a model defined above and/or a natural language extraction process to identify such similarities. For example, policy analyzer component 108 can employ a model defined above and/or a natural language extraction process including, but not limited to, natural language processing (NLP), named entity recognition (NER), natural language annotation, and/or another natural language extraction process. The policy analyzer component 108 can employ a model and/or a natural language extraction process to identify such similarities defined above based on a cosine similarity between vector representations of text extracted from, for instance, descriptions of compliance policies, codes of compliance policies, and/or a common weakness enumeration (CWE) system corresponding to respective compliance policies. For example, policy analyzer component 108 can employ a natural language extraction process defined above and/or a model that has been trained (e.g., via trainer component 202 as described below with reference to FIG. 
2) to identify such similarities based on such a cosine similarity. In some embodiments, policy analyzer component 108 can employ a model defined above and/or a natural language extraction process to identify such similarities between a certain new compliance policy and existing compliance policies (e.g., historical compliance policies), where dependency relationships and/or corresponding directions of such dependency relationships between such existing compliance policies are known (e.g., dependency relationships and/or corresponding dependency relationship direction(s) that have been verified by, for instance, an expert entity, operational data, etc.).” The labeled text is generated using the ML model which includes the models described above in pars 0114-0118: “[0114] while True [0115] Predict a label for a given code C based on code and description similarity. [0116] Use feedback/operational data to validate that code C worked or not {l.sub.1, l.sub.2}. [0117] Update training data (D.sub.l) with policy P.sub.i and its label.” Per claims 5, 13, and 19, Bulut, Hernandez, and Durvasula teach the limitations of claims 4, 12, and 18, above. 
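The cosine-similarity comparison of vector representations of policy text described in Bulut's pars 058-059 can be sketched as follows. The bag-of-words vectorizer is a toy stand-in for whatever trained representation the reference actually uses, and the policy descriptions are adapted from the P.sub.1/P.sub.2 password example quoted below in par 018.

```python
import math
from collections import Counter

def vectorize(text):
    """Toy bag-of-words term counts; a real system would use trained embeddings."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse term-count vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

p1 = vectorize("set expiring password in operating system")
p2 = vectorize("set non-expiring password for an application user")
print(round(cosine_similarity(p1, p2), 3))  # -> 0.309
```

A threshold on this score would then flag candidate dependency relationships for expert validation.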
Bulut further teaches processing the policy document to generate the structured dataset of policy actions further comprises processing the labeled text using a classifier to identify policy subjects in par 065 where the policy analyzer component 108 classifies the text being input: “For instance, policy analyzer component 108 can employ an automatic classification system and/or an automatic classification process to learn such one or more dependency relationships and/or corresponding dependency relationship direction(s) based on feedback data (e.g., expert entity feedback, operational data feedback, etc.).” Bulut then teaches actions corresponding to the policy subjects, in par 065: “In one embodiment, policy analyzer component 108 can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to learn such one or more dependency relationships and/or corresponding dependency relationship direction(s) based on feedback data (e.g., expert entity feedback, operational data feedback, etc.).” Bulut then teaches and policy variables that provide boundaries for the actions in pars 0113-0117: “Predict a label for a given code C based on code and description similarity. Use feedback/operational data to validate that code C worked or not {l.sub.1, l.sub.2}. Update training data (D.sub.l) with policy P.sub.i and its label.” Per claims 6, 14, and 20, which are similar in scope, Bulut, Hernandez, and Durvasula teach the limitations of claims 4, 12, and 18, above. Bulut further teaches the policy document comprises a human-readable, natural- language text document that describes rules and guidelines to be followed by an enterprise in operating the enterprise, creating a product, or providing a service in par 018: “Given a set of compliance policies (e.g., policy_linux_file_permissions, policy_linux_pass_max_age, policy_linux_pass_min_age, etc.) 
for different platforms (e.g., Advanced Interactive eXecutive (AIX), Linux, Windows, etc.) some compliance policies depend on one or more other compliance policies. For example, given compliance policy P.sub.1—policy_linux_oracle_pass_max_age (set non-expiring password for an application; oracle database user) and compliance policy P.sub.2—policy_linux_pass_max_age (set expiring password in operating system (OS) level), compliance policy P.sub.2 can depend on compliance policy P.sub.1 (e.g., dependency: P.sub.1.fwdarw.P.sub.2). Execution of multiple compliance policies in synchronous order is time consuming. If dependencies between multiple compliance policies are known, execution of such policies can be performed asynchronously.” Per claims 7 and 15, which are similar in scope, Bulut, Hernandez, and Durvasula teach the limitations of claims 1 and 9, above. Bulut further teaches the policy document comprises a published standard issued by a standards organization in par 018, where Linux, Oracle, and Advanced Interactive eXecutive are taught. Per claims 8 and 16, which are similar in scope, Bulut, Hernandez, and Durvasula teach the limitations of claims 1 and 9, above. Bulut further teaches the trained model is an artificial neural network, artificial intelligence model, or a machine learning model in par 066: “[t]he policy analyzer component 108 can employ any suitable machine learning based techniques, statistical-based techniques, and/or probabilistic-based techniques to learn such one or more dependency relationships and/or corresponding dependency relationship direction(s) based on feedback data (e.g., expert entity feedback, operational data feedback, etc.). 
For example, policy analyzer component 108 can employ an expert system, fuzzy logic, support vector machine (SVM), Hidden Markov Models (HMMs), greedy search algorithms, rule-based systems, Bayesian models (e.g., Bayesian networks), neural networks, other non-linear training techniques, data fusion, utility-based analytical systems, systems employing Bayesian models, and/or another model.” Therefore, claims 1-20 are rejected under 35 USC 103. Response to remarks: 35 USC 101 Applicant argues: “In the existing technology, to set policy goals in the conventional way, appropriate policies are usually written to be in accordance with a specific the compliance standard that the company wishes to adhere to or that applies to a particular product. See Applicant's specification at paragraph [0016]. That is, these written policies are reviewed, often by outside experts, to ensure that the policies comply with the given standard. To validate that the technical components are in alignment, the steps are performed, and the results are documented and presented to compliance auditors. These processes are manual labor-intensive processes that are subject to human interpretation and judgement and are therefore prone to error. Embodiments of the present invention resolve the technological problem of manual labor- intensive processes of policy implementation and auditing. 
For example, depending on the degree of the change, the existing trained model may be used to update a corresponding product, and a small change such as a change to the variable values of a particular policy subject may be handled by inputting the updated standard to the existing trained model, while new sections or new policy subjects may require the generation of a new trained model, such as updating the training data based on the new policy document, and testing the trained model using a portion of training data updated based on the new policy document, where an error value is computed for each test sample and a distribution of test sample errors is computed, as recited in amended independent claim 1.” Examiner responds: Performing processes faster on a computer is similar to FairWarning, which found that this is an applied use of a computer; see MPEP 2106.05(f)(2). Therefore, this is not persuasive. Applicant then argues: “Accordingly, Applicant submits that the advantage of being able to streamline the policy implementation and auditing processes provides an improvement to the functioning of a computer, other technology, and/or technical field. As such, Applicant submits that the recitations of amended independent claim 1 are integrated into a practical application and are, therefore, patent-eligible.” Examiner responds: One ordinarily skilled would not recognize the speeding up of manual processes by using a computer as an improvement to the computer or other technology itself, as the computer is only used as a tool to speed up the processes. Therefore, the 101 rejection is maintained. 35 USC 103 Examiner is not persuaded by the argument that Bulut par 098 does not teach this limitation. Par 098 teaches training based on policy documents, and pars 0102 and 0108 then teach using a new policy document to train, with reference to Figs. 1 and 2 or Figs. 1, 2, and 3, which par 098 describes. Therefore, Bulut does teach using a new policy document to update the training data. 
Per the error limitations, examiner finds that Durvasula teaches this as shown above. Therefore, the rejection is maintained. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD W. CRANDALL whose telephone number is (313)446-6562. The examiner can normally be reached M - F, 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe can be reached at (571) 270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RICHARD W. CRANDALL/ Primary Examiner, Art Unit 3619

Prosecution Timeline

Dec 13, 2022
Application Filed
Mar 06, 2025
Non-Final Rejection — §101, §103
May 13, 2025
Response Filed
Jun 11, 2025
Final Rejection — §101, §103
Aug 13, 2025
Response after Non-Final Action
Sep 12, 2025
Response after Non-Final Action
Oct 20, 2025
Request for Continued Examination
Oct 27, 2025
Response after Non-Final Action
Oct 29, 2025
Non-Final Rejection — §101, §103
Dec 29, 2025
Interview Requested
Jan 12, 2026
Examiner Interview Summary
Jan 12, 2026
Applicant Interview (Telephonic)
Jan 23, 2026
Response Filed
Mar 06, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602666
INFORMATION HANDLING SYSTEM MICRO MANUFACTURING CENTER FOR REUSE AND RECYCLING FACTORING INVENTORY
2y 5m to grant Granted Apr 14, 2026
Patent 12591589
DECENTRALIZED WILL MANAGEMENT APPARATUS, SYSTEMS AND RELATED METHODS OF USE
2y 5m to grant Granted Mar 31, 2026
Patent 12541382
USER PERSONA INJECTION FOR TASK-ORIENTED VIRTUAL ASSISTANTS
2y 5m to grant Granted Feb 03, 2026
Patent 12537090
METHOD AND SYSTEM FOR RULE-BASED ANONYMIZED DISPLAY AND DATA EXPORT
2y 5m to grant Granted Jan 27, 2026
Patent 12530694
USING ENTITLEMENTS DEPLOYED ON BLOCKCHAIN TO MANAGE CUSTOMER EXPERIENCES
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
30%
Grant Probability
64%
With Interview (+33.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 301 resolved cases by this examiner. Grant probability derived from career allow rate.
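A note on the arithmetic: the 64% "with interview" figure is consistent with treating the +33.8% interview lift as an additive percentage-point adjustment to the 30% base grant probability. A minimal sketch under that assumption (the product may compute the lift differently, e.g., from conditional allow rates):

```python
def grant_probability(base_rate_pct, interview_lift_pct):
    """Additive percentage-point model; an assumption, not a documented
    formula for how this panel derives its figures."""
    return round(base_rate_pct + interview_lift_pct)

print(grant_probability(30.0, 33.8))  # -> 64
```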
