Prosecution Insights
Last updated: April 19, 2026
Application No. 18/789,397

SECURITY POSTURE GENERATION USING AN ARTIFICIAL INTELLIGENCE (AI) MODEL

Non-Final OA — §101, §103
Filed: Jul 30, 2024
Examiner: ZOUBAIR, NOURA
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72%, above average (256 granted / 353 resolved; +14.5% vs TC avg)
Interview Lift: +61.8%, strong (resolved cases with interview)
Typical Timeline: 2y 11m avg prosecution; 17 currently pending
Career History: 370 total applications across all art units
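The headline numbers above can be checked against the raw counts shown on the page. A minimal sketch, assuming the "+14.5% vs TC avg" figure is a percentage-point difference (the page does not state its formula):

```python
# Reproduce the examiner stats from the raw counts shown on the page.
# Assumption: "+14.5% vs TC avg" is a percentage-point delta, so the
# implied Tech Center average is the allow rate minus 14.5 points.
granted, resolved = 256, 353

allow_rate = 100 * granted / resolved      # career allow rate, in percent
implied_tc_avg = allow_rate - 14.5         # assumed percentage-point delta

print(f"Career allow rate: {allow_rate:.1f}%")    # 72.5%, displayed as 72%
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```

The exact rate works out to about 72.5%, which the dashboard rounds down to 72%.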

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 353 resolved cases
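Each "vs TC avg" delta implies a Tech Center baseline for that statute. A quick check, assuming the deltas are percentage points (rate minus TC average):

```python
# Back out the implied Tech Center average behind each "vs TC avg" delta.
# Assumption: delta = examiner rate - TC average, in percentage points.
stats = {
    "101": (7.5, -32.5),
    "103": (50.2, +10.2),
    "102": (9.3, -30.7),
    "112": (16.0, -24.0),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied TC average for this statute
    print(f"Section {statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
```

Notably, every statute implies the same ~40.0% Tech Center estimate, consistent with a single baseline being applied across all four categories.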

Office Action

§101, §103
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite receiving data, analyzing the data, providing an output, extracting a set of generated features from the output, and implementing the extracted features or adding further features. These steps may be performed in the human mind and are therefore mental steps. This judicial exception is not integrated into a practical application because it is not evident that the implementation of the features, or the addition of further features, provides a benefit or improvement to an existing technology. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception, because a general recitation of a trained AI model, without recitations of how the model operates or how it is trained such that a human cannot perform its functions, is not sufficient to add elements that are more than the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-11, 13-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ayyadurai et al. (US Patent No. 12,198,030).

Re Claim 1. Ayyadurai discloses a method comprising: providing a natural language description of a set of desired features of a security posture of an organization as a first input to a trained artificial intelligence (AI) model (i.e., train an ML model to construct validation actions that evaluate the AI application's compliance with the operation boundaries … vector constraints 102 are obtained by manual input by users. For example, users input relevant regulations and policies (e.g., vector constraints 102) directly into the validation engine 104 through a user interface communicatively connected to the validation engine 104 … For example, even if the vector constraints 102 exist in different formats and structures, Natural Language Processing (NLP) techniques can be used to parse each text and identify key regulations, policies, and practices embedded within the differently formatted vector constraints 102. The validation engine 104 can identify specific terms, phrases, or clauses that likely denote regulatory requirements, as well as understand the context and intent behind the provisions … the vector constraints 102 are categorized and tagged based on the extent of the vector constraint's relevance to different aspects of AI compliance (e.g., fairness, transparency, privacy, security)) [Ayyadurai, (28, 39-41)]; providing telemetry data pertaining to a computing environment of the organization as a second input to the trained AI model; obtaining one or more outputs from the trained AI model (i.e., The AI application 308 processes the command set and generates an outcome 310 and explanation 312 on how the outcome 310 was determined based on the AI application's 308 internal algorithms and decision-making processes. The outcome 310 and explanation 312 are evaluated by the assessment module 314, which compares the outcome 310 and explanation 312 against the expected outcomes and explanations specified in the test case 304 derived from the relevant guidelines 302) [Ayyadurai, (60); Note: the output of the AI application is telemetry data and is input into the ML/AI model]; and extracting, from the one or more outputs, a set of generated features for the security posture of the organization (i.e., Once the ML model is trained, the ML model can receive responses generated by the AI model in response to a given command set. The responses of the AI model can contain alphanumeric characters, and the ML model can identify the vector representations associated with each character within the response … In some implementations, the system can use a probabilistic approach, where the ML model assigns likelihood scores to different alignment indicators based on the observed frequency and significance … The meta-model 810 evaluates the AI application's compliance with the vector constraints through the use of validation actions 812 (e.g., using semantic search, pattern recognition, and machine learning techniques). Further evaluation methods in determining compliance of AI applications are discussed with reference to FIGS. 5-7) [Ayyadurai, (95, 99, 112, 127)]; (i.e., allows organizations to identify areas of concern within the AI model and take appropriate actions to mitigate risks, ensure compliance) [Ayyadurai, (67)].

Ayyadurai does not disclose all the above in one embodiment; however, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the embodiments because the elements and acts of the various examples described above can be combined to provide further implementations of the technology [Ayyadurai, col. 41]. This motivation applies to the dependent claims.

Re Claims 8 and 15. These claims recite features similar to those in claim 1; therefore, they are rejected in a similar manner.

Re Claims 2, 9 and 16. Ayyadurai discloses the features of claims 1, 8 and 15, further comprising: determining whether the set of generated features satisfies a security threshold criterion; and responsive to determining the set of generated features satisfies the security threshold criterion, implementing the set of generated features in the computing environment of the organization (i.e., training is carried out iteratively until a convergence condition is met (e.g., a predefined maximum number of iterations has been performed, or the value outputted by the AI model 1230 is sufficiently converged with the desired target value), after which the AI model 1230 is considered to be sufficiently trained. The values of the learned parameters are then fixed and the AI model 1230 is then deployed to generate output in real-world applications) [Ayyadurai, (156)].

Re Claims 3, 10 and 17. Ayyadurai discloses the features of claims 1, 8 and 15, further comprising: determining whether the set of generated features satisfies a security threshold criterion based on a security specification (i.e., the vector constraints 102 can be encoded into a structured representation (e.g., JSON, XML), with specific fields for criteria, requirements, and/or thresholds. In some implementations, the vector constraints 102 are categorized and tagged based on the extent of the vector constraint's 102 relevance to different aspects of AI compliance (e.g., fairness, transparency, privacy, security)) [Ayyadurai, (41)]; and responsive to determining that the set of generated features does not satisfy the security specification, adding one or more additional features to the set of generated features (i.e., The set of generated validation actions 908 is provided as input to an AI application 910 in the form of a prompt. The AI application 910 processes the validation actions 908 and produces an outcome along with an explanation 912 detailing how the outcome was determined. Subsequently, based on the outcome and explanation 912 provided by the AI application 910, the system can generate recommendations 914 for corrective actions. The recommendations are derived from the analysis of the validation action outcomes and aim to address any identified issues or deficiencies. For example, if certain validation actions fail to meet the desired criteria due to specific attribute values or patterns, the recommendations can suggest adjustments to those attributes or modifications to the underlying processes … if certain attributes exhibit unexpected associations or distributions, the system can retrain the tested AI model with revised weighting schemes to better align with the desired vector constraints) [Ayyadurai, (130-131)].

Re Claims 4, 11 and 18. Ayyadurai discloses the features of claims 1, 8 and 15, further comprising: providing the set of generated features as an input to a second trained AI model; obtaining one or more outputs from the second trained AI model; and extracting, from the one or more outputs, an indication of a validity of the set of generated features (i.e., In some implementations, where hyperparameters are used, a new set of hyperparameters is determined based on the measured performance of one or more of the trained ML models, and the first act of training (i.e., with the training set) begins again on a different ML model described by the new set of determined hyperparameters. The steps are repeated to produce a more performant trained ML model. Once such a trained ML model is obtained (e.g., after the hyperparameters have been adjusted to achieve a desired level of performance), a third act of collecting the output generated by the trained ML model applied to the third subset (the testing set) begins in some implementations. The output generated from the testing set, in some implementations, is compared with the corresponding desired target values to give a final assessment of the trained ML model's accuracy) [Ayyadurai, (155)].

Re Claims 6, 13 and 20. Ayyadurai discloses the features of claims 1, 8 and 15, further comprising: causing a visual representation of the set of generated features to be visually rendered via a graphical user interface (GUI) associated with a prompt to confirm whether the set of generated features satisfies a security threshold criterion (i.e., HITL validation 1106 allows users to provide feedback and annotations on the validation engine's 1104 conclusions and recommendations, assessing the validation engine 1104 for accuracy, fairness, and/or ethical compliance. The user feedback helps further ensure the AI application's 1102 compliance with regulatory requirements. In some implementations, the system includes user interfaces and feedback mechanisms that allow users to review the validation engine's 1104 conclusions and recommendations. For example, the system can include dashboard interfaces for visualizing the validation engine's 1104 outputs, annotation tools for highlighting potential issues, and communication channels between users for adjusting the operational parameters of the validation engine) [Ayyadurai, (144)].

Re Claims 7 and 14. Ayyadurai discloses the features of claims 1 and 8, further comprising: determining whether the set of generated features satisfies a security threshold criterion; and responsive to determining the set of generated features does not satisfy the security threshold criterion, extracting, from the one or more outputs, a second set of generated features for the security posture of the organization (i.e., The set of generated validation actions 908 is provided as input to an AI application 910 in the form of a prompt. The AI application 910 processes the validation actions 908 and produces an outcome along with an explanation 912 detailing how the outcome was determined. Subsequently, based on the outcome and explanation 912 provided by the AI application 910, the system can generate recommendations 914 for corrective actions. The recommendations are derived from the analysis of the validation action outcomes and aim to address any identified issues or deficiencies. For example, if certain validation actions fail to meet the desired criteria due to specific attribute values or patterns, the recommendations can suggest adjustments to those attributes or modifications to the underlying processes … the ML model discussed in FIG. 6, the corrective actions can include implementing post-processing techniques in the tested AI model to filter out responses that violate the vector constraints (e.g., filtering out responses that include the identified vector representations of the alphanumeric characters) … use the overall metric to generate a set of actions to remove a portion of the set of responses generated by the AI model) [Ayyadurai, (130-131), (claim 1)].

Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ayyadurai et al. (US Patent No. 12,198,030) and further in view of Jain et al. (US Patent No. 12,106,205).

Re Claims 5, 12 and 19. Ayyadurai discloses the features of claims 4, 11 and 18. Ayyadurai does not explicitly disclose, whereas Ayyadurai in view of Jain does: further comprising: providing the natural language description of the set of features of the security posture of the organization as a second input to the second trained AI model (i.e., data generation platform 102 can generate a resulting output (e.g., generated code or natural language data) in response to a query submitted by the user within the prompt … can validate the output from the LLM. For example, the data generation platform 102 provides the output to an output validation model to generate a validation indicator associated with the output … the data generation platform 102 can validate the output of the LLM to prevent security breaches or unintended behavior. For example, the data generation platform 102 can review output text using a toxicity detection model and determine an indication of whether the output is valid or invalid. In some implementations, the data generation platform 102 can determine a sentiment associated with the output and modify the output, e.g., by resubmitting the output to the LLM) [Jain, (103, 104)]. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify Ayyadurai with Jain in order to ensure the accuracy, utility, and reliability of generated data [Jain, 104].

Prior art made of record but not relied upon includes:

Radmilac et al. (US Pub. No. 2024/0411751) discloses a meta-model topology comprising a plurality of models which are configured to take in an initial seed prompt or intermediary prompt and generate new outputs based on the initial seed prompt or intermediary prompt, and a plurality of models which are configured to process the new outputs. For example, the meta-model topology is configured to include models which call to one or more different nodes or layers of a large language model [0045].

Nissan (US Pub. No. 2025/037366) describes a method for matching security recommendation tasks with a regulatory compliance standard, said method including: receiving as input at a first Machine Learning (ML) model a regulatory compliance standard; receiving as input at the first ML model security recommendation tasks; determining by the first ML model a distance matrix defining a threshold of alignment that specifies a distance between the security recommendation tasks and the regulatory compliance standard; based on the distance matrix, identifying a predetermined number N of the security recommendation tasks that are within the threshold of alignment; generating a prompt, the prompt including the predetermined number N of the security recommendation tasks and the regulatory compliance standard; inputting the prompt to a second ML model; and based on the prompt, identifying by the second ML model a subset of the predetermined number N of the security recommendation tasks that match the regulatory compliance standard [0004].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOURA ZOUBAIR, whose telephone number is (571) 270-7285. The examiner can normally be reached Monday - Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ali Shayanfar, can be reached at 571-270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NOURA ZOUBAIR/
Primary Examiner, Art Unit 2434

Prosecution Timeline

Jul 30, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596790: Secure Environment Public Register (SEPR) • Granted Apr 07, 2026 • 2y 5m to grant
Patent 12591664: System and method for remote users activities administration • Granted Mar 31, 2026 • 2y 5m to grant
Patent 12574420: DYNAMIC POLICY AND NETWORK SECURITY ZONE GENERATION • Granted Mar 10, 2026 • 2y 5m to grant
Patent 12563098: System and method for performing a secured operation • Granted Feb 24, 2026 • 2y 5m to grant
Patent 12549608: CENTRALIZED SECURITY POLICY ADMINISTRATION USING NVMe-oF ZONING • Granted Feb 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 99% (+61.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 353 resolved cases by this examiner. Grant probability derived from career allow rate.
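The projection figures are consistent with a simple derivation from the career data. A sketch under stated assumptions (the page does not publish its formulas; treating the interview lift as a relative multiplier capped at 99% is a plausible reading that happens to reproduce the displayed values):

```python
# One plausible derivation of the projection figures from the career data.
# Assumptions: grant probability = career allow rate (truncated to a whole
# percent), and the with-interview figure applies the +61.8% relative
# interview lift, capped at 99%.
base = 256 / 353                          # career allow rate, ~0.725
lift = 0.618                              # relative interview lift
with_interview = min(base * (1 + lift), 0.99)

print(f"Grant probability: {int(base * 100)}%")           # 72%
print(f"With interview: {int(with_interview * 100)}%")    # 99%
```

Under these assumptions the uncapped with-interview value exceeds 100%, which is why a cap (here 99%) is needed to match the displayed figure.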
