Prosecution Insights
Last updated: April 19, 2026
Application No. 18/468,079

GOVERNING USAGE OF AN ARTIFICIAL INTELLIGENCE TECHNOLOGY

Status: Final Rejection (§103)
Filed: Sep 15, 2023
Examiner: WORKU, SARON MATTHEWOS
Art Unit: 2408
Tech Center: 2400 — Computer Networks
Assignee: International Business Machines Corporation
OA Round: 4 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (12 granted / 18 resolved; +8.7% vs TC avg, above average)
Interview Lift: +53.6% (allowance among resolved cases with an interview vs. without)
Typical Timeline: 2y 7m average prosecution (30 applications currently pending)
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 46.6% (+6.6% vs TC avg)
§102: 37.0% (-3.0% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 18 resolved cases.
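As a sanity check on these figures, here is a minimal Python sketch of how the headline numbers appear to be derived from the raw counts shown above. Reading the "interview lift" as the difference in allowance rate between resolved cases with and without an interview is our assumption, not the vendor's documented methodology.

```python
# Sketch: re-deriving the dashboard's headline examiner statistics.
# The interpretations below are assumptions; the exact methodology
# is not documented on this page.

granted, resolved = 12, 18                 # career counts shown above

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")        # 67%

# The "+8.7% vs TC avg" delta implies a Tech Center baseline of:
tc_avg_allow_rate = career_allow_rate - 0.087
print(f"Implied TC average: {tc_avg_allow_rate:.1%}")       # ~58.0%

# Assumed reading of the "+53.6% interview lift": allowance rate among
# resolved cases with an interview minus the rate among those without.
with_interview = 0.99                      # "99% With Interview" above
without_interview = with_interview - 0.536
print(f"Implied allow rate without interview: {without_interview:.1%}")  # ~45.4%
```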

Office Action

§103
Detailed Action

This office action is in response to applicant’s submission filed on January 14, 2026. Claims 1-2, 4, 6-8, and 10-23 are pending and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This communication is in response to the amendment filed on January 14, 2026. The Examiner has acknowledged the amended claims 1, 4, 18, and 22. Claims 3, 5, and 9 were previously canceled. Claims 1-2, 4, 6-8, and 10-23 are pending and are rejected.

Response to Arguments

Applicant’s Arguments (Remarks) filed January 14, 2026 have been fully considered, but are moot. Note that this action is made FINAL. See MPEP § 706.07(a). Applicant’s arguments with respect to claims 1, 4, 18, and 22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. See also the § 103 rejection below. The remainder of the arguments set forth by the applicant are not persuasive due to the new grounds of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 7-8, 10, 15, 17-18, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0147597 A1 to Bhide et al. (hereinafter, “Bhide”) in view of US 2022/0417613 A1 to Vikram et al. (hereinafter, “Vikram”).

Regarding claim 1, Bhide discloses:

A computer-implemented system, comprising: a memory that stores computer-executable components; and a processor, operably coupled to the memory, and that executes at least one of the computer-executable components that (“memory 225, storage 230, an interconnect (e.g., BUS) 220, one or more processors 205” [0032]):

maintains a data structure comprising trained artificial intelligence models (“AI model” [0023]; “In addition to the model metrics, system 100 can utilize other model attributes in the AI governance policy engine to analyze a model. One such example is the training data used to build the model. A data scientist could make use of non-governed data to build and train the model, but the model could be erroneously labelled as having been built using governed data” [0028]), artificial intelligence use cases, and artificial intelligence usage policies (“In addition, the control unit 108 is configured, in some embodiments, to automatically evaluate a model against the set of rules in response to detecting that the model has been stored in the repository 106. After building a model, a data scientist typically uploads the model to a repository, such as repository 106. The control unit 108 can be configured, in some embodiments, to periodically check for models uploaded to the repository since a last check and to analyze any newly discovered models against the rules and policies 112. In other embodiments, uploading a model to the repository 106 can automatically trigger a command sent to the control unit 108 to analyze the newly uploaded model against the rules and policies 112. If the AI governance policy engine detects any rule or policy violations, a notification is sent to one or more of the client devices 102 to notify the user of the violation. In this way, the user can correct or update the model to comply with the rules. This reduces the likelihood that the model will be rejected when a human validator reviews the model, thus saving time in the model development cycle” [0025]),

wherein the artificial intelligence usage policies govern usage of the trained artificial intelligence models when deployed for use by consumers (“In addition, the control unit 108 is configured, in some embodiments, to automatically evaluate a model against the set of rules in response to detecting that the model has been stored in the repository 106. After building a model, a data scientist typically uploads the model to a repository, such as repository 106. The control unit 108 can be configured, in some embodiments, to periodically check for models uploaded to the repository since a last check and to analyze any newly discovered models against the rules and policies 112. In other embodiments, uploading a model to the repository 106 can automatically trigger a command sent to the control unit 108 to analyze the newly uploaded model against the rules and policies 112. If the AI governance policy engine detects any rule or policy violations, a notification is sent to one or more of the client devices 102 to notify the user of the violation. In this way, the user can correct or update the model to comply with the rules. This reduces the likelihood that the model will be rejected when a human validator reviews the model, thus saving time in the model development cycle” [0025] [Examiner notes that the control unit 108 analyzes uploaded models against rules/policies, which shows the governance step: models cannot simply be deployed freely; they must comply with usage policies first. By detecting policy violations and notifying the user to correct them, the system ensures problematic models do not move forward, reducing the chance of a harmful model being deployed (e.g., inaccurate, biased, or unsafe behavior); the function of rejecting noncompliant models implicitly serves that purpose. Also, by catching violations before deployment, it prevents harmful models from reaching consumer environments in the first place. The repository plus the validation step acts as a gatekeeper ensuring that only compliant models can reach those environments]);

analyzes a proposed artificial intelligence use case received from a consumer for a computing environment accessible by the consumer (“In addition, the control unit 108 is configured, in some embodiments, to automatically evaluate a model against the set of rules in response to detecting that the model has been stored in the repository 106. After building a model, a data scientist typically uploads the model to a repository, such as repository 106. The control unit 108 can be configured, in some embodiments, to periodically check for models uploaded to the repository since a last check and to analyze any newly discovered models against the rules and policies 112. In other embodiments, uploading a model to the repository 106 can automatically trigger a command sent to the control unit 108 to analyze the newly uploaded model against the rules and policies 112. If the AI governance policy engine detects any rule or policy violations, a notification is sent to one or more of the client devices 102 to notify the user of the violation. In this way, the user can correct or update the model to comply with the rules. This reduces the likelihood that the model will be rejected when a human validator reviews the model, thus saving time in the model development cycle” [0025] [Examiner notes that this text shows how the control unit evaluates a model against a set of rules and policies when it is stored in the repository. This automated analysis ensures compliance and identifies violations to produce an analyzed result, aligning with the claim language. The text also shows that the data scientist uploads the model, which is effectively the consumer submitting a use case. Here, the “use case” is essentially the AI model that the data scientist has built, because the system is analyzing the uploaded model to ensure it complies with rules and policies (use case = the consumer’s intended AI functionality or task represented by the model)]; “At 302, a model stored in a repository, such as repository 106, is identified as a model to be validated. For example, in some embodiments, a model to be validated is identified based on user instructions received via an application programming interface (API). The user instructions can include a command or request to initiate validation and specify the model to be validated. In other embodiments, identifying the model to be validated can include detecting that a model has been pushed to the repository. Thus, in such embodiments, the validation is initiated automatically in response to detecting that the model has been pushed to the repository. In other embodiments, identifying the model is part of a periodic validation check. For example, the control unit 108 can periodically perform recurring validation checks on models stored in the repository 106 according to a schedule. In this way, changes to rules and/or policies after an initial validation check can be captured and used to re-evaluate the models stored in the repository 106. In some embodiments, the control unit 108 is configured to detect changes to an encoded rule and/or policy and, in response to detecting such a change, the control unit 108 is configured to identify models stored in the repository 106 which have not been evaluated since the detection of the change. Those models which have not been evaluated after the detected change, are then identified as models to be validated” [0037] [Examiner notes that this second text is brought in because it explicitly supports that the consumer actively sends instructions specifying which model and potentially how it could be used. The system then selects a model for validation based on the consumer’s submission or instructions. It also shows that different models can be selected depending on when they are submitted or re-evaluated]);

based on the analysis of the proposed artificial intelligence use case, selects from the data structure: a trained artificial intelligence model of the trained artificial intelligence models to employ for the use case by the consumer when deployed in the computing environment (“The control unit 108 is configured to implement an AI governance policy engine. For example, the control unit 108 can be configured to implement machine learning techniques to evaluate a model against the rules and policies 112 which are encoded into the AI governance policy engine. Example machine learning techniques can comprise algorithms or models that are generated by performing supervised, unsupervised, or semi-supervised training on a dataset. Machine learning algorithms can include, but are not limited to, decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity/metric training, sparse dictionary learning, genetic algorithms, rule-based learning, and/or other machine learning techniques. For example, the machine learning algorithms can utilize one or more of the following example techniques: K-nearest neighbor (KNN), learning vector quantization (LVQ)” [0021-0022] [Examiner notes that this text describes a variety of machine learning algorithms and models (e.g., neural networks, decision trees, etc.) that form a set of available AI technologies. This matches the idea that there are multiple options to choose from. The governance policy engine is designed to implement ML techniques and evaluate models based on rules and policies. This shows that the selection of AI techniques or models could depend on the specific needs or context of the proposed use, such as supervised learning for labeled data or clustering for unsupervised data]; “In addition to the model metrics, system 100 can utilize other model attributes in the AI governance policy engine to analyze a model. One such example is the training data used to build the model. A data scientist could make use of non-governed data to build and train the model, but the model could be erroneously labelled as having been built using governed data” [0028] [Examiner notes that this text talks about selecting or creating a set of rules, which corresponds to having multiple possible usage policies that can be chosen/customized. The AI governance engine uses those rules to analyze a model (the application of a usage policy to a model). Also, the text explicitly says the rules can vary depending on the purpose for which the models are being built, which illustrates a case-specific selection of policies and serves as a check to ensure safe models before they are deployed in the computing environment]), and an artificial intelligence usage policy of the artificial intelligence usage policies to employ with the trained artificial intelligence model for the use case by the consumer when deployed in the computing environment (“The set of rules to be applied can be selected or created by a user, such as Chief Risk Officer, and can vary in different embodiments depending on the purpose for which the models are being built” [0024]);

based on the proposed artificial intelligence use case, the trained artificial intelligence model, and the artificial intelligence usage policy: identifies a set of risks associated with the usage (“The AI governance policy engine enables a user, such as the Chief Risk Officer of an organization, to define a set of AI Governance rules and policies that are to be enforced in the organization. Typically, these are the same rules which are validated by human model validators. Some sample rules can include, but are not limited to, a rule that an AI model should only be built using governed data. As used herein, governed data is data that has been through cleansing process and complies with a selected quality standard. In other words, governed data is data that has undergone specified quality checks. Governed data can be stored in governed catalogs such that data used from the governed catalogs is ensured to have gone through the specified quality checks. Non-governed data is typically easily accessible. For example, a data scientist can make use of data which is easily available on their laptop or a common machine. However, such data is not necessarily governed data and, thus, poses quality concerns. Hence, in some embodiments, the system described herein ensures that models are built only using governed data which is available in a governed catalog and has undergone proper data quality checks” [0030] [Examiner notes that this text describes a systematic evaluation process, similar to analyzing the proposed use of AI technology, as it focuses on ensuring that the AI use at hand complies with predefined rules and policies. The detection of violations implies that the system evaluates risks associated with the AI’s use, identifying when certain actions or uses pose potential compliance or ethical concerns. The rules and policies defined by the CRO are intended to assess and mitigate risks associated with the AI’s use. This aligns with the level of risk being determined based on the proposed use of the AI technology. The CRO’s role is critical in setting the level of acceptable risk and ensuring that AI technologies are used within these bounds, whether the risk is low or high. The governance engine ensures that the models meet specific standards, which directly relates to risk determination by ensuring the proposed use adheres to safety and compliance requirements]), and generates a set of usage restrictions for the usage of the proposed artificial intelligence use case with the trained artificial intelligence model by the consumer when deployed in the computing environment to mitigate the set of risks; and controls, according to the set of usage restrictions, the usage of the trained artificial intelligence model for the proposed artificial intelligence use case by the consumer when deployed in the computing environment (“In addition, the control unit 108 is configured, in some embodiments, to automatically evaluate a model against the set of rules in response to detecting that the model has been stored in the repository 106. After building a model, a data scientist typically uploads the model to a repository, such as repository 106. The control unit 108 can be configured, in some embodiments, to periodically check for models uploaded to the repository since a last check and to analyze any newly discovered models against the rules and policies 112. In other embodiments, uploading a model to the repository 106 can automatically trigger a command sent to the control unit 108 to analyze the newly uploaded model against the rules and policies 112. If the AI governance policy engine detects any rule or policy violations, a notification is sent to one or more of the client devices 102 to notify the user of the violation. In this way, the user can correct or update the model to comply with the rules. This reduces the likelihood that the model will be rejected when a human validator reviews the model, thus saving time in the model development cycle” [0025] [Examiner notes that this text shows a governing component, as it takes the analyzed use and enforces governance (compliance). It governs the use of the model by notifying the user of violations and requiring corrections (aligning with the governing component). Therefore this excerpt aligns with both the analyzing and governing components, because it first analyzes the proposed use (the uploaded model) and then enforces the generated rules, prompting the user to correct their use to comply with the policy]).

Bhide does not explicitly disclose: wherein the artificial intelligence usage policies govern usage of the trained artificial intelligence models when deployed for use by consumers and are directed to preventing potential harm to people resulting from executing the trained artificial intelligence models by the consumers when deployed in computing environments accessible by the consumers.

However, Vikram discloses: wherein the artificial intelligence usage policies govern usage of the trained artificial intelligence models when deployed for use by consumers and are directed to preventing potential harm to people resulting from executing the trained artificial intelligence models by the consumers when deployed in computing environments accessible by the consumers (“In embodiments, as contemplated herein a processor (e.g., via governance system) may generate the one or more rules using a variety of methods. In some embodiments, a processor may utilize an AI rule engine to generate one or more rules. While in some embodiments, the AI rule engine is a subcomponent of the AI engine (e.g., the AI engine is trained to perform AI rule assignment) enforcing the one or more rules of the governance system, in other embodiments, the AI rule engine is a separately trained AI system. In embodiments, the AI rule engine may analyze historical data of the participant network, such as the various types of metadata and information compiled within the blockchain, to generate the one or more rules. In these embodiments, a processor may task the AI rule engine to identify one or more rules that may accomplish one or more system goals. For example, a processor could task the AI rule engine to develop rules that may prevent and/or limit the exposure of nefarious deep fakes throughout the participant network. In embodiments, the AI rule engine may produce one or more rules associated with particular participants, subsets of participants, all of the participants, or any combination thereof. While in some embodiments, a participant who generated or created the media data (e.g., owner of the media data) may be able to define one or more rules that should be applied to their particular media data, in other embodiments, the AI rule engine may automatically generate a selection of rules that a participant/owner may select and customize the one or more rules that apply to the media data. For example, the AI rule engine could automatically generate Rule A, Rule B, and Rule C for a particular media data (e.g., image) and the participant/owner could select Rule A and Rule C to apply to their particular media data” [0034] [Examiner notes that by “developing rules that may prevent and/or limit the exposure of nefarious deep fakes…” the system is shown to be purposefully designed/directed to prevent harm, since nefarious deep fakes are a means of disinformation and non-consensual use of the likeness of individuals (categories of harm to people caused by AI, as later recited in claim 22)]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhide with the structure of Vikram for the purpose of providing a protected space where safeguarding the AI system lanes from security attacks is possible.

Claim 4 recites substantially the same limitations as claim 1, in the form of a computer-implemented method for implementing the corresponding system; therefore, it is rejected under the same rationale.

Claim 18 recites substantially the same limitations as claim 1, in the form of a computer program product comprising a computer readable storage medium having stored thereon instructions for implementing the corresponding system; therefore, it is rejected under the same rationale.

Regarding claim 2, the combination of Bhide-Vikram discloses the system of claim 1. Bhide further discloses: wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on: a hosting characteristic of the trained artificial intelligence model (“Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes” [0062-0067] [Examiner notes that the hosting characteristic refers to the type or configuration of the environment where the AI technology is hosted. The deployment models provide a detailed framework for various hosting characteristics]), and a security characteristic of the hosting characteristic (“In one example, management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA” [0072] [Examiner notes that this passage mentions identity verification for cloud consumers and tasks, as well as protection for data and other resources. These directly relate to security characteristics because they define technical safeguards that protect the hosting environment. The security functions described are tied to the management layer of the cloud computing environment, which is inherently part of the hosting characteristic. By integrating these safeguards with resource provisioning and management, the hosting environment’s security is addressed. SLA planning and fulfillment ensure that required security measures are part of these agreements, adding operational governance to security]).

Regarding claim 7, the combination of Bhide-Vikram discloses the system of claim 4. Bhide further discloses: wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on a controller entity of the trained artificial intelligence model (“In addition, the control unit 108 is configured, in some embodiments, to automatically evaluate a model against the set of rules in response to detecting that the model has been stored in the repository 106. After building a model, a data scientist typically uploads the model to a repository, such as repository 106. The control unit 108 can be configured, in some embodiments, to periodically check for models uploaded to the repository since a last check and to analyze any newly discovered models against the rules and policies 112” [0025]).

Regarding claim 8, the combination of Bhide-Vikram discloses the system of claim 7. Bhide further discloses: wherein the policy governs use of the trained artificial intelligence model based on a security characteristic of the controller entity (“In addition, the control unit 108 is configured, in some embodiments, to automatically evaluate a model against the set of rules in response to detecting that the model has been stored in the repository 106. After building a model, a data scientist typically uploads the model to a repository, such as repository 106. The control unit 108 can be configured, in some embodiments, to periodically check for models uploaded to the repository since a last check and to analyze any newly discovered models against the rules and policies 112. In other embodiments, uploading a model to the repository 106 can automatically trigger a command sent to the control unit 108 to analyze the newly uploaded model against the rules and policies 112. If the AI governance policy engine detects any rule or policy violations, a notification is sent to one or more of the client devices 102 to notify the user of the violation. In this way, the user can correct or update the model to comply with the rules. This reduces the likelihood that the model will be rejected when a human validator reviews the model, thus saving time in the model development cycle” [0025] [Examiner notes that this text focuses specifically on governing technology through rules and policies, with explicit mention of how a controller entity (the AI governance engine) ensures compliance. It includes mechanisms to enforce security policies, such as validating models and data, which ties directly to the concept of security characteristics of the controller entity. Governance is central to the text, making it relevant to the idea of a second policy (this reasoning applies to the policies following the “second policy”)]).

Regarding claim 10, the combination of Bhide-Vikram discloses the system of claim 4. Bhide further discloses: wherein the analyzing of the proposed artificial intelligence use case further results in a determination of a level of risk associated with the proposed artificial intelligence use case by the consumer for the trained artificial intelligence model, and wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on the level of risk (“The AI governance policy engine enables a user, such as the Chief Risk Officer of an organization, to define a set of AI Governance rules and policies that are to be enforced in the organization. Typically, these are the same rules which are validated by human model validators. Some sample rules can include, but are not limited to, a rule that an AI model should only be built using governed data. As used herein, governed data is data that has been through cleansing process and complies with a selected quality standard. In other words, governed data is data that has undergone specified quality checks. Governed data can be stored in governed catalogs such that data used from the governed catalogs is ensured to have gone through the specified quality checks. Non-governed data is typically easily accessible. For example, a data scientist can make use of data which is easily available on their laptop or a common machine. However, such data is not necessarily governed data and, thus, poses quality concerns. Hence, in some embodiments, the system described herein ensures that models are built only using governed data which is available in a governed catalog and has undergone proper data quality checks” [0030] [Examiner notes that this text describes a systematic evaluation process, similar to analyzing the proposed use of AI technology, as it focuses on ensuring that the AI use at hand complies with predefined rules and policies. The detection of violations implies that the system evaluates risks associated with the AI’s use, identifying when certain actions or uses pose potential compliance or ethical concerns. The rules and policies defined by the CRO are intended to assess and mitigate risks associated with the AI’s use. This aligns with the level of risk being determined based on the proposed use of the AI technology. The CRO’s role is critical in setting the level of acceptable risk and ensuring that AI technologies are used within these bounds, whether the risk is low or high. The governance engine ensures that the models meet specific standards, which directly relates to risk determination by ensuring the proposed use adheres to safety and compliance requirements]).

Regarding claim 15, the combination of Bhide-Vikram discloses the system of claim 4. Bhide further discloses: wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on a governmental regulation of a result of the proposed artificial intelligence use case with the trained artificial intelligence model (“The AI governance policy engine enables a user, such as the Chief Risk Officer of an organization, to define a set of AI Governance rules and policies that are to be enforced in the organization. Typically, these are the same rules which are validated by human model validators. Some sample rules can include, but are not limited to, a rule that an AI model should only be built using governed data. As used herein, governed data is data that has been through cleansing process and complies with a selected quality standard. In other words, governed data is data that has undergone specified quality checks” [0023] [Examiner notes that this text emphasizes how the AI system adjusts based on previous experiences, which could be related to consumer-specific behavior (environment based on human experience)]).

Regarding claim 17, the combination of Bhide-Vikram discloses the system of claim 4. Bhide further discloses: generating, by the system, a mitigation rule based on the proposed artificial intelligence use case with the trained artificial intelligence model by the consumer, wherein the artificial intelligence usage policy comprises a policy that facilitates use of the trained artificial intelligence model based on an application of the mitigation rule to the proposed artificial intelligence use case by the consumer of the trained artificial intelligence model (“Other example rules include a rule that an AI model should have fairness metric above a given threshold, e.g. 80%, and a rule that an AI model should have a quality metric above a given threshold, e.g. 90%. Other rules can be related to explainability, data drift, etc. Rules for validating models are known to one of skill in the art and not explained in more detail herein. The set of rules to be applied can be selected or created by a user, such as Chief Risk Officer, and can vary in different embodiments depending on the purpose for which the models are being built. The set of rules are encoded in the AI governance policy engine implemented by the control unit 108 such that the AI governance policy engine is able to validate a model against these rules. For example, the AI governance policy engine can check if the model was built using governed data or not. One example technique for performing this check on governed data can be performed by comparing a list of approved governed data catalogs to metadata information on the data used in the model that is captured when the model is built. For example, a data scientist can specify the training data used to build the model. This can be a data asset which is present in the project in which the model is being built. Thus, model development tools maintain a concept of projects which contain all the artefacts which are used to build the model. Data assets in the project can be copied from a governed catalog. The AI governance policy engine checks that the data asset used to build the model has been copied from a governed catalog” [0024]).

Claim 23 recites substantially the same limitations as claim 10, in the form of a computer-implemented system for implementing the corresponding method; therefore, it is rejected under the same rationale.

Claims 6, 11-14, 16, and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0147597 A1 to Bhide et al. (hereinafter, “Bhide”) in view of US 2022/0417613 A1 to Vikram et al. (hereinafter, “Vikram”), and further in view of US 2022/0036153 A1 to O’Malia et al. (hereinafter, “O’Malia”).

Regarding claim 6, the combination of Bhide-Vikram discloses the system of claim 4. Bhide-Vikram do not explicitly disclose: wherein the artificial intelligence usage policy comprises a policy that governs content of a prompt submitted to the trained artificial intelligence model in furtherance of the proposed artificial intelligence use case. However, O’Malia discloses: wherein the artificial intelligence usage policy comprises a policy that governs content of a prompt submitted to the trained artificial intelligence model in furtherance of the proposed artificial intelligence use case (“In a third Stage, the Priming Module 110 may also incorporate optimization processes that condition, translate, or otherwise transform the language representation outputs produced by the Visual/Natural Language Mapping Module 108 before the language representation outputs are transferred to the ULLM 114. For example, the Discriminator 118 of the Priming Module 110 may block or discard certain types of information. This optional third Stage may use data regarding the performance or effectiveness of previous data exchanges between the environment of the AI Agent 112 and the ULLM 114 in order to manipulate the data provided to the ULLM 114 in order to improve the likelihood that the ULLM 114 will generate outputs which improve the performance of the AI Agent 112” [0049] [Examiner notes that the process of conditioning and filtering information before passing it to the model can be considered governing the content of the prompt, because the module decides what information is relevant and prioritizes it. The conditioning step ensures that only useful or appropriate data is fed into the model, which aligns with the idea of governing prompt content by ensuring only relevant and policy-compliant data is used]).

Thus, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the method of Bhide-Vikram with the added structure of O’Malia, because the AI Agent Controller provides a system which encodes human knowledge and association frameworks in a framework which enables the AI Agent to leverage such knowledge and associations in its action or policy selection(s). Giving the AI Agent the ability to access “commonsense reasoning” is a key unsolved problem in AI systems [O’Malia 0054].

Regarding claims 11 and 19, the combination of Bhide-Vikram discloses the system of claims 4/18. Bhide-Vikram do not explicitly disclose: wherein the trained artificial intelligence model comprises a generative language model, and wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on content predicted to be generated in the proposed artificial intelligence use case by the generative language model. However, O’Malia discloses: wherein the trained artificial intelligence model comprises a generative language model, and wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on content predicted to be generated in the proposed artificial intelligence use case by the generative language model (“In a third Stage, the Priming Module 110 may also incorporate optimization processes that condition, translate, or otherwise transform the language representation outputs produced by the Visual/Natural Language Mapping Module 108 before the language representation outputs are transferred to the ULLM 114. For example, the Discriminator 118 of the Priming Module 110 may block or discard certain types of information. This optional third Stage may use data regarding the performance or effectiveness of previous data exchanges between the environment of the AI Agent 112 and the ULLM 114 in order to manipulate the data provided to the ULLM 114 in order to improve the likelihood that the ULLM 114 will generate outputs which improve the performance of the AI Agent 112. An example of the conditioning step may include discarding information which is unlikely to be relevant to the AI Agent's decision-making process, or favoring the delivery of novel or changing information which may be more critical to the AI Agent's short-term action selection. The conditioning step may also include the prioritization of information which matches certain key mental models or abstract concepts which the ULLM 114 is deemed or predicted to process effectively, such as certain slot-filling tasks in which a proven ULLM mental model framework may be used to convert a certain type of information into a robust prediction for AI Agent action selection” [0049] [Examiner notes that the Priming Module 110 is described as conditioning the content before it even reaches the generative model (ULLM). It includes blocking or discarding certain types of content, which directly governs what the generative language model will produce, ensuring that the output aligns with the predicted content policies or rules. This is a clear form of governing the AI use based on the content that is predicted to be generated, as the Priming Module ensures that only acceptable content is processed and passed on for generation]). The reasons of obviousness have been noted in the rejection of claim 6 above and are applicable herein.

Regarding claim 12, the combination of Bhide-Vikram discloses the system of claim 4. Bhide-Vikram do not explicitly disclose: wherein the trained artificial intelligence model comprises a chain of linked trained artificial intelligence models. However, O’Malia discloses: wherein the trained artificial intelligence model comprises a chain of linked trained artificial intelligence models (“Traditionally, Deep Learning, Reinforcement Learning, and Imitation Learning Algorithms, Models, or Agents (“Agents”), also known as AI Agents or Neural Networks, are designed to take actions and/or make decisions in a given domain in order to attain a reward or achieve a goal, and learn through experience to do this increasingly successfully. Typically, the Agent takes an action, or observes an action or a number of action sequences for a given environment state in the context of a goal, which may be known or unknown to the Agent” [0003]; “The ULLM may use a class of natural language processing models (such as GPT-3 from OpenAI, introduced in Language Models are Few-Shot Learners, Brown et al., 2020) based on an approach pioneered in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018) which combines a deep learning technique called attention in combination with a deep learning model type known as transformers to build predictive models which encode, and are able to accurately predict, human writing after having been trained on large volumes of written content. With the advent of such very large models such as GPT-1, GPT-2, and in 2020 the ultra-large-language-model called GPT-3 (all by OpenAI), advances in these architectures began to not only model language on the word level, but successfully model and capture the structure and abstractive capability of human language on a higher level. This novel capability to replicate some of the abstractive capability of human writing enables the use of such models in combination with environment and goal observations to make suggestions which provide the same associative advantages that humans may use when they interact with such environments. The AI Agent Controller may transfer these outputs or suggestions to the AI Agent. Alternatively or in addition, the AI Agent Controller may translate or convert these outputs or suggestions for the AI Agent” [0018] [Examiner notes that the ULLM is mentioned as the core generative model in this context. This fits the idea of a chain of models, as the output of the ULLM is used by the AI Agent Controller, which processes it further before sending it to the AI Agent (a neural network). The models in the chain work together in sequence: the ULLM generates content, the AI Agent Controller processes it, and the AI Agent uses the output to take action. The text outlines this flow of information and how one model’s output can become the input for another]). The reasons of obviousness have been noted in the rejection of claim 6 above and are applicable herein.

Regarding claims 13 and 20, the combination of Bhide-Vikram discloses the system of claims 4/18. Bhide-Vikram do not explicitly disclose: revising, by the system, the artificial intelligence usage policy based on a result of the analyzed proposed artificial intelligence use case with the trained artificial intelligence model. However, O’Malia discloses: revising, by the system, the artificial intelligence usage policy based on a result of the analyzed proposed artificial intelligence use case with the trained artificial intelligence model (“In a third Stage, the Priming Module 110 may also incorporate optimization processes that condition, translate, or otherwise transform the language representation outputs produced by the Visual/Natural Language Mapping Module 108 before the language representation outputs are transferred to the ULLM 114. For example, the Discriminator 118 of the Priming Module 110 may block or discard certain types of information. This optional third Stage may use data regarding the performance or effectiveness of previous data exchanges between the environment of the AI Agent 112 and the ULLM 114 in order to manipulate the data provided to the ULLM 114 in order to improve the likelihood that the ULLM 114 will generate outputs which improve the performance of the AI Agent 112. An example of the conditioning step may include discarding information which is unlikely to be relevant to the AI Agent's decision-making process, or favoring the delivery of novel or changing information which may be more critical to the AI Agent's short-term action selection. The conditioning step may also include the prioritization of information which matches certain key mental models or abstract concepts which the ULLM 114 is deemed or predicted to process effectively, such as certain slot-filling tasks in which a proven ULLM mental model framework may be used to convert a certain type of information into a robust prediction for AI Agent action selection” [0049]). The reasons of obviousness have been noted in the rejection of claim 6 above and are applicable herein.

Regarding claim 14, the combination of Bhide-Vikram discloses the system of claim 4. Bhide-Vikram do not explicitly disclose: wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on a characteristic of the consumer. However, O’Malia discloses: wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on a characteristic of the consumer (“The AI Agent Controller may return to the AI Agent, via novel conversion or translation methods, information or guidance, or reward signal(s) regarding elements of the environment, components of the environment, goals, actions, or any combinations thereof which may be relevant or important for any positive or negative reasons. Alternatively or in addition, the AI Agent Controller may return to the AI Agent any other influence or guidance which enables the AI Agent to obtain similar performance benefits that a human may otherwise have had based on the human's use of past knowledge and its generalized application to a given environment/environment state and the goals, objects, relationships, actions and/or other factors which may exist within the environment of the AI Agent” [0022]). The reasons of obviousness have been noted in the rejection of claim 6 above and are applicable herein.

Regarding claim 16, the combination of Bhide-Vikram discloses the system of claim 4. Bhide-Vikram do not explicitly disclose: determining, by the system, a characteristic of training data that was used to generate the trained artificial intelligence model, and wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on the characteristic of the training data. However, O’Malia discloses: determining, by the system, a characteristic of training data that was used to generate the trained artificial intelligence model, and wherein the artificial intelligence usage policy comprises a policy that governs use of the trained artificial intelligence model based on the characteristic of the training data (“The AI Agent Controller 102 may provide a novel means for the AI Agent 112 to access past human “experience”, encoded in the model via vast volumes of training data used to shape the weights of the network of the ULLM 114, to leverage thought templates for typical human reasoning or thought patterns, and to combine them with new information and context to provide inference related to human thought models, and a mechanism through which such outputs of the ULLM 114 may influence or direct the actions of the AI Agent 112 in an environment” [0055]). The reasons of obviousness have been noted in the rejection of claim 6 above and are applicable herein.

Claim 21 recites substantially the same limitations as claim 6, in the form of a computer program product for implementing the corresponding method; therefore, it is rejected under the same rationale.

Regarding claim 22, the combination of Bhide-Vikram discloses the system of claim 8. Bhide does not explicitly disclose: wherein the potential harm to people comprises at least one of spreading disinformation, intentional toxicity, non-consensual use of likeness, or increased carbon emissions. However, Vikram discloses: wherein the potential harm to people comprises at least one of spreading disinformation, intentional toxicity, non-consensual use of likeness, or increased carbon emissions (“In embodiments, as contemplated herein a processor (e.g., via governance system) may generate the one or more rules using a variety of methods. In some embodiments, a processor may utilize an AI rule engine to generate one or more rules. While in some embodiments, the AI rule engine is a subcomponent of the AI engine (e.g., the AI engine is trained to perform AI rule assignment) enforcing the one or more rules of the governance system, in other embodiments, the AI rule engine is a separately trained AI system. In embodiments, the AI rule engine may analyze historical data of the participant network, such as the various types of metadata and information compiled within the blockchain, to generate the one or more rules. In these embodiments, a processor may task the AI rule engine to identify one or more rules that may accomplish one or more system goals. For example, a processor could task the AI rule engine to develop rules that may prevent and/or limit the exposure of nefarious deep fakes throughout the participant network. In embodiments, the AI rule engine may produce one or more rules associated with particular participants, subsets of participants, all of the participants, or any combination thereof. While in some embodiments, a participant who generated or created the media data (e.g., owner of the media data) may be able to define one or more rules that should be applied to their particular media data, in other embodiments, the AI rule engine may automatically generate a selection of rules that a participant/owner may select and customize the one or more rules that apply to the media data. For example, the AI rule engine could automatically generate Rule A, Rule B, and Rule C for a particular media data (e.g., image) and the participant/owner could select Rule A and Rule C to apply to their particular media data” [0034] [Examiner notes that nefarious deep fakes are seen as disinformation/non-consensual use of likeness]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARON MATTHEWOS WORKU, whose telephone number is (703) 756-1761. The examiner can normally be reached Monday - Friday, 9:30am - 6:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Linglan Edwards, can be reached at 571-270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SARON MATTHEWOS WORKU/
Examiner, Art Unit 2408

/LINGLAN EDWARDS/
Supervisory Patent Examiner, Art Unit 2408
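For readers skimming the rejection, the claim-1 pipeline that the examiner maps onto Bhide and Vikram can be summarized in a few lines of code: maintain a catalog of models, use cases, and usage policies; analyze a proposed use case; select a model and a policy; identify risks; generate usage restrictions; and gate usage accordingly. The Python sketch below is purely illustrative; every class name, rule, and threshold is a hypothetical assumption, not the applicant's embodiment or Bhide's control unit 108.

```python
# Illustrative sketch of the claim-1 governance flow as characterized in this
# Office Action. All names, rules, and thresholds are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    task: str                # e.g., "text-generation", "classification"
    governed_data: bool      # Bhide-style rule: built only from governed data
    fairness: float          # fairness metric in [0, 1]

@dataclass
class UsagePolicy:
    name: str
    min_fairness: float
    require_governed_data: bool

@dataclass
class Catalog:               # the claim-1 "data structure"
    models: list[Model] = field(default_factory=list)
    policies: list[UsagePolicy] = field(default_factory=list)

def analyze_use_case(catalog: Catalog, task: str, audience: str):
    """Select a model and policy for a proposed use case, then derive
    risks and usage restrictions before permitting deployment."""
    # Select a candidate model matching the proposed task.
    model = next((m for m in catalog.models if m.task == task), None)
    if model is None:
        return None, ["no suitable model"], ["deployment blocked"]

    # Select the strictest applicable policy (illustrative selection rule).
    policy = max(catalog.policies, key=lambda p: p.min_fairness)

    # Identify risks from the use case, model, and policy together.
    risks = []
    if policy.require_governed_data and not model.governed_data:
        risks.append("model trained on non-governed data")
    if model.fairness < policy.min_fairness:
        risks.append(f"fairness {model.fairness:.2f} below policy floor")
    if audience == "public":
        risks.append("public-facing output: disinformation/likeness risk")

    # Generate usage restrictions that mitigate the identified risks.
    restrictions = []
    if "public-facing output: disinformation/likeness risk" in risks:
        restrictions.append("filter prompts and generated content")  # cf. claim 6
    if any("fairness" in r or "non-governed" in r for r in risks):
        restrictions.append("human review required before release")

    return model, risks, restrictions

# Usage: gate a consumer's proposed use case before deployment.
catalog = Catalog(
    models=[Model("gen-1", "text-generation", governed_data=True, fairness=0.85)],
    policies=[UsagePolicy("baseline", min_fairness=0.8, require_governed_data=True)],
)
model, risks, restrictions = analyze_use_case(catalog, "text-generation", "public")
print(model.name if model else "rejected", risks, restrictions)
```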

Prosecution Timeline

Sep 15, 2023: Application Filed
Jan 23, 2025: Non-Final Rejection (§103)
Apr 02, 2025: Interview Requested
Apr 10, 2025: Applicant Interview (Telephonic)
Apr 16, 2025: Examiner Interview Summary
Apr 21, 2025: Response Filed
Jul 03, 2025: Final Rejection (§103)
Aug 14, 2025: Interview Requested
Aug 21, 2025: Examiner Interview Summary
Aug 21, 2025: Applicant Interview (Telephonic)
Aug 26, 2025: Response after Non-Final Action
Sep 15, 2025: Request for Continued Examination
Oct 05, 2025: Response after Non-Final Action
Oct 16, 2025: Non-Final Rejection (§103)
Dec 26, 2025: Interview Requested
Jan 13, 2026: Applicant Interview (Telephonic)
Jan 13, 2026: Examiner Interview Summary
Jan 14, 2026: Response Filed
Mar 20, 2026: Final Rejection (§103; current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547939: SYSTEM AND A METHOD FOR PERFORMING A PRIVACY-PRESERVING DISTRIBUTION SIMILARITY TESTS BETWEEN A PLURALITY OF DATASETS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12524579: SRAM PHYSICALLY UNCLONABLE FUNCTION (PUF) MEMORY FOR GENERATING KEYS BASED ON DEVICE OWNER (granted Jan 13, 2026; 2y 5m to grant)
Patent 12513013: Dynamic Cross-Node Multidimensional Hashchain Network-Based Meta-Content Enabler for Real-Time Content Based Anomaly Detection (granted Dec 30, 2025; 2y 5m to grant)
Patent 12475240: PROTECTED CONTENT CONTAMINATION PREVENTION (granted Nov 18, 2025; 2y 5m to grant)
Patent 12470519: INTRA-VLAN TRAFFIC FILTERING IN A DISTRIBUTED WIRELESS NETWORK (granted Nov 11, 2025; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 67%
With Interview: 99% (+53.6%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 18 resolved cases by this examiner; grant probability derived from career allow rate.
