Prosecution Insights
Last updated: April 19, 2026
Application No. 18/309,049

PROVIDING AND COMPARING CUSTOMIZED RISK SCORES FOR ARTIFICIAL INTELLIGENCE MODELS

Non-Final OA · §101 · §102 · §103 · §Other
Filed: Apr 28, 2023
Examiner: JIANG, HAIMEI
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 51% (Moderate)
OA Rounds: 1-2
To Grant: 4y 3m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 51% of resolved cases (210 granted / 415 resolved; -4.4% vs TC avg)
Interview Lift: +31.9% (strong, ≈ +32%; allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 4y 3m avg prosecution; 30 currently pending
Career History: 445 total applications across all art units
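As a quick sanity check, the headline figures above are mutually consistent. A minimal sketch follows; the with/without-interview case counts are not shown on the page, so treating the lift as the gap between the with-interview allow rate and the overall allow rate is an assumption:

```python
# Reconstruct the dashboard's career statistics from the raw counts shown.
granted, resolved = 210, 415          # "210 granted / 415 resolved"

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # ~50.6%, displayed as 51%

# Reading "+31.9% interview lift" as (with-interview allow rate) minus
# (overall allow rate) -- an assumption, since the underlying with/without
# counts are not displayed:
lift = 31.9
implied_with_interview = allow_rate + lift
print(f"Implied with-interview rate: {implied_with_interview:.1f}%")  # ~82.5%, displayed as 82%
```

The implied with-interview rate lands on the 82% shown in the summary card, which suggests the displayed figures round from the same underlying data.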

Statute-Specific Performance

§101: 16.4% (-23.6% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 415 resolved cases
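The displayed deltas let one back out the Tech Center average that the black line marks. A minimal sketch, assuming each delta is simply the examiner's rate minus the TC average:

```python
# Back out the implied Tech Center average for each statute from the
# examiner's rate and the displayed delta vs. the TC average.
rates  = {"101": 16.4, "103": 57.4, "102": 12.9, "112": 7.8}
deltas = {"101": -23.6, "103": 17.4, "102": -27.1, "112": -32.2}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)   # every statute implies the same TC average: 40.0%
```

That every statute implies the same 40.0% figure is consistent with the page's note that the black line is a single Tech Center average estimate rather than a per-statute benchmark.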

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the Application filed on 4/28/2023. Claims 1-20 are pending in the case. Claims 1, 11 and 18 are independent claims.

Election/Restrictions

During a telephone conversation with Timothy Farrell on 2/4/2026, a provisional election was made with traverse to prosecute the invention of claims 1-10 and 18-20. Affirmation of this election must be made by applicant in replying to this Office action. Claims 11-17 are withdrawn from further consideration by the examiner, 37 CFR 1.142(b), as being drawn to a non-elected invention.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 and 18-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity).
If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult the 2019 PEG for more details of the analysis.

Step 1 Analysis: Is the claim to a process, machine, manufacture or composition of matter? See MPEP § 2106.03.

Claims 1-10 are drawn to a system, claim 11 is drawn to a method (not elected by the applicant per the election), and claims 18-20 are drawn to a computer program product; therefore each of these claim groups falls under one of the four categories of statutory subject matter (machines/products/apparatus, processes/methods, manufactures, and compositions of matter; Step 1).
Nonetheless, the claims are directed to a judicially recognized exception of an abstract idea without significantly more (Step 2A, see below).

As to claim 1: Claim 1 recites “A system comprising: a memory that stores computer executable components; and a processor that executes computer executable components stored in the memory, wherein the computer executable components comprise: a requirements component that receives risk assessment requirements for an artificial intelligence model; a weight component that determines weights for dimensions and metrics based on the risk assessment requirements; a risk profile generation component that combines the weights for dimensions and metrics into a single set of weights to generate a risk profile comprising weighted dimensions and weighted metrics; and a customized score component that calculates a customized risk assessment score for the artificial intelligence model based on the risk profile and measurements of the artificial intelligence model corresponding to the weighted metrics.”

Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).

Yes, the limitation “that determines weights for dimensions and metrics based on the risk assessment requirements” is the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion). See MPEP § 2106.04(a)(2)(III).
Yes, the limitation “combines the weights for dimensions and metrics into a single set of weights to generate a risk profile comprising weighted dimensions and weighted metrics” is the abstract idea of a mathematical calculation, as “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP § 2106.04(a)(2)(I)(C).

Yes, the limitation “calculates a customized risk assessment score for the artificial intelligence model based on the risk profile and measurements of the artificial intelligence model corresponding to the weighted metrics” is the abstract idea of a mathematical relationship, as “a mathematical relationship is a relationship between variables or numbers. A mathematical relationship may be expressed in words or using mathematical symbols”. See MPEP § 2106.04(a)(2)(I)(A).

Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).

No, the limitation “an artificial intelligence model” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is unclear how the “artificial intelligence model” is used, nor does the specification make it clear how these actions are performed. Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
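For readers less familiar with the claim language, the weighted-combination scheme the examiner characterizes as a mathematical calculation can be sketched as follows; the dimension names, weights, and measurements are hypothetical illustrations, not taken from the claims or the specification:

```python
# Illustrative sketch of the weighted risk scoring recited in claim 1:
# dimension weights and per-metric weights are combined into one flat set
# of weights (the "risk profile"), then applied to the model's measurements.

dimension_weights = {"fairness": 0.6, "robustness": 0.4}         # hypothetical
metric_weights = {                                               # per dimension
    "fairness":   {"demographic_parity": 0.7, "equal_opportunity": 0.3},
    "robustness": {"adversarial_accuracy": 1.0},
}

# Combine dimension and metric weights into a single set keyed by metric.
risk_profile = {
    metric: dimension_weights[dim] * w
    for dim, metrics in metric_weights.items()
    for metric, w in metrics.items()
}

measurements = {                                                 # hypothetical
    "demographic_parity": 0.9,
    "equal_opportunity": 0.8,
    "adversarial_accuracy": 0.7,
}

# The "customized risk assessment score" as a weighted sum.
score = sum(risk_profile[m] * v for m, v in measurements.items())
print(f"Customized risk assessment score: {score:.3f}")
```

As the sketch makes plain, the recited combination and scoring reduce to multiplications and a weighted sum, which is why the examiner groups these limitations under the mathematical-concepts category.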
No, the limitations “a memory that stores computer executable components; and a processor that executes computer executable components stored in the memory”, “a weight component”, “a customized score component”, and “a risk profile generation component” are additional elements that amount to adding the words “apply it” (or an equivalent) to the judicial exception, or merely use a computer in its ordinary capacity as a tool to perform an existing process, and as such are deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(f)(2).

No, the limitation “a requirements component that receives risk assessment requirements for an artificial intelligence model” amounts to mere data gathering: it is necessary to acquire the data in order to use the recited judicial exception to perform “receive”. Therefore, the additional limitation is insignificant extra-solution activity to the judicial exception, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(g).

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considered as an ordered combination and as a whole.

Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.

First, the additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are deemed insufficient to transform the judicial exception into a patentable invention because the claimed limitations generally link the judicial exception to the technology environment. See MPEP 2106.05(h). However, they are included below for the sake of completeness.
Second, the additional elements that are a mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, are deemed insufficient to transform the judicial exception into a patentable invention because the limitations generally apply the use of a generic computer and/or process to the judicial exception. See MPEP 2106.05(f). However, they are included below for the sake of completeness.

No, the limitation “an artificial intelligence model” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is unclear how the “artificial intelligence model” is used, nor does the specification make it clear how these actions are performed. Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).

No, the limitations “a memory that stores computer executable components; and a processor that executes computer executable components stored in the memory”, “a weight component”, “a customized score component”, and “a risk profile generation component” are additional elements that amount to adding the words “apply it” (or an equivalent) to the judicial exception, or merely use a computer in its ordinary capacity as a tool to perform an existing process, and as such are deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(f)(2).

No, the limitation “a requirements component that receives risk assessment requirements for an artificial intelligence model” amounts to mere data gathering: it is necessary to acquire the data in order to use the recited judicial exception to perform “receive”.
Therefore, the additional limitation is insignificant extra-solution activity to the judicial exception, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(g). Furthermore, the additional element is directed to receiving or transmitting data over a network / performing repetitive calculations / electronic recordkeeping / storing and retrieving information in memory / electronically scanning or extracting data from a physical document, which the courts have recognized as well-understood, routine, and conventional when claimed in a generic manner. See MPEP § 2106.05(d)(II).

As to claim 18: Claim 18 recites “A computer program product facilitating the comparison of risk assessments for multiple artificial intelligence models, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: receive a first risk profile for a first artificial intelligence model and a second risk profile for a second artificial intelligence model; calculate a first customized risk assessment score for the first artificial intelligence model based on the first risk profile and measurements associated with the first artificial intelligence model corresponding to metrics of the first risk profile; and generate a second converted risk assessment score for the second artificial intelligence model based on the first risk profile and measurements associated with the second artificial intelligence model corresponding to metrics of the first risk profile.”

Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Yes, the limitations “calculate a first customized risk assessment score for the first artificial intelligence model based on the first risk profile and measurements associated with the first artificial intelligence model corresponding to metrics of the first risk profile” and “generate a second converted risk assessment score for the second artificial intelligence model based on the first risk profile and measurements associated with the second artificial intelligence model corresponding to metrics of the first risk profile” are the abstract idea of a mathematical calculation, as “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP § 2106.04(a)(2)(I)(C).

Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).

No, the limitation “artificial intelligence model” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is unclear how the “artificial intelligence model” is used, nor does the specification make it clear how these actions are performed. Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
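The claim 18 scheme of scoring two models against the same (first) risk profile, so that a “customized” score and a “converted” score become directly comparable, can be sketched similarly; all names and numbers here are hypothetical illustrations, not taken from the claims:

```python
# Illustrative sketch of claim 18's comparison: apply one shared risk
# profile to the measurements of two models, yielding comparable scores.

risk_profile = {"demographic_parity": 0.42, "equal_opportunity": 0.18,
                "adversarial_accuracy": 0.40}                    # hypothetical

model_a = {"demographic_parity": 0.9, "equal_opportunity": 0.8,
           "adversarial_accuracy": 0.7}                          # first model
model_b = {"demographic_parity": 0.6, "equal_opportunity": 0.9,
           "adversarial_accuracy": 0.8}                          # second model

def risk_score(profile, measurements):
    """Weighted sum of a model's measurements under a risk profile."""
    return sum(w * measurements[m] for m, w in profile.items())

score_a = risk_score(risk_profile, model_a)   # "customized" score
score_b = risk_score(risk_profile, model_b)   # "converted" score
print(score_a > score_b)                      # the scores now compare directly
```

Because both scores are weighted sums under the same profile, the comparison itself is arithmetic, which is the basis for the examiner's mathematical-calculation characterization of these limitations.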
No, the limitation “A computer program product facilitating the comparison of risk assessments for multiple artificial intelligence models, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(f)(2).

No, the limitation “receive a first risk profile for a first artificial intelligence model and a second risk profile for a second artificial intelligence model” amounts to mere data gathering: it is necessary to acquire the data in order to use the recited judicial exception to perform “receive”. Therefore, the additional limitation is insignificant extra-solution activity to the judicial exception, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(g).

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considered as an ordered combination and as a whole.

Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.

First, the additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are deemed insufficient to transform the judicial exception into a patentable invention because the claimed limitations generally link the judicial exception to the technology environment. See MPEP 2106.05(h).
However, they are included below for the sake of completeness. Second, the additional elements that are a mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, are deemed insufficient to transform the judicial exception into a patentable invention because the limitations generally apply the use of a generic computer and/or process to the judicial exception. See MPEP 2106.05(f). However, they are included below for the sake of completeness.

No, the limitation “artificial intelligence model” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is unclear how the “artificial intelligence model” is used, nor does the specification make it clear how these actions are performed. Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).

No, the limitation “A computer program product facilitating the comparison of risk assessments for multiple artificial intelligence models, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to…” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(f)(2).

No, the limitation “receive a first risk profile for a first artificial intelligence model and a second risk profile for a second artificial intelligence model” amounts to mere data gathering.
It is necessary to acquire the data in order to use the recited judicial exception to perform “receive”. Therefore, the additional limitation is insignificant extra-solution activity to the judicial exception, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(g). Furthermore, the additional element is directed to receiving or transmitting data over a network / performing repetitive calculations / electronic recordkeeping / storing and retrieving information in memory / electronically scanning or extracting data from a physical document, which the courts have recognized as well-understood, routine, and conventional when claimed in a generic manner. See MPEP § 2106.05(d)(II).

Thus, considering the additional elements individually and in combination, and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter. Therefore, in examining the elements as recited by the limitations individually and as an ordered combination, as a whole the independent claim limitations do not recite what the courts have identified as “significantly more”.

Dependent claim 2: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Incorporates the abstract idea of the independent claim. Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 3: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
See MPEP § 2106.04(II)(A)(1). Yes, the limitation “determination component that determines whether the first risk profile can be applied to measurements of the second artificial intelligence model” is the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion). See MPEP § 2106.04(a)(2)(III).

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No, the limitation “receives the risk profile and a second risk profile associated with a second artificial intelligence model” amounts to mere data gathering: it is necessary to acquire the data in order to use the recited judicial exception to perform “receive”. Therefore, the additional limitation is insignificant extra-solution activity to the judicial exception, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(g).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No, the limitation “receives the risk profile and a second risk profile associated with a second artificial intelligence model” amounts to mere data gathering: it is necessary to acquire the data in order to use the recited judicial exception to perform “receive”. Therefore, the additional limitation is insignificant extra-solution activity to the judicial exception, and as such is deemed insufficient to transform the judicial exception into a patentable invention. See MPEP §§ 2106.04(d), 2106.05(g).
Furthermore, the additional element is directed to receiving or transmitting data over a network / performing repetitive calculations / electronic recordkeeping / storing and retrieving information in memory / electronically scanning or extracting data from a physical document, which the courts have recognized as well-understood, routine, and conventional when claimed in a generic manner. See MPEP § 2106.05(d)(II).

Dependent claim 4: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Yes, the limitation “calculates a second converted risk assessment score for the second artificial intelligence model based on the first risk profile and the measurements of the second artificial intelligence model” is the abstract idea of a mathematical calculation, as “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP § 2106.04(a)(2)(I)(C). Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 5: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Yes; incorporates the abstract idea of the independent claim. Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 6: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Yes, the limitation “calculates a first converted risk assessment score for the first artificial intelligence model based on the second risk profile and the measurements of the first artificial intelligence model” is the abstract idea of a mathematical calculation, as “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP § 2106.04(a)(2)(I)(C). Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 7: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Yes, the limitation “determines whether the first risk profile and the second risk profile comprise intersecting metrics” is the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion). See MPEP § 2106.04(a)(2)(III). Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 8: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Yes, the limitation “calculates a first converted risk assessment score based on a configuration of the intersecting metrics and the measurements of the first artificial intelligence model and a second converted risk assessment score based on the configuration of the intersecting metrics and the measurements of the second artificial intelligence model” is the abstract idea of a mathematical calculation, as “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP § 2106.04(a)(2)(I)(C). Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 9: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Yes; incorporates the abstract idea of the independent claim. Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 10: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Yes; incorporates the abstract idea of the independent claim. Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 19: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Yes, the limitation “compare the first customized risk assessment score and the second converted risk assessment score” is the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion).
See MPEP § 2106.04(a)(2)(III). Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

Dependent claim 20: Incorporates the rejection of the independent claim. Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1). Yes; incorporates the abstract idea of the independent claim. Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d). No. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05. And is the additional element recognized as well-understood, routine, and conventional? No.

The dependent claims, as analyzed above, do not recite limitations that integrate the judicial exception into a practical application. In addition, the claim limitations do not include additional elements that are sufficient to amount to significantly more than the judicial exception (Step 2B). Therefore, the claims do not recite any limitations, when considered individually or as a whole, that recite what the courts have identified as “significantly more” (see MPEP 2106.05), and therefore, as a whole, the claims are not patent eligible. As shown above, the dependent claims do not provide any additional elements that, when considered individually or as an ordered combination, amount to significantly more than the abstract idea identified. Therefore, as a whole, the dependent claims do not recite what the courts have identified as “significantly more” than the recited judicial exception. Therefore, claims 1-10 and 18-20 are rejected under 35 U.S.C.
101 because the claimed invention is directed to a judicial exception and does not recite, when the claim elements are examined individually and as a whole, elements that the courts have identified as “significantly more” than the recited judicial exception.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models,” De et al., 12/30/2022 (“De”).
Referring to claim 1, De discloses a system comprising: a memory that stores computer executable components; and a processor that executes computer executable components stored in the memory, wherein the computer executable components comprise:

a requirements component that receives risk assessment requirements for an artificial intelligence model (page 2 of De: “ComplAI can analyze explainability, robustness, performance, and fairness of a given model and scores the model on these aspects.”);

a weight component that determines weights for dimensions and metrics based on the risk assessment requirements (page 7 of De: “The weights selection (W) for each of the model assessment factors varies with use case for example: a disease prediction model needs to be more explainable and fair and in contrast a churn prediction model can be reasonably explainable but highly performant. The weight selections of the model assessment factors for a model for a use-case entirely depends on the importance of these factors of the use-case and usually set by the subject matter (domain) experts of the organization”);

a risk profile generation component that combines the weights for dimensions and metrics into a single set of weights to generate a risk profile comprising weighted dimensions and weighted metrics (pages 7-8 of De: “The weights selection (W) for each of the model assessment factors varies with use case for example: a disease prediction model needs to be more explainable and fair and in contrast a churn prediction model can be reasonably explainable but highly performant. The weight selections of the model assessment factors for a model for a use-case entirely depends on the importance of these factors of the use-case and usually set by the subject matter (domain) experts of the organization… As the above datasets (except Lung Cancer Detection) do not have any protected attributes, we use different datasets to assess fairness (discussed in Section 6).
For regression problems, we use adjusted R², and for classification problems, we use accuracy, F1 score etc. as performance metrics (varies with use-case). Ideally, users can use multiple different performance scores (accuracy, F1 score, recall, precision) and finally combine these into a final weighted average model performance score (discussed in Section 3.9). Table 3 shows the experimental results.”);

and a customized score component that calculates a customized risk assessment score for the artificial intelligence model based on the risk profile and measurements of the artificial intelligence model corresponding to the weighted metrics (page 7 of De: “we have described how the framework measures different aspects of the model. These scores are converted to percentages, and higher scores indicate better performance. The framework allows the users to select the set of scores to be considered and the importance weights of those scores to develop an aggregate score. This aggregate score, referred to as the AI Trust factor (AI_Score), quantifies the model’s overall performance on multiple aspects of a responsible machine learning algorithm, i.e., explainability, robustness, fairness, drift sustainability, and performance. It can be used to compare different models developed for the same problem definition.”).

Referring to claim 2, De discloses the system of claim 1, wherein the requirements component receives the risk assessment requirements from a plurality of sources (page 1 of De, where the data is from different sources such as WHO, NITI, etc.).

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-10 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over “ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models,” De et al., 12/30/2022, in view of Goodsitt et al. (US 2024/0211331 A1) (“Goodsitt”).

Referring to claim 3, De discloses the system of claim 2, wherein the computer executable components further comprise: a receiving component that receives the risk profile and a second risk profile associated with a second artificial intelligence model, wherein the risk profile is a first risk profile (page 9 of De, Table 3: results of different models with associated metrics/profiles).

De does not specifically disclose “a determination component that determines whether the first risk profile can be applied to measurements of the second artificial intelligence model.” However, Goodsitt discloses a determination component that determines whether the first risk profile can be applied to measurements of the second artificial intelligence model ([0009] of Goodsitt: “the system may process the input dataset using the chosen model and determine that the first model's performance is not above the threshold with respect to the chosen model.
In response to determining that the performance of the first model when applied to the input dataset is not above the threshold, the system may process the input dataset using the second model.” Here, when an input dataset is fed to a first model and that first model is determined to be unable to process the dataset as desired, the system instead feeds the input dataset to the second model, and vice versa. The claimed “profile” is interpreted as Goodsitt’s input dataset under the broadest reasonable interpretation (BRI), because the claimed “profile” is a profile of metrics and hence a dataset.)

De and Goodsitt are analogous art because both references concern applying datasets to fitted models. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify De’s assessment of a dataset within a weighted model with the finding of a fitting model for the inputted dataset as taught by Goodsitt. The motivation for doing so would have been to dynamically re-normalize categorization of data with respect to the models in use ([0001] of Goodsitt).

Referring to claim 4, De in view of Goodsitt discloses the system of claim 3, wherein the computer executable components further comprise: a score conversion component that, in response to a determination that the first risk profile can be applied to the measurements of the second artificial intelligence model, calculates a second converted risk assessment score for the second artificial intelligence model based on the first risk profile and the measurements of the second artificial intelligence model ([0009] of Goodsitt: “the system may process the input dataset using the chosen model and determine that the first model's performance is not above the threshold with respect to the chosen model.
In response to determining that the performance of the first model when applied to the input dataset is not above the threshold, the system may process the input dataset using the second model.” Here, when an input dataset is fed to a first model and that first model is determined to be unable to process the dataset as desired, the system instead feeds the input dataset to the second model, and vice versa.)

Referring to claim 5, De in view of Goodsitt discloses the system of claim 3, wherein, in response to a determination that the first risk profile cannot be applied to the measurements of the second artificial intelligence model, the determination component determines whether the second risk profile can be applied to the measurements of the artificial intelligence model, wherein the artificial intelligence model is a first artificial intelligence model ([0009] of Goodsitt: “the system may process the input dataset using the chosen model and determine that the first model's performance is not above the threshold with respect to the chosen model. In response to determining that the performance of the first model when applied to the input dataset is not above the threshold, the system may process the input dataset using the second model.” Here, when an input dataset is fed to a first model and that first model is determined to be unable to process the dataset as desired, the system instead feeds the input dataset to the second model, and vice versa.
The claimed “profile” is interpreted as Goodsitt’s input dataset under the broadest reasonable interpretation (BRI), because the claimed “profile” is a profile of metrics and hence a dataset.)

Referring to claim 6, De in view of Goodsitt discloses the system of claim 5, wherein the computer executable components further comprise: a score conversion component that, in response to a determination that the second risk profile can be applied to the measurements of the first artificial intelligence model, calculates a first converted risk assessment score for the first artificial intelligence model based on the second risk profile and the measurements of the first artificial intelligence model (pages 7-8 of De: “The weights selection (W) for each of the model assessment factors varies with use case for example: a disease prediction model needs to be more explainable and fair and in contrast a churn prediction model can be reasonably explainable but highly performant. The weight selections of the model assessment factors for a model for a use-case entirely depends on the importance of these factors of the use-case and usually set by the subject matter (domain) experts of the organization… As the above datasets (except Lung Cancer Detection) do not have any protected attributes, we use different datasets to assess fairness (discussed in Section 6). For regression problems, we use adjusted R², and for classification problems, we use accuracy, F1 score etc. as performance metrics (varies with use-case). Ideally, users can use multiple different performance scores (accuracy, F1 score, recall, precision) and finally combine these into a final weighted average model performance score (discussed in Section 3.9). Table 3 shows the experimental results.” Further, [0009] of Goodsitt: “the system may process the input dataset using the chosen model and determine that the first model's performance is not above the threshold with respect to the chosen model.
In response to determining that the performance of the first model when applied to the input dataset is not above the threshold, the system may process the input dataset using the second model.” Here, when an input dataset is fed to a first model and that first model is determined to be unable to process the dataset as desired, the system instead feeds the input dataset to the second model, and vice versa. The claimed “profile” is interpreted as Goodsitt’s input dataset under the broadest reasonable interpretation (BRI), because the claimed “profile” is a profile of metrics and hence a dataset.)

Referring to claim 7, De in view of Goodsitt discloses the system of claim 5, wherein, in response to a determination that the second risk profile cannot be applied to the measurements of the first artificial intelligence model, the determination component determines whether the first risk profile and the second risk profile comprise intersecting metrics ([0028]-[0029] of Goodsitt).

Referring to claim 8, De in view of Goodsitt discloses the system of claim 7, wherein the computer executable components further comprise: a score conversion component that, in response to a determination that the first risk profile and the second risk profile comprise intersecting metrics, calculates a first converted risk assessment score based on a configuration of the intersecting metrics and the measurements of the first artificial intelligence model and a second converted risk assessment score based on the configuration of the intersecting metrics and the measurements of the second artificial intelligence model (pages 7-8 of De: “The weights selection (W) for each of the model assessment factors varies with use case for example: a disease prediction model needs to be more explainable and fair and in contrast a churn prediction model can be reasonably explainable but highly performant.
The weight selections of the model assessment factors for a model for a use-case entirely depends on the importance of these factors of the use-case and usually set by the subject matter (domain) experts of the organization… As the above datasets (except Lung Cancer Detection) do not have any protected attributes, we use different datasets to assess fairness (discussed in Section 6). For regression problems, we use adjusted R², and for classification problems, we use accuracy, F1 score etc. as performance metrics (varies with use-case). Ideally, users can use multiple different performance scores (accuracy, F1 score, recall, precision) and finally combine these into a final weighted average model performance score (discussed in Section 3.9). Table 3 shows the experimental results.” Further, [0009] of Goodsitt: “the system may process the input dataset using the chosen model and determine that the first model's performance is not above the threshold with respect to the chosen model. In response to determining that the performance of the first model when applied to the input dataset is not above the threshold, the system may process the input dataset using the second model.” Here, when an input dataset is fed to a first model and that first model is determined to be unable to process the dataset as desired, the system instead feeds the input dataset to the second model, and vice versa. The claimed “profile” is interpreted as Goodsitt’s input dataset under the broadest reasonable interpretation (BRI), because the claimed “profile” is a profile of metrics and hence a dataset.)

Referring to claim 9, De in view of Goodsitt discloses the system of claim 1, further comprising: a risk profile tracking component that tracks changes to the first risk profile over time.
([0024] of Goodsitt: “By detecting drift in model selection rules, model selector system 102 may signal to administrators, for example, that the nature and behavior of data modelling is changing over time, which provides useful information for improving model efficacy and performance.”)

Referring to claim 10, De in view of Goodsitt discloses the system of claim 2, further comprising: a score tracking component that tracks changes to the customized risk assessment score over time ([0024] of Goodsitt: “By detecting drift in model selection rules, model selector system 102 may signal to administrators, for example, that the nature and behavior of data modelling is changing over time, which provides useful information for improving model efficacy and performance.”).

Referring to claim 18, De discloses a computer program product facilitating the comparison of risk assessments for multiple artificial intelligence models, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: receive a first risk profile for a first artificial intelligence model and a second risk profile for a second artificial intelligence model (page 9 of De, Table 3: results of different models with associated metrics/profiles); and calculate a first customized risk assessment score for the first artificial intelligence model based on the first risk profile and measurements associated with the first artificial intelligence model corresponding to metrics of the first risk profile (page 7 of De: “we have described how the framework measures different aspects of the model. These scores are converted to percentages, and higher scores indicate better performance. The framework allows the users to select the set of scores to be considered and the importance weights of those scores to develop an aggregate score.
This aggregate score, referred to as the AI Trust factor (AI_Score), quantifies the model’s overall performance on multiple aspects of a responsible machine learning algorithm, i.e., explainability, robustness, fairness, drift sustainability, and performance. It can be used to compare different models developed for the same problem definition.”).

Although De discloses generating a risk assessment score for an ML model (see citations above), De does not specifically disclose “generate a second converted risk assessment score for the second artificial intelligence model based on the first risk profile and measurements associated with the second artificial intelligence model corresponding to metrics of the first risk profile.” However, De in view of Goodsitt discloses generating a second converted risk assessment score for the second artificial intelligence model based on the first risk profile and measurements associated with the second artificial intelligence model corresponding to metrics of the first risk profile ([0009] of Goodsitt: “the system may process the input dataset using the chosen model and determine that the first model's performance is not above the threshold with respect to the chosen model. In response to determining that the performance of the first model when applied to the input dataset is not above the threshold, the system may process the input dataset using the second model.” Here, when an input dataset is fed to a first model and that first model is determined to be unable to process the dataset as desired, the system instead feeds the input dataset to the second model, and vice versa. The claimed “profile” is interpreted as Goodsitt’s input dataset under the broadest reasonable interpretation (BRI), because the claimed “profile” is a profile of metrics and hence a dataset.)

De and Goodsitt are analogous art because both references concern applying datasets to fitted models.
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify De’s assessment of a dataset within a weighted model with the finding of a fitting model for the inputted dataset as taught by Goodsitt. The motivation for doing so would have been to dynamically re-normalize categorization of data with respect to the models in use ([0001] of Goodsitt).

Referring to claim 19, De in view of Goodsitt discloses the computer program product of claim 18, wherein the program instructions are further executable by the processor to cause the processor to: compare the first customized risk assessment score and the second converted risk assessment score (page 7 of De: “quantifies the model’s overall performance on multiple aspects of a responsible machine learning algorithm, i.e., explainability, robustness, fairness, drift sustainability, and performance. It can be used to compare different models developed for the same problem definition.”).

Referring to claim 20, De in view of Goodsitt discloses the computer program product of claim 18, wherein the program instructions are further executable by the processor to cause the processor to: generate the first customized risk profile and the second customized risk profile based on risk assessment requirements of two or more sources (page 1 of De, where the data is from different sources such as WHO, NITI, etc.).

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: “Counterfactual Fairness,” Kusner et al., 3/8/2018: Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation.
Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.

Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice. Applicant is reminded that Internet e-mail may not be used for communication on matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail.
If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAIMEI JIANG, whose telephone number is (571) 270-1590. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela D. Reyes, can be reached at 571-270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HAIMEI JIANG/
Primary Examiner, Art Unit 2142
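For readers orienting themselves in the claim mappings above, the scoring scheme at issue in elected claim 1, together with the intersecting-metrics fallback of claims 7-8, can be sketched in a few lines. This is an illustrative reconstruction only; every function name, metric name, and weight below is hypothetical and does not come from the application, from De, or from Goodsitt.

```python
# Illustrative sketch of the weighted risk-scoring scheme described in the
# claim mappings above (claims 1, 7 and 8). All names and numbers here are
# hypothetical; this is not the applicant's or De's actual implementation.

def build_risk_profile(dimension_weights, metric_weights):
    """Combine dimension-level and metric-level weights into a single
    normalized set of per-metric weights (the claimed "risk profile")."""
    combined = {metric: dimension_weights[dim] * w
                for (dim, metric), w in metric_weights.items()}
    total = sum(combined.values())
    return {m: w / total for m, w in combined.items()}

def customized_score(profile, measurements):
    """Claim 1: weighted average of a model's measurements over the profile."""
    return sum(w * measurements[m] for m, w in profile.items())

def converted_scores(profile_a, profile_b, meas_a, meas_b):
    """Claims 7-8: when neither profile applies to the other model outright,
    fall back to the metrics the two profiles share, re-normalizing weights,
    so the two models can be compared on a common footing."""
    shared = set(profile_a) & set(profile_b)
    if not shared:
        return None
    total = sum(profile_a[m] for m in shared)
    common = {m: profile_a[m] / total for m in shared}
    return customized_score(common, meas_a), customized_score(common, meas_b)

# Example with two dimensions and three metrics (values invented):
dims = {"fairness": 0.6, "performance": 0.4}
mets = {("fairness", "demographic_parity"): 1.0,
        ("performance", "accuracy"): 0.5,
        ("performance", "f1"): 0.5}
profile = build_risk_profile(dims, mets)
score = customized_score(profile, {"demographic_parity": 0.9,
                                   "accuracy": 0.8,
                                   "f1": 0.7})  # weighted score in [0, 1]
```

The examiner's BRI position, under which the claimed "profile" reads on Goodsitt's input dataset, is easiest to evaluate against a concrete structure like `profile` above: a mapping from metric names to normalized weights rather than raw model inputs.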

Prosecution Timeline

Apr 28, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587552
TIME SERIES ANOMALY DETECTION METHOD USING GRU-BASED MODEL
2y 5m to grant Granted Mar 24, 2026
Patent 12579209
Devices, Methods, and Graphical User Interfaces for Interacting with a Web-Browser
2y 5m to grant Granted Mar 17, 2026
Patent 12541991
AUTOMATICALLY CLASSIFYING HETEROGENOUS DOCUMENTS USING MACHINE LEARNING TECHNIQUES
2y 5m to grant Granted Feb 03, 2026
Patent 12511563
QUANTUM COMPUTING TASK PROCESSING METHOD AND SYSTEM AND COMPUTER DEVICE
2y 5m to grant Granted Dec 30, 2025
Patent 12468880
METHODS AND SYSTEMS FOR PRESENTING DROP-DOWN, POP-UP OR OTHER PRESENTATION OF A MULTI-VALUE DATA SET IN A SPREADSHEET CELL
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
51%
Grant Probability
82%
With Interview (+31.9%)
4y 3m
Median Time to Grant
Low
PTA Risk
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
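The headline figures above can be sanity-checked against the career data shown on this page: the allow rate is grants divided by resolved cases, and the with-interview figure appears to add the interview lift to that base rate. A quick sketch follows; how the tool itself rounds these numbers is an assumption on our part, and only the inputs (210 granted / 415 resolved, +31.9 point lift) come from the page.

```python
# Re-deriving the projection figures from the examiner's career data shown
# above. The rounding behavior is assumed; the inputs are the page's own
# numbers (210 granted / 415 resolved, +31.9 percentage-point interview lift).
granted, resolved = 210, 415
interview_lift = 31.9                        # percentage points

base_rate = granted / resolved * 100         # 50.60..., displayed as 51%
with_interview = base_rate + interview_lift  # 82.50..., displayed as 82%
```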
