Prosecution Insights
Last updated: April 19, 2026
Application No. 18/612,603

MECHANISM FOR AUTOMATED DETERMINATION AND EXCHANGE OF TRUST CREDENTIALS FOR COMPUTATIONAL DECISION SYSTEMS

Status: Final Rejection (§101, §103, §112)
Filed: Mar 21, 2024
Examiner: BOYCE, ANDRE D
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Databricks Inc.
OA Round: 2 (Final)

Grant Probability: 36% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 7m
Grant Probability With Interview: 56%

Examiner Intelligence

Career Allow Rate: 36% (224 granted / 620 resolved; -15.9% vs TC avg)
Interview Lift: +19.8% (strong; among resolved cases with an interview)
Avg Prosecution: 4y 7m typical timeline
Currently Pending: 41 applications
Career History: 661 total applications across all art units

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      33.6%        -6.4%
§103      34.1%        -5.9%
§102      17.5%        -22.5%
§112      10.8%        -29.2%

Comparisons are against the Tech Center average estimate. Based on career data from 620 resolved cases.
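As a quick sanity check, the headline examiner figures above can be recomputed from the raw counts. This is a minimal illustrative sketch; the function names are ours (not part of any analytics API), and the dashboard's +19.8% lift presumably uses an unrounded with-interview rate, so the recomputed value differs by a tenth of a point.

```python
# Recompute the dashboard's headline examiner statistics from raw counts.
# Illustrative only: function names are hypothetical, not a real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with_interview: float, base_rate: float) -> float:
    """Percentage-point gain in allowance rate from conducting an interview."""
    return rate_with_interview - base_rate

career = allow_rate(224, 620)          # 224 granted / 620 resolved
lift = interview_lift(56.0, career)    # 56% with interview vs career base

print(f"Career allow rate: {career:.1f}%")  # 36.1%, shown as 36% above
print(f"Interview lift:    +{lift:.1f}%")   # ~+19.9%, shown as +19.8% above
```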

Office Action

§101 §103 §112
DETAILED ACTION

Response to Amendment

This Final Office action is in response to Applicant’s amendment filed 10/14/2025. Claims 1, 8, 10, 16 and 18 have been amended. Claims 1-20 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Applicant's arguments filed 10/14/2025 have been fully considered but they are not persuasive. Additionally, Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. The previously pending objections to claims 8 and 16 have been withdrawn.

Specification

The disclosure is objected to because of the following informalities: Paragraph 0076 of the specification recites “The disclosed configuration beneficially enables enterprises consume AI-driven software by removing the need to separately assess risk profiles of such products, and allow for leverage of AI-drive software within key business critical functions of the enterprise. a credential scoring mechanism into a cloud environment artificial intelligence modeling system to enable a trust credential. The trust credential may be used to provide a guardrail to help develop secure/risk-averse models by enabling insights into which parts of their build was responsible for a lower score” (emphasis added). The first sentence of the paragraph seems to be grammatically incorrect, and the second sentence fails to capitalize the first letter. Appropriate correction is required.

Claim Objections

Claim 1 is objected to because of the following informalities: The claim recites “or a use of a model for the machine-learning application”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Amended independent claims 1, 10 and 18 recite “deploy[ing] an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application.” Paragraph 0023 of the specification merely recites that “The model serving system 170 deploys one or more machine-learning models.
The machine-learning models may include regression models, classification models, clustering models, neural networks, reinforcement learning models, or any suitable combination thereof.” As a result, the specification does not seem to support “deploy[ing] an artificial intelligence (AI)-driven application on one or more cluster computing systems.” Clarification is required. Dependent claims 2-9, 11-17, 19 and 20 are rejected based upon the same rationale.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to an abstract idea without significantly more. Here, under step 1 of the Alice analysis, method claims 1-9 are directed to a series of steps, computer-readable medium claims 10-17 are directed to stored instructions, and system claims 18-20 are directed to a processor system, and a memory system comprising stored instructions. Thus the claims are directed to a process, manufacture, and machine, respectively.

Under step 2A Prong One of the analysis, the claimed invention is directed to an abstract idea without significantly more. The claims recite generating a trust credential, including deploying, identifying, receiving, applying, and generating steps. The limitations of deploying, identifying, receiving, applying, and generating, are a process that, under its broadest reasonable interpretation, covers organizing human activity concepts, but for the recitation of generic computer components.
Specifically, the claim elements recite identifying one or more risk factors; receiving a request to generate a trust credential for a machine-learning application; receiving the machine-learning application and associated data, wherein the machine-learning application has one or more subcomponents; applying a risk determination function to each of the one or more subcomponents of the machine-learning application and the associated data to generate a risk score for each of the one or more subcomponents, wherein the risk determination function evaluates the one or more subcomponents of the machine-learning application with respect to the one or more risk factors, wherein the risk score for each of the one or more subcomponents indicate one or a combination of a quality of a training dataset that is used to train the machine-learning application, or a use of a model for the machine-learning application; applying a weighting function to the risk score of each subcomponent to generate a trust score for each of the one or more subcomponents, wherein the weighting function applies a weight to a risk score based on a source of a subcomponent of the machine-learning application; generating the trust credential for the machine-learning application based on the trust scores of each of the one or more subcomponents; and display the trust credential to a user of the request. That is, other than reciting deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, a user interface (UI) element, a processor system and a memory system comprising stored instructions, in claims 10-20 only, the claim limitations merely cover commercial interactions, including business relations, thus falling within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea. 
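For orientation (and setting the eligibility dispute aside), the limitations recited above describe a concrete scoring pipeline: a per-subcomponent risk score computed against identified risk factors, a source-based weighting into trust scores, and aggregation into a trust credential. The following is a hypothetical reading of the claim language only; all names, weights, and the toy aggregation are illustrative assumptions, not the applicant's disclosed implementation.

```python
# Minimal sketch of the claim-1 pipeline as recited above. All names and the
# toy scoring math are hypothetical illustrations of the claim language, not
# the applicant's actual implementation.
from dataclasses import dataclass

@dataclass
class Subcomponent:
    name: str
    source: str                            # e.g. "certified" or "non-certified"
    risk_factor_values: dict[str, float]   # value per identified risk factor

def risk_determination(sub: Subcomponent, factor_weights: dict[str, float]) -> float:
    """Per the claim-3 reading: weighted sum of the subcomponent's risk factor values."""
    return sum(factor_weights[f] * v for f, v in sub.risk_factor_values.items())

def weighting_function(risk_score: float, source: str) -> float:
    """Per the claim-6 reading: a greater weight for certified sources."""
    weight = 1.0 if source == "certified" else 0.5
    return weight * risk_score

def trust_credential(subs: list[Subcomponent], factor_weights: dict[str, float]) -> float:
    """Aggregate per-subcomponent trust scores into a single credential value."""
    trust_scores = [
        weighting_function(risk_determination(s, factor_weights), s.source)
        for s in subs
    ]
    return sum(trust_scores) / len(trust_scores)

# Toy example: two subcomponents scored on training-data quality and model use.
factors = {"training_data_quality": 0.6, "model_use": 0.4}
app = [
    Subcomponent("tokenizer", "certified",
                 {"training_data_quality": 0.9, "model_use": 0.8}),
    Subcomponent("scorer", "non-certified",
                 {"training_data_quality": 0.5, "model_use": 0.4}),
]
print(f"trust credential: {trust_credential(app, factors):.3f}")
```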
Under Step 2A Prong Two, the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This judicial exception is not integrated into a practical application. The claims include deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, a user interface (UI) element, a processor system and a memory system comprising stored instructions. The deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, user interface (UI) element, processor system and memory system comprising stored instructions in the steps is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As a result, the claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, a user interface (UI) element, a processor system and a memory system comprising stored instructions amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
None of the dependent claims recite additional limitations that are sufficient to amount to significantly more than the abstract idea. Claims 2-4 further describe identifying the one or more risk factors, applying the risk determination function to a subcomponent, and a subcomponent. Claims 5-8 further describe the one or more risk factors, the weighting function, generating a trust credential, and generating the trust score. Claim 9 recites an additional applying step. Similarly, dependent claims 11-17, 19 and 20 recite additional details that further restrict/define the abstract idea. A more detailed abstract idea remains an abstract idea.

Under step 2B of the analysis, the claims include, inter alia, deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, a user interface (UI) element, a processor system and a memory system comprising stored instructions. As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. There isn’t any improvement to another technology or technical field, or the functioning of the computer itself. Moreover, individually, there are not any meaningful limitations beyond generally linking the abstract idea to a particular technological environment, i.e., implementation via a computer system. Further, taken as a combination, the limitations add nothing more than what is present when the limitations are considered individually. There is no indication that the combination provides any effect regarding the functioning of the computer or any improvement to another technology.
In addition, as discussed in paragraph 0072 of the specification, “The example computer system 700 includes a processing system including one or more processing units (generally processor 702). The processor 702 is, for example, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a controller, a state machine, one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any combination of these. The processor executes an operating system for the computing system 700. The computer system 700 also includes a main memory 704. In some embodiments, the main memory 704 is a memory system including one or more memories. The computer system may include a storage unit 716. The processor 702, memory 704, and the storage unit 716 communicate via a bus 708.” As such, this disclosure supports the finding that no more than a general purpose computer, performing generic computer functions, is required by the claims. Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank Int’l et al., No. 13-298 (U.S. June 19, 2014).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shivanna et al (US 20220129561 A1), in view of Grajek (US 20240195819 A1).

As per claim 1, Shivanna et al disclose a method, comprising: identifying one or more risk factors (i.e., the risk assessment engine 134 determines overall risk scores for the corresponding software components on the list based on intrinsic features (e.g., attributes) of the software components, ¶ 0022); receiving a request to generate a trust credential for the machine-learning application (i.e., the risk assessment engine 134 determines the overall risk score for a given software component based on a trust score that represents a determined degree of trust, ¶ 0022); receiving the machine-learning application and associated data, wherein the machine-learning application has one or more subcomponents (i.e., the risk assessment engine 134, in accordance with example implementations, receives an input 204 that describes a profile of a software product and a list of candidate open source software components to be included in the software product, ¶ 0029); applying a risk determination function to each of the one or more subcomponents of the machine-learning application and the associated data to generate a risk score for each of the one or more subcomponents, wherein the risk determination function evaluates the one or more
subcomponents of the machine-learning application with respect to the one or more risk factors (i.e., the risk assessment engine 134 determines a trust score for the open software component based on a number of parameters and further determines a security level score for the open source software component based on a number of parameters; and the risk assessment engine determines an overall risk score for the open source software component, ¶ 0031), wherein the risk score for each of the one or more subcomponents indicate one or a combination of a quality of a training dataset that is used to train the machine-learning application, or a use of a model for the machine-learning application (i.e., The machine learning classifier 142 determines the overall risk score based on the values of parameters that represent trust and security related features of the open source software component. In accordance with some implementations, the machine learning classifier 142 may replace any missing values (e.g., values pertaining to feature parameters that are unavailable for a particular open source software component) with median, mode and/or mean values during a data cleansing operation.
The machine learning classifier 142 may be trained (i.e., its model may be trained) using, for example, trust and security level feature parameters associated with relatively well-known open source software components with the overall risk score being assigned by a subject matter expert, ¶ 0033); applying a weighting function to the risk score of each subcomponent to generate a trust score for each of the one or more subcomponents, wherein the weighting function applies a weight to a risk score based on a source of a subcomponent of the machine-learning application (i.e., determining the overall risk score includes assigning a first plurality of weights to components of the trust to provide a first plurality of values; assigning a second plurality of weights to components of the security level to provide a second plurality of values; and determining the overall risk score, ¶ 0065); generating the trust credential for the machine-learning application based on the trust scores of each of the one or more subcomponents in real time (i.e., The process 600 includes determining (block 612) a security context of the software product. Based on the security level, the trust and the security context, the process 600 includes providing (block 616) a recommendation for the given software component, ¶ 0058, wherein the knowledge database 214 may be a dynamically populated database using a combination of automated threat and security collector components 220, such as security collectors (e.g., open source software (OSS) crawlers 221). As depicted in FIG. 2, this may involve the automated crawling of open source community repositories and portals 264, such as certain communities/contributors 266 ¶ 0039); and generating a user interface (UI) element to display the trust credential to a user of the request (i.e., the overall risk scores and output of the expert system 250 with the corresponding recommendations may be communicated back to the user who submitted the input 204. 
In accordance with example implementations, this output may contain a risk assessment score for each open source software component (e.g., a score of “high,” “medium,” or “low”), ¶ 0043).

Shivanna et al do not disclose deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application. Grajek discloses: FIG. 1 is a diagrammatic representation of a cloud internet environment 100 in which some example embodiments of the present disclosure may be implemented or deployed. One or more application servers 104 provide server-side functionality via an internet/cloud-network 102 to a networked user device, in the form of a client device 106. A user 128 (e.g., an administrator, a risk manager, a reviewer) operates the client device 106. The client device 106 includes a web client 110, a programmatic client 108 that is hosted and executed on the client device 106 (¶ 0041). An Application Program Interface (API) server 118 and a web server 120 provide respective programmatic and web interfaces to application servers 104. A specific application server 116 hosts an Identity Trust Scoring System 122. The Identity Trust Scoring System 122 includes components, modules and/or applications (¶ 0042). The Identity Trust Scoring System 122 retrieves metadata from remote IAM systems (e.g., cloud-based IAM server system 112 and on-premise IAM server system 130) and generates a scoring based on models (¶ 0043). In one example embodiment, the Identity Trust Scoring System 122 trains several machine learning models based on features of the aggregated metadata from cloud-based IAM server system 112 and/or on-premise IAM server system 130 (¶ 0044). Shivanna et al and Grajek are concerned with effective risk management.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application in Shivanna et al, as seen in Grajek, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 2, Shivanna et al disclose leveraging a prescriptive analytics model to determine whether a risk factor may be a foundational risk (i.e., the risk assessment engine 134 may use a particular machine learning classifier 142 to determine an overall risk score (e.g., a particular risk score level, such as “high,” “medium,” or “low”), ¶ 0032).

As per claim 3, Shivanna et al disclose determining a risk factor value associated with each of the one or more risk factors; determining a weight for each of the one or more risk factors; applying a corresponding weight to each of the risk factor values to generate weighted risk factor values; and computing the risk score of the subcomponent based on the weighted risk factor values (i.e., determining the overall risk score includes assigning a first plurality of weights to components of the trust to provide a first plurality of values; assigning a second plurality of weights to components of the security level to provide a second plurality of values; and determining the overall risk score based on the first plurality of weighted values and the second plurality of weighted values, ¶ 0065).
As per claim 4, Shivanna et al disclose a subcomponent is a software module configured to perform a specific function (i.e., the risk assessment engine 134 determines overall risk scores for the corresponding software components on the list based on intrinsic features (e.g., attributes) of the software components, ¶ 0022).

As per claim 5, Shivanna et al disclose each of the one or more risk factors are assigned weights based on an adaptive combination of numerical context evaluation, probabilistic rating value, and deterministic impact rating based on prior occurrences (i.e., the risk assessment engine 134 determines the overall risk score for the open source software component as an inverse function of the security level and trust scores. For example, the overall risk score may be higher (e.g., an overall risk score of “90” when a numeric range of “0 to 100” is used or “high” for the qualitative levels of “low,” “medium” and “high”), ¶ 0031).

As per claim 6, Shivanna et al disclose the weighting function applies a smaller weight to a risk score corresponding to a subcomponent from a non-certified source, and applies a greater weight to a risk score corresponding to a subcomponent from a certified source (i.e., assigning a first plurality of weights to components of the trust to provide a first plurality of values; assigning a second plurality of weights to components of the security level to provide a second plurality of values; and determining the overall risk score based on the first plurality of weighted values and the second plurality of weighted values. A particular advantage is that more importance may be attributed to more important or relevant trust components and security level components, ¶ 0065).
As per claim 7, Shivanna et al disclose validating risks relevant to model-type; evaluating applicability of risk factor; determining model susceptibility to each applicable risk; and aggregating residual risk impact values (i.e., the overall risk score for a given software component based on a trust score that represents a determined degree of trust (e.g., a trust score derived from the evaluation of trust parameters, such as parameters that represent whether the source code is signed, whether there is support for a secure download connection, whether a community associated with the component is obsolete, and so forth) and a security score that represents a determined degree of security risk (e.g., a security level score derived from the evaluation of security level parameters, ¶ 0022).

As per claim 8, Shivanna et al disclose collating the trust credential with a standardized framework-based scoring to create a finalized adaptive trust score (i.e., the overall risk score for the open source software component as an inverse function of the security level and trust scores. For example, the overall risk score may be higher (e.g., an overall risk score of “90” when a numeric range of “0 to 100” is used or “high” for the qualitative levels of “low,” “medium” and “high”) when the security level score is relatively low (e.g., security level score of “30” when a numeric range of “0 to 100” is used or “low” for the qualitative levels of “low,” “medium” and “high”) and the trust score (e.g., a trust score of “40” when a numeric range of “0 to 100” is used or “low” for the qualitative levels of “low,” “medium” and “high”), ¶ 0031).

As per claim 9, Shivanna et al disclose applying a conversion function to the trust credential of the machine-learning application to generate a standardized trust credential (i.e., the overall risk score for the open source software component as an inverse function of the security level and trust scores.
For example, the overall risk score may be higher (e.g., an overall risk score of “90” when a numeric range of “0 to 100” is used or “high” for the qualitative levels of “low,” “medium” and “high”) when the security level score is relatively low (e.g., security level score of “30” when a numeric range of “0 to 100” is used or “low” for the qualitative levels of “low,” “medium” and “high”) and the trust score (e.g., a trust score of “40” when a numeric range of “0 to 100” is used or “low” for the qualitative levels of “low,” “medium” and “high”), ¶ 0031).

Claims 10-17 are rejected based upon the same rationale as the rejection of claims 1-3 and 5-9, respectively, since they are the computer-readable medium claims corresponding to the method claims. Claims 18-20 are rejected based upon the same rationale as the rejection of claims 1-3, respectively, since they are the system claims corresponding to the method claims.

Response to Arguments

In the Remarks, Applicant argues: Under Prong One of Step 2A of the subject matter eligibility analysis, the claims are necessarily rooted in computer technology, as the claims specify an AI-driven application deployed on one or more cluster computing systems and deploying a machine-learning application, and generating the trust credential for the machine-learning application in real time. The claims also recite generating a UI element to display the trust credential to a user of the request. Therefore, since the claims are rooted in computer technology, the claims do not recite a judicial exception, including methods of organizing human activity. Under Prong Two of Step 2A of the subject matter eligibility analysis, the claims integrate any judicial exception into a practical application. Specifically, the claims generate a way to assess the risk profile of an AI-driven application and generating a trust credential for the application. See specification, [0013]-[0014].
"This allows for the generation of a trust credential of an AI-driven application in real time, allowing a developer to build a secure and risk-averse application and allowing interested ... entities to assess the risk profile of an application." Id. This is a practical application in the technical field of AI-driven applications.

The Examiner respectfully disagrees. As an initial point, the Examiner notes that method claims 1-9 fail to recite any computing components implementing the method steps, as described in computer-readable medium and system claims 10-20. As described in paragraph 0002 of the specification, “However, widespread adoption of AI and ML applications is impeded by several challenges, including concerns regarding AI-specific risks, such as ethical issues, bias, transparency, morality, self-awareness, and more. However, a greater hurdle that AI and ML applications face is establishing trust. Thus, a method for assessing potential risks of an AI-driven application would be greatly advantageous for enterprise customers and developers.” Additionally, paragraph 0021 recites “For example, enterprise customers looking to integrate AI-driven Software as a Service (SaaS) applications into their business processes require a means for evaluating a risk profile of the application without having any visibility or understanding of the quality of the machine-learned models or algorithms used to develop the application. Furthermore, since AI-driven applications can constantly evolve from retraining an underlying model or modifying an underlying logic, it is essential to reevaluate the trustworthiness of the application.” Moreover, paragraph 0076 recites “The disclosed configuration beneficially enables enterprises consume AI-driven software by removing the need to separately assess risk profiles of such products, and allow for leverage of AI-drive software within key business critical functions of the enterprise.
a credential scoring mechanism into a cloud environment artificial intelligence modeling system to enable a trust credential. The trust credential may be used to provide a guardrail to help develop secure/risk-averse models by enabling insights into which parts of their build was responsible for a lower score.” As such, and contrary to Applicant’s assertion, the claim limitations indeed cover commercial interactions, including business relations, thus falling within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Under Step 2A Prong Two, the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. 2019 PEG Section III(A)(2), 84 Fed. Reg. at 54-55. Besides the abstract idea, the claims include deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, a user interface (UI) element, a processor system and a memory system comprising stored instructions. The deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, a user interface (UI) element, a processor system and a memory system comprising stored instructions in the steps is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using a generic computer component.
These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of these computer components does not affect this analysis. See MPEP 2106.05(I) for more information on this point, including explanations from judicial decisions including Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224-26 (2014).

Even when viewed in combination, the additional elements in the claims do no more than use computer components as a tool (i.e., deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application, a user interface (UI) element, a processor system and a memory system comprising stored instructions). There is no change to the computers and/or other technology recited in the claims; thus the claims do not improve computer functionality or other technology. See, e.g., Trading Technologies Int’l v. IBG, Inc., 921 F.3d 1084, 1093 (Fed. Cir. 2019) (using a computer to provide a trader with more information to facilitate market trades improved the business process of market trading, but not the computer) and the cases discussed in MPEP 2106.05(a)(I), particularly FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095 (Fed. Cir. 2016) (accelerating a process of analyzing audit log data is not an improvement when the increased speed comes solely from the capabilities of a general-purpose computer) and Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055 (Fed. Cir. 2017) (using a generic computer to automate a process of applying to finance a purchase is not an improvement to the computer’s functionality). 
Accordingly, the claim as a whole does not integrate the recited judicial exception into a practical application and the claim is directed to the judicial exception.

Applicant also argues that Shivanna fails to disclose or suggest the feature of "deploying an artificial intelligence (AI)-driven application on one or more cluster computing systems, wherein the AI-driven application deploys a machine-learning application...applying a risk determination function to each of the one or more subcomponents of the machine-learning application and the associated data to generate a risk-score for each of the one or more subcomponents…wherein the risk score for each of the one or more subcomponents indicate one or a combination of a quality of a training dataset that is used to train the machine-learning application, or a use of a model for the machine-learning application," as recited in claim 1. As discussed in the updated rejection, Shivanna et al. in view of Grajek indeed disclose Applicant’s amended claim language.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE D BOYCE, whose telephone number is (571) 272-6726. The examiner can normally be reached M-F 10a-6:30p.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao (Rob) Wu, can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDRE D BOYCE/
Primary Examiner, Art Unit 3623
January 22, 2026
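The reply-period arithmetic in the action's Conclusion can be sketched as a small date computation. This is only an illustrative sketch: it uses the January 22, 2026 date shown on the action's face, and `add_months` is a hypothetical helper (calendar months, clamped to month end), not USPTO-published logic.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailing_date = date(2026, 1, 22)                 # date on the action's face
shortened_period = add_months(mailing_date, 3)   # THREE-MONTH shortened period
statutory_maximum = add_months(mailing_date, 6)  # absolute SIX-MONTH limit

print(shortened_period)   # 2026-04-22
print(statutory_maximum)  # 2026-07-22
```

Note that, per the action, an earlier advisory-action mailing date can shift the extension-fee calculation, which this sketch does not model.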

Prosecution Timeline

Mar 21, 2024
Application Filed
Jul 12, 2025
Non-Final Rejection — §101, §103, §112
Oct 14, 2025
Response Filed
Jan 23, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524722
ISSUE TRACKING METHODS FOR QUEUE MANAGEMENT
2y 5m to grant Granted Jan 13, 2026
Patent 12488363
TREND PREDICTION
2y 5m to grant Granted Dec 02, 2025
Patent 12475421
METHODS AND INTERNET OF THINGS SYSTEMS FOR PROCESSING WORK ORDERS OF GAS PLATFORMS BASED ON SMART GAS OPERATION
2y 5m to grant Granted Nov 18, 2025
Patent 12423719
TREND PREDICTION
2y 5m to grant Granted Sep 23, 2025
Patent 12423637
SYSTEMS AND METHODS FOR PROVIDING DIAGNOSTICS FOR A SUPPLY CHAIN
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
56%
With Interview (+19.8%)
4y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 620 resolved cases by this examiner. Grant probability derived from career allow rate.
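The headline projections can be reproduced from the examiner's career counts shown above (224 granted of 620 resolved). The tool's exact methodology is not published, so this sketch assumes the simplest model: the "with interview" figure is the base allow rate plus the reported interview lift.

```python
# Hypothetical recomputation of the dashboard's headline figures.
# Counts come from the page; the additive interview model is an assumption.
granted, resolved = 224, 620

allow_rate = granted / resolved               # career allow rate
interview_lift = 0.198                        # reported lift with interview
with_interview = allow_rate + interview_lift  # assumed additive model

print(round(allow_rate * 100))      # 36  -> "Grant Probability"
print(round(with_interview * 100))  # 56  -> "With Interview"
```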
