Prosecution Insights
Last updated: April 17, 2026
Application No. 18/822,058

METHODS AND SYSTEMS FOR RESOURCE CONTROL USING MACHINE LEARNING ANALYSIS OF RESOURCE OUTPUT

Non-Final OA: rejections under §101, §102, §103, and §112

Filed: Aug 30, 2024
Examiner: FARAMARZI, GITA
Art Unit: 2496
Tech Center: 2400 — Computer Networks
Assignee: unknown
OA Round: 1 (Non-Final)

Grant Probability: 53% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 4m
Grant Probability with Interview: 75%

Examiner Intelligence

Career Allow Rate: 53% (40 granted / 75 resolved; -4.7% vs Tech Center average)
Interview Lift: +21.5% (strong); resolved cases with an interview are allowed at a notably higher rate than those without
Typical Timeline: 3y 4m average prosecution; 33 applications currently pending
Career History: 108 total applications across all art units
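The figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they are derived (illustrative only; the report does not show the underlying with/without-interview case counts, so the lift helper just takes the two rates as inputs):

```python
# Career allow rate: granted / resolved, as a percentage.
granted, resolved = 40, 75
allow_rate = 100 * granted / resolved  # ~53.3%, displayed rounded as 53%

# Interview lift: allow rate among resolved cases that had an examiner
# interview, minus the rate among those that did not (percentage points).
def interview_lift(rate_with: float, rate_without: float) -> float:
    return rate_with - rate_without

print(round(allow_rate, 1))  # 53.3
```

The -4.7% figure is the same kind of difference, taken against the Tech Center average rather than against the examiner's no-interview baseline.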

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§103: 56.6% (+16.6% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 75 resolved cases.
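Each delta is the examiner's rate minus the Tech Center average, so the implied TC average can be recovered by subtraction. A quick check (illustrative; note that every statute row implies the same ~40% baseline, consistent with a single estimated TC average):

```python
# (examiner rate, delta vs Tech Center average), in percent, per statute
stats = {
    "§101": (8.1, -31.9),
    "§102": (5.0, -35.0),
    "§103": (56.6, 16.6),
    "§112": (29.4, -10.6),
}

# Implied TC average = examiner rate - delta
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)  # each statute implies a TC average of 40.0
```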

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The following is a Non-Final Office Action in response to applicant’s remarks filed on 08/30/2024. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on February 17, 2025 and January 7, 2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claims 5 and 15 are objected to because of the following informalities: claim 5 recites “the another descriptive statement”. The phrase “the another” is grammatically improper and introduces ambiguity as to the antecedent basis of the referenced element. Appropriate correction is required. The same reasons apply to dependent claim 15.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet, within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid phrases which can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided. 
The abstract of the disclosure is objected to because the abstract relies heavily on parenthetical and circular language, e.g., “(where the resource output is an output produced by a computing resource)” and “(where the representation is generated by the machine learning system…)”, which does not add technical substance and obscures the actual operation of the disclosed methods and systems. Correction is required. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method for auctioning goods or services, which is considered a judicial exception because it falls under Certain Methods of Organizing Human Activity, such as commercial or legal interactions including sales activities. This judicial exception is not integrated into a practical application, as discussed below, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as discussed below.

This part of the eligibility analysis evaluates whether the claim falls within any statutory category. MPEP 2106.03. In claim 6, the claims recite at least one step or act, including creating, deriving, and constructing. Thus, the claim is to a process, which is one of the statutory categories of invention.

Analysis

Step 1 (Statutory Categories) — 2019 PEG pg. 53: Claims 1-12 are directed to the statutory categories of invention.

Step 2A, Prong 1 (Do the claims recite an abstract idea?) — 2019 PEG pg. 54: Claim 1 recites the following types of subject matter that are judicial exceptions. Abstract idea (mental processes and data manipulation/analysis): “acquiring a resource output, generating a representation of the resource output, and in response to an analysis of the representation against a representational statement, performing an operation, wherein the representational statement is in a representational language”. These limitations recite acquiring a resource output, generating a representation, and performing an operation, concepts that fall within the judicial exception of “mental processes” (see PEG Step 2A, examples and categories of abstract ideas). These steps constitute data analysis, evaluation, and decision-making, which fall within the mental-process and data-analysis grouping of abstract ideas. Moreover, the recitation of “machine learning” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 1 recites a judicial exception.

Step 2A, Prong 2 (Does the claim recite additional elements that integrate the judicial exception into a practical application?) — 2019 PEG pg. 54: Although claim 1 recites a “computer implemented method”, “machine learning”, and a computing resource, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea (e.g., the computing resource performs the “acquiring” as a form of data gathering). In particular, claim 1 does not recite a specific technical problem in computer technology. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?) — 2019 PEG pg. 56: The additional elements recited in claim 1, such as a computer system, a machine learning system, and generic computer implementation, are routine and conventional components performing their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 1 is not patent eligible.

Dependent claims 2-12 are rejected by virtue of their dependency from independent claim 1.

Dependent claims — analysis and reasons for rejection:

Claim 2 recites “producing a determination, wherein the computing resource is a software application, the resource output is an application output of the software application, the representation of the resource output is a first representational statement in the representational language, the representational statement is a second representational statement in the representational language, the first representational statement is generated by the machine learning system, the machine learning system generates the first representational statement based, at least in part, on the resource output, the producing the determination comprises the performing the analysis of the first representational statement against a second representational statement, and producing the determination based, at least in part, on the analysis, and the operation is performed in response to the determination”.

Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea identified therein. Accordingly, “obtaining application output, generating a representation, comparing a first representational statement, producing a determination, and performing an operation” necessarily represents the result of an evaluation, such as collecting, analyzing, comparing, and making a decision based on the comparison, which constitutes data analysis and decision making, a category of abstract ideas. 
Moreover, the recitation of “machine learning” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 2 recites a judicial exception.

Step 2A, Prong 2: Although claim 2 recites a “computing resource” and “machine learning”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 2 does not recite a specific technical problem in computer technology. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 2, such as a software application, a machine learning system, a representational statement, and generic computer implementation, are routine and conventional components performing their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 2 is not patent eligible.

Claim 3 recites “wherein the operation affects one or more functionalities provided by the software application, or the operation of the software application”.

Step 2A, Prong 1: This claim depends from claim 2 and incorporates the abstract idea identified therein. Accordingly, “operation affects one or more functionalities provided by the software application, and operation of the software application” necessarily represents the result of an evaluation, such as analyzing, evaluating, and decision making, which are categories of abstract ideas. Moreover, the recitation of “machine learning” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 3 recites a judicial exception.

Step 2A, Prong 2: Although claim 3 recites an “operation [that] affects one or more functionalities provided by the software application, and operation of the software application”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 3 does not specify how the functionality is affected. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 3, such as a computer system, perform their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 3 is not patent eligible.

Claim 4 recites “wherein the application output is an image presented in a window of a graphical user interface of an endpoint computing system, and the graphical user interface is displayed on a display of the endpoint computing system”.

Step 2A, Prong 1: This claim depends from claim 2 and incorporates the abstract idea identified therein. Accordingly, “obtaining information, representing and analyzing, and making a determination” necessarily represents the result of an evaluation, such as collecting, analyzing, comparing, and making a decision based on the comparison, which constitutes data analysis and decision making, a category of abstract ideas. Moreover, the recitation of “machine learning” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 4 recites a judicial exception.

Step 2A, Prong 2: Although claim 4 recites “an image displayed in a window” and a “graphical user interface on a display”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 4 does not recite a specific technical problem in computer technology. Instead, the claim merely applies the abstract idea using generic computer components, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 4, such as a graphical user interface on a display, are routine and conventional components performing their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 4 is not patent eligible.

Claim 5 recites “wherein the first representational statement is a descriptive statement that describes the image, the second representational statement is another descriptive statement that describes an administrative policy, and the descriptive statement and the another descriptive statement are in a natural language”.

Step 2A, Prong 1: This claim depends from claim 4 and incorporates the abstract idea identified therein. Accordingly, “generating a descriptive statement, generating another descriptive statement, natural language” necessarily represents the result of information analysis and characterization, which constitutes data analysis and decision making, a category of abstract ideas. Moreover, the recitation of “computing system” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 5 recites a judicial exception. 
Step 2A, Prong 2: Although claim 5 recites “descriptive statements are in natural language… describes an administrative policy”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 5 does not recite a specific technical problem in computer technology or in natural language processing. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 5, such as natural language descriptions, recite information without applying it in a concrete manner. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 5 is not patent eligible.

Claim 6 recites “generating the second representational statement”.

Step 2A, Prong 1: This claim depends from claim 2 and incorporates the abstract idea identified therein. Accordingly, “generating … representational statement” necessarily represents the result of information creation and analysis, which falls within a category of abstract ideas. Moreover, the recitation of “computing system” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 6 recites a judicial exception.

Step 2A, Prong 2: Although claim 6 recites “generating … representational statement”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 6 does not recite a specific technical problem in computer technology or in natural language processing. Instead, the claim merely applies the abstract idea using generic computer components, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 6, such as “generating … representational statement”, do not amount to significantly more than the abstract idea. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 6 is not patent eligible.

Claim 7 recites “the second representational statement is generated by the machine learning system”.

Step 2A, Prong 1: This claim depends from claim 6 and incorporates the abstract idea identified therein. Accordingly, “representational statement is generated, and performing an operation…” necessarily represents the result of an evaluation, such as creating and formulating, which falls within a category of abstract ideas. Moreover, the recitation of “machine learning” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 7 recites a judicial exception.

Step 2A, Prong 2: Although claim 7 recites a “machine learning system”, this element is described at a high level of generality and does not impose a meaningful limitation on the abstract idea. In particular, claim 7 does not recite a specific technical problem in computer technology. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 7, such as the machine learning system, a representational statement, and generic computer implementation, are routine and conventional components performing their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 7 is not patent eligible.

Claim 8 recites “the second representational statement is a security policy, and the operation is an access control operation”.

Step 2A, Prong 1: This claim depends from claim 7 and incorporates the abstract idea identified therein. Accordingly, “the second representational statement, security policy, and an access control operation” necessarily represents the result of an evaluation, such as collecting, analyzing, characterizing, and performing an action based on the analysis, which falls within a category of abstract ideas. Accordingly, claim 8 recites a judicial exception.

Step 2A, Prong 2: Although claim 8 recites “the second representational statement, security policy, and an access control operation”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 8 does not recite a specific access control mechanism or protocol. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 8, such as characterizing a representational statement as a security policy and performing an access control operation, do not amount to significantly more than the abstract idea. Accordingly, under Step 2B of the PEG, claim 8 is not patent eligible. 
Claim 9 recites “wherein the second representational statement is generated by a conversational machine learning system, and the conversational machine learning system generates the second representational statement, at least in part, by communicating with a security administrator, using the representational language”.

Step 2A, Prong 1: This claim depends from claim 6 and incorporates the abstract idea identified therein. Accordingly, “the second representational statement, a conversational machine learning system, and the representational language” necessarily represents the result of an evaluation, such as generating information and expressing that information in a representational language, which constitutes data collection and conversation, a category of abstract ideas. Moreover, the recitation of “machine learning” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 9 recites a judicial exception.

Step 2A, Prong 2: Although claim 9 recites “conversational machine learning” and a “security administrator”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 9 does not recite a specific technical problem in computer technology. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Moreover, claim 9 merely adds human-in-the-loop conversational input to the abstract process of generating and analyzing information, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 9, such as a machine learning system, a representational statement, and generic computer implementation, are routine and conventional components performing their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 9 is not patent eligible.

Claim 10 recites “wherein the representational language is a natural language, and the communicating is performed using the natural language”.

Step 2A, Prong 1: This claim depends from claim 9 and incorporates the abstract idea identified therein. Accordingly, “the representational language and natural language” necessarily represents the result of communicating information and conducting an interaction, such as human communication and interaction, which falls within a category of abstract ideas. Accordingly, claim 10 recites a judicial exception.

Step 2A, Prong 2: Although claim 10 recites “the representational language and natural language”, these elements are described at a high level of generality and do not impose a meaningful limitation on the abstract idea. In particular, claim 10 does not recite a specific technical problem related to NLP. Instead, the claim merely applies the abstract idea using generic computer components, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 10, such as the representational language and natural language, do not amount to significantly more than the abstract idea. Accordingly, under Step 2B of the PEG, claim 10 is not patent eligible. 
Claim 11 recites “wherein the analysis of the first representational statement against the second representational statement is performed by the machine learning system, and the operation affects execution of the software application by virtue of at least one of the execution of the software application being permitted to continue, or the execution of the software application being terminated”.

Step 2A, Prong 1: This claim depends from claim 2 and incorporates the abstract idea identified therein. Accordingly, “the first and second representational statement, the machine learning system, and the execution of the software application…” necessarily represents the result of an analysis, such as evaluating information and making an authorization decision based on the evaluation, which falls within a category of abstract ideas. Moreover, the recitation of “machine learning” does not remove the claim from the abstract idea category, as it merely describes the use of a generic computational tool to perform the abstract analysis. Accordingly, claim 11 recites a judicial exception.

Step 2A, Prong 2: Although claim 11 recites “conversational machine learning”, this element is described at a high level of generality and does not impose a meaningful limitation on the abstract idea. In particular, claim 11 does not recite a specific technical problem in computer technology. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 11, such as a machine learning system, a representational statement, and generic computer implementation, are routine and conventional components performing their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 11 is not patent eligible.

Claim 12 recites “producing a determination, wherein the computing resource is a software application, the resource output is output of the software application, the producing the determination comprises performing the analysis by analyzing the representation against the representational statement, and producing the determination based, at least in part, on a result of the analyzing, and the operation is performed in response to the determination”.

Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea identified therein. Accordingly, “output of the software application and the representational statement” necessarily represents the result of making a decision, such as collecting, analyzing, comparing, and making a decision based on the comparison, which constitutes data analysis and decision making, a category of abstract ideas. Accordingly, claim 12 recites a judicial exception.

Step 2A, Prong 2: Although claim 12 recites a “computing resource”, this element is described at a high level of generality and does not impose a meaningful limitation on the abstract idea. In particular, claim 12 does not recite a specific technical problem in computer technology. Instead, the claim merely applies the abstract idea using generic computer components and conventional machine learning techniques, which is insufficient to constitute a practical application. Therefore, the abstract idea is not integrated into a practical application.

Step 2B: The additional elements recited in claim 12, such as a software application, are routine and conventional components performing their ordinary functions. Therefore, these elements do not add an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. Accordingly, under Step 2B of the PEG, claim 12 is not patent eligible.

The specification describes standard hardware components used in ordinary ways (a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., personal digital assistant (PDA) or smart phone), server (e.g., blade server or rack server), or a network storage device) for monitoring, computing, and displaying indicators in a virtual environment, indicating that these elements are conventional tools for data collection and UI presentation. See MPEP 2106.05(d). The claims recite these conventional components at a high level without specifying any non-conventional configuration or operation.

Conclusion: For dependent claims 2-12, the additional limitations do not integrate the abstract idea into a practical application and do not recite an inventive concept that is significantly more than the abstract idea itself. Accordingly, claims 1-12 are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL. — The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. 
Claims 9 and 17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 9 recites “wherein the second representational statement is generated by a conversational machine learning system, and the conversational machine learning system generates the second representational statement, at least in part, by communicating with a security administrator, using the representational language”. However, the specification fails to describe a conversational machine learning system, or how a conversational machine learning system generates the second representational statement. (The specification discloses that application output analysis process 3900 analyzes the generated representational statement against a second representational statement (3930). This second representational statement serves as a reference point for such a comparison, and comprehends a predefined representation that embodies a desired application output, a representation derived from a previous execution of the same application, the product of a machine learning process (e.g., an LLM machine learning process), or other such representation. An example of such an analysis process is discussed in connection with FIG. 43. See paragraph [0365].) There is no disclosure as to how the representational statement is generated through conversational interaction rather than predefined rules. The disclosure is limited to generic machine learning analysis and does not describe a conversational machine learning system or how such a system generates the second representational statement.

The level of detail required to satisfy the written description requirement varies depending on the nature and scope of the claims and on the complexity and predictability of the relevant technology. Ariad, 598 F.3d at 1351, 94 USPQ2d at 1172; Capon v. Eshhar, 418 F.3d 1349, 1357-58, 76 USPQ2d 1078, 1083-84 (Fed. Cir. 2005). Computer-implemented inventions are often disclosed and claimed in terms of their functionality. For computer-implemented inventions, the determination of the sufficiency of disclosure will require an inquiry into the sufficiency of both the disclosed hardware and the disclosed software, due to the interrelationship and interdependence of computer hardware and software. The critical inquiry is whether the disclosure of the application relied upon reasonably conveys to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date. Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 682, 114 USPQ2d 1349, 1356 (citing Ariad Pharm., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351, 94 USPQ2d 1161, 1172 (Fed. Cir. 2010) in the context of determining possession of a claimed means of accessing disparate databases). The same reasons apply to dependent claim 17.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-8, 11-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Baughman et al. (US 2024/0086729 A1), hereinafter Baughman. 
Regarding claim 1, Baughman discloses a computer-implemented method, implemented in a computer system, comprising (Baughman, Fig.1): acquiring a resource output, wherein the resource output is an output produced by a computing resource (Baughman, Para 0024, the UX evaluation module 106 can analyze one or more user interface files 130 (which can include a screenshot of an application user interface 132) that implement a front-end UX 202 to identify a UX component 204A-N in the front-end UX 202 that contains information explaining a result output by the AI model 120 in light of the trustworthy AI factor); generating a representation of the resource output, wherein the representation is generated by a machine learning system (Baughman, Para 0027, in combination with optical character recognition (OCR), word embedding, and a feedforward neural network (FNN), to identify a UX component 204A-N in a front-end UX 202 that represents the trustworthy AI factor) and (Baughman, Para 0046, an uncertainty quantification method can include operations 510 and 512 that evaluate the front-end UX using a CNN model trained to identify elements associated with an AI accuracy representation (e.g., a combination of terms associated with an accuracy of AI model output), and an operation 512 that extracts terms from the UX component associated with the representation of AI model accuracy and provide the extracted terms to operation 504 (described below)… the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy), and the machine learning system generates the representation based, at least in part, on the resource output score (Baughman, Para 0045, the uncertainty quantification of the AI model can be performed by evaluating the front-end UX for a representation of AI accuracy and analyzing the calibration of the AI model to determine output accuracy. 
A trust score for accuracy trustworthiness can be calculated based on the uncertainty quantification) and (Baughman, Para 0046, an uncertainty quantification method can include operations 510 and 512 that evaluate the front-end UX using a CNN model trained to identify elements associated with an AI accuracy representation (e.g., a combination of terms associated with an accuracy of AI model output), and an operation 512 that extracts terms from the UX component associated with the representation of AI model accuracy and provide the extracted terms to operation 504 (described below)… the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy); and in response to an analysis of the representation against a representational statement (Baughman, Para 0033, the UX optimization module 108 evaluates a trust score for a trustworthy AI factor in relation to a front-end UX 202 to determine whether the trust score meets a threshold of disclosure for the trustworthy AI factor. In some embodiments, a threshold value used to evaluate a trust score can be defined by a customer of the trustworthy AI service 104), performing an operation (Baughman, Para 0034, the UX optimization module 108 can modify a front-end UX 202 of an application user interface 132 to include an alternative UX component containing additional (or alternative) information that meets a threshold of disclosure of a trustworthy AI factor), wherein the representational statement is in a representational language (Baughman, Para 0048, operation 404 performs explainability analysis that determines whether the front-end UX includes an easy-to-understand explanation that is within natural language (e.g., an understandable sentence about the prediction) of how and why the AI model generated the prediction). 
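The claim-1 mapping above traces a simple pipeline through Baughman: acquire a resource output, generate a scored representation of it with a model, compare that representation against a representational statement or threshold, and perform an operation based on the result. A minimal sketch of that control flow, with a toy keyword score standing in for Baughman's OCR/word-embedding/FNN stack (all names and data here are hypothetical, taken from neither the application nor the reference):

```python
# Toy sketch of the acquire -> represent -> score -> compare -> act flow
# described in the claim-1 mapping. The keyword-overlap "score" is a
# deliberate simplification; Baughman uses OCR, word embeddings, and a
# feedforward neural network to produce a representation score.

ACCURACY_TERMS = {"accuracy", "confidence", "calibration", "uncertainty"}

def representation_score(resource_output: str) -> float:
    """Fraction of known accuracy-related terms present in the output."""
    tokens = {t.strip(".,;:()").lower() for t in resource_output.split()}
    return len(ACCURACY_TERMS & tokens) / len(ACCURACY_TERMS)

def evaluate(resource_output: str, threshold: float = 0.5) -> str:
    """Compare the representation score against a disclosure threshold
    and choose the operation to perform."""
    if representation_score(resource_output) >= threshold:
        return "no-op"      # threshold of disclosure met
    return "modify-ux"      # e.g., insert an alternative UX component

print(evaluate("Prediction shown with confidence, calibration, uncertainty."))  # -> no-op
```

The point of the sketch is only the shape of the mapping: the "representation" is a score derived from the output, and the "operation" is triggered by comparing that score against a configurable threshold.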
Regarding claim 2, Baughman discloses the method of claim 1, further comprising: producing a determination, wherein the computing resource is a software application (Baughman, Para 0022, the trustworthy AI service 104 can be provided as a service to an application 122 that utilizes an AI model 120 to provide AI results (e.g., predictions, decisions, and other information) to users via an application user interface 132), the resource output is an application output of the software application (Baughman, Para 0022, the trustworthy AI service 104 can be provided as a service to an application 122 that utilizes an AI model 120 to provide AI results (e.g., predictions, decisions, and other information) to users via an application user interface 132), the representation of the resource output is a first representational statement in the representational language (Baughman, Para 0023, these specifications can be coded into application files (e.g., scripts, functions, etc.) to display information explaining a result output by an AI model 120 in the application user interface 132. FIG. 
2 illustrates a non-limiting example of a front-end UX 202 that includes UX components 204A, 204B, 204C, and 204N (where N can be any integer representing any number of UX components 204) containing information associated with a result output by an AI model) and (Baughman, Para 0048, operation 404 performs explainability analysis that determines whether the front-end UX includes an easy-to-understand explanation that is within natural language (e.g., an understandable sentence about the prediction) of how and why the AI model generated the prediction), the representational statement is a second representational statement in the representational language (Baughman, Figure 11), the first representational statement is generated by the machine learning system (Baughman, Para 0045, the uncertainty quantification of the AI model can be performed by evaluating the front-end UX for a representation of AI accuracy and analyzing the calibration of the AI model to determine output accuracy. A trust score for accuracy trustworthiness can be calculated based on the uncertainty quantification) and (Baughman, Para 0046, an uncertainty quantification method can include operations 510 and 512 that evaluate the front-end UX using a CNN model trained to identify elements associated with an AI accuracy representation (e.g., a combination of terms associated with an accuracy of AI model output), and an operation 512 that extracts terms from the UX component associated with the representation of AI model accuracy and provide the extracted terms to operation 504 (described below)… the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy), the machine learning system generates the first representational statement based (Baughman, Fig. 
9, Para 0055, operation 902 can obtain source code for an AI model (e.g., from a source code repository) and provide the source code to operation 904, which extracts comments embedded in the source code. The comments can be programmer-readable explanations or annotations in the source code, which can be added to the source code with the purpose of making the source code easier for programmers (and others) to understand, and which are generally ignored by compilers and interpreters. Operation 906 inputs the comments (e.g., terms) to a word embedding model configured for a particular trustworthy AI factor, and the word embedding model outputs a confidence level (e.g., confidence interval) indicating whether the comments are semantically related to the trustworthy AI factor), at least in part, on the resource output, the producing the determination comprises the performing the analysis of the first representational statement against a second representational statement (Baughman, Fig. 11, Para 0029, a trust score assigned to a UX component 204A-N identified as describing the fairness of an AI model 120 to output unbiased results can be assigned a fairness trust score that indicates that the information in the UX component 204A-N represents the fairness trustworthy AI factor, and indicates to what degree (e.g., whether a minimum threshold of disclosure is met) the UX component 204A-N describes the fairness of an AI model 120. The UX evaluation module 106 can assign a trust score to each UX component 204A-N in a front-end UX 202 that contains information related to a trustworthy AI factor), and producing the determination based, at least in part, on the analysis, and the operation is performed in response to the determination (Baughman, Para 0033, the UX optimization module 108 evaluates a trust score for a trustworthy AI factor in relation to a front-end UX 202 to determine whether the trust score meets a threshold of disclosure for the trustworthy AI factor). 
Regarding claim 3, Baughman discloses the method of claim 2, wherein the operation affects one or more functionalities provided by the software application, or the operation of the software application (Baughman, Para 0028, when the UX component 204A-N is found to be inadequate, the trustworthy AI service 104 can augment, remove, or replace the UX component 204A-N, or the trustworthy AI service 104 can append additional trustworthy AI factor information to the UX component 204A-N). Regarding claim 4, Baughman discloses the method of claim 2, wherein the application output is an image presented in a window of a graphical user interface of an endpoint computing system (Baughman, Para 0019, UX developers can help address this challenge by designing front-end UXs (e.g., user-experiences provided in a graphical user interface of an application) that provide explanations about how an AI model came to a decision), and the graphical user interface is displayed on a display of the endpoint computing system (Baughman, Para 0023, these specifications can be coded into application files (e.g., scripts, functions, etc.) to display information explaining a result output by an AI model 120 in the application user interface 132). Regarding claim 5, Baughman discloses the method of claim 4, wherein the first representational statement is a descriptive statement that describes the image (Baughman, Fig. 2), the second representational statement is another descriptive statement that describes an administrative policy (Baughman, Para 0046, as a non-limiting example, the CNN can evaluate a front-end UX file (e.g., a screenshot, hypertext markup language (HTML) file, or another file) using OCR and word embedding to identify key words associated with a description of AI accuracy (UX component), and the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy. 
As part of identifying the UX component, operation 514 can determine the boundaries of the UX component in the front-end UX using an object detection technique (e.g., you only look once (YOLO) real time object detection) to enable modification or replacement of the UX component in the front-end UX), and the descriptive statement and the another descriptive statement are in a natural language (Baughman, Para 0048, operation 404 performs explainability analysis that determines whether the front-end UX includes an easy-to-understand explanation that is within natural language (e.g., an understandable sentence about the prediction) of how and why the AI model generated the prediction. The explanation can be evaluated using natural language processing (NLP) to determine whether the explanation includes a stance, such as a justification (e.g., pro or con), and whether the explanation has a low complexity match to meet a mental model of a use). Regarding claim 6, Baughman discloses the method of claim 2, further comprising: generating the second representational statement (Baughman, Para. 0037, The UX optimization module 108 can incorporate an alternative UX component in a front-end UX 302 of an application user interface 132 by modifying an original UX component to include an improved explanation of an AI result, replacing the original UX component with the alternative UX component, or appending the alternative UX component to the original UX component. For example, as described earlier, the UX evaluation module 106 can determine a boundary of a UX component 204A-N (shown as a dashed line) in a front-end UX 202, and the UX optimization module 108 can use the boundary information to incorporate an alternative UX component into an application user interface 132 displayed on a client device 128. The UX optimization module 108 can use UX component boundary information to perform replacement and append operations). 
Regarding claim 7, Baughman discloses the method of claim 6, wherein the second representational statement is generated by the machine learning system (Baughman, Para. 0037, The UX optimization module 108 can incorporate an alternative UX component in a front-end UX 302 of an application user interface 132 by modifying an original UX component to include an improved explanation of an AI result, replacing the original UX component with the alternative UX component, or appending the alternative UX component to the original UX component. For example, as described earlier, the UX evaluation module 106 can determine a boundary of a UX component 204A-N (shown as a dashed line) in a front-end UX 202, and the UX optimization module 108 can use the boundary information to incorporate an alternative UX component into an application user interface 132 displayed on a client device 128. The UX optimization module 108 can use UX component boundary information to perform replacement and append operations). Regarding claim 8, Baughman discloses the method of claim 7, wherein the second representational statement is a security policy, and the operation is an access control operation (Baughman, Para 0046, as a non-limiting example, the CNN can evaluate a front-end UX file (e.g., a screenshot, hypertext markup language (HTML) file, or another file) using OCR and word embedding to identify key words associated with a description of AI accuracy (UX component), and the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy. As part of identifying the UX component, operation 514 can determine the boundaries of the UX component in the front-end UX using an object detection technique (e.g., you only look once (YOLO) real time object detection) to enable modification or replacement of the UX component in the front-end UX). 
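Claims 7-8 map the second representational statement to a machine-generated security policy that gates an access-control operation. As a purely illustrative sketch of that reading (hypothetical policy text and matching logic, not taken from the application or from Baughman), the comparison-then-gate step might look like:

```python
# Hypothetical sketch of the claim-8 reading: a descriptive statement of
# application output is checked against a security-policy statement, and
# the result drives an access-control operation. The substring check is
# a stand-in for the claimed machine-learning analysis.

# Forbidden content distilled from a natural-language policy such as
# "output must not display an account number" (hypothetical example).
POLICY_FORBIDS = "account number"

def access_control(description: str) -> str:
    """Deny access when the description mentions policy-forbidden content."""
    violates = POLICY_FORBIDS in description.lower()
    return "deny" if violates else "allow"

print(access_control("A dashboard chart of weekly logins."))        # -> allow
print(access_control("Screen showing a customer account number."))  # -> deny
```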
Regarding claim 11, Baughman discloses the method of claim 2, wherein the analysis of the first representational statement against the second representational statement is performed by the machine learning system (Baughman, Fig. 11, Para 0029, a trust score assigned to a UX component 204A-N identified as describing the fairness of an AI model 120 to output unbiased results can be assigned a fairness trust score that indicates that the information in the UX component 204A-N represents the fairness trustworthy AI factor, and indicates to what degree (e.g., whether a minimum threshold of disclosure is met) the UX component 204A-N describes the fairness of an AI model 120. The UX evaluation module 106 can assign a trust score to each UX component 204A-N in a front-end UX 202 that contains information related to a trustworthy AI factor), and the operation affects execution of the software application by virtue of at least one of the execution of the software application being permitted to continue, or the execution of the software application being terminated (Baughman, Para 0046, as a non-limiting example, the CNN can evaluate a front-end UX file (e.g., a screenshot, hypertext markup language (HTML) file, or another file) using OCR and word embedding to identify key words associated with a description of AI accuracy (UX component), and the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy. As part of identifying the UX component, operation 514 can determine the boundaries of the UX component in the front-end UX using an object detection technique (e.g., you only look once (YOLO) real time object detection) to enable modification or replacement of the UX component in the front-end UX). 
Regarding claim 12, Baughman discloses the method of claim 1, further comprising: producing a determination, wherein the computing resource is a software application (Baughman, Para 0022, the trustworthy AI service 104 can be provided as a service to an application 122 that utilizes an AI model 120 to provide AI results (e.g., predictions, decisions, and other information) to users via an application user interface 132), the resource output is output of the software application, the producing the determination comprises performing the analysis by analyzing the representation against the representational statement (Baughman, Para 0033, the UX optimization module 108 evaluates a trust score for a trustworthy AI factor in relation to a front-end UX 202 to determine whether the trust score meets a threshold of disclosure for the trustworthy AI factor. In some embodiments, a threshold value used to evaluate a trust score can be defined by a customer of the trustworthy AI service 104), and producing the determination based, at least in part, on a result of the analyzing, and the operation is performed in response to the determination (Baughman, Fig. 11, Para 0029, a trust score assigned to a UX component 204A-N identified as describing the fairness of an AI model 120 to output unbiased results can be assigned a fairness trust score that indicates that the information in the UX component 204A-N represents the fairness trustworthy AI factor, and indicates to what degree (e.g., whether a minimum threshold of disclosure is met) the UX component 204A-N describes the fairness of an AI model 120. 
The UX evaluation module 106 can assign a trust score to each UX component 204A-N in a front-end UX 202 that contains information related to a trustworthy AI factor) and (Baughman, Para 0034, the UX optimization module 108 can modify a front-end UX 202 of an application user interface 132 to include an alternative UX component containing additional (or alternative) information that meets a threshold of disclosure of a trustworthy AI factor). Regarding claim 13, the claim is interpreted and rejected for the same rationale set forth in claim 1. Regarding claim 14, the claim is interpreted and rejected for the same rationale set forth in claim 2. Regarding claim 15, Baughman discloses the non-transitory computer-readable storage medium of claim 14, wherein the application output is an image presented in a window of a graphical user interface of an endpoint computing system (Baughman, Para 0019, UX developers can help address this challenge by designing front-end UXs (e.g., user-experiences provided in a graphical user interface of an application) that provide explanations about how an AI model came to a decision), and the graphical user interface is displayed on a display of the endpoint computing system (Baughman, Para 0023, these specifications can be coded into application files (e.g., scripts, functions, etc.) to display information explaining a result output by an AI model 120 in the application user interface 132), wherein the first representational statement is a descriptive statement that describes the image (Baughman, Fig. 
2), the second representational statement is another descriptive statement that describes an administrative policy (Baughman, Para 0046, as a non-limiting example, the CNN can evaluate a front-end UX file (e.g., a screenshot, hypertext markup language (HTML) file, or another file) using OCR and word embedding to identify key words associated with a description of AI accuracy (UX component), and the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy. As part of identifying the UX component, operation 514 can determine the boundaries of the UX component in the front-end UX using an object detection technique (e.g., you only look once (YOLO) real time object detection) to enable modification or replacement of the UX component in the front-end UX), and the descriptive statement and the another descriptive statement are in a natural language (Baughman, Para 0048, operation 404 performs explainability analysis that determines whether the front-end UX includes an easy-to-understand explanation that is within natural language (e.g., an understandable sentence about the prediction) of how and why the AI model generated the prediction. The explanation can be evaluated using natural language processing (NLP) to determine whether the explanation includes a stance, such as a justification (e.g., pro or con), and whether the explanation has a low complexity match to meet a mental model of a use). Regarding claim 16, Baughman discloses the non-transitory computer-readable storage medium of claim 14, wherein the method further comprises: generating the second representational statement (Baughman, Para. 0037), wherein the second representational statement is generated by the machine learning system (Baughman, Para. 
0037, The UX optimization module 108 can incorporate an alternative UX component in a front-end UX 302 of an application user interface 132 by modifying an original UX component to include an improved explanation of an AI result, replacing the original UX component with the alternative UX component, or appending the alternative UX component to the original UX component. For example, as described earlier, the UX evaluation module 106 can determine a boundary of a UX component 204A-N (shown as a dashed line) in a front-end UX 202, and the UX optimization module 108 can use the boundary information to incorporate an alternative UX component into an application user interface 132 displayed on a client device 128. The UX optimization module 108 can use UX component boundary information to perform replacement and append operations), the second representational statement is a security policy, and the operation is an access control operation (Baughman, Para 0046, as a non-limiting example, the CNN can evaluate a front-end UX file (e.g., a screenshot, hypertext markup language (HTML) file, or another file) using OCR and word embedding to identify key words associated with a description of AI accuracy (UX component), and the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy. As part of identifying the UX component, operation 514 can determine the boundaries of the UX component in the front-end UX using an object detection technique (e.g., you only look once (YOLO) real time object detection) to enable modification or replacement of the UX component in the front-end UX). Regarding claim 18, Baughman discloses the non-transitory computer-readable storage medium of claim 14, wherein the analysis of the first representational statement against the second representational statement is performed by the machine learning system (Baughman, Fig. 
11, Para 0029, a trust score assigned to a UX component 204A-N identified as describing the fairness of an AI model 120 to output unbiased results can be assigned a fairness trust score that indicates that the information in the UX component 204A-N represents the fairness trustworthy AI factor, and indicates to what degree (e.g., whether a minimum threshold of disclosure is met) the UX component 204A-N describes the fairness of an AI model 120. The UX evaluation module 106 can assign a trust score to each UX component 204A-N in a front-end UX 202 that contains information related to a trustworthy AI factor), and the operation affects execution of the software application by virtue of at least one of the execution of the software application being permitted to continue, the execution of the software application being terminated (Baughman, Para 0046, as a non-limiting example, the CNN can evaluate a front-end UX file (e.g., a screenshot, hypertext markup language (HTML) file, or another file) using OCR and word embedding to identify key words associated with a description of AI accuracy (UX component), and the UX component can be input to an FNN to obtain a probability (representation score) that the UX component is associated with AI accuracy. As part of identifying the UX component, operation 514 can determine the boundaries of the UX component in the front-end UX using an object detection technique (e.g., you only look once (YOLO) real time object detection) to enable modification or replacement of the UX component in the front-end UX), one or more functionalities provided by the software application, or the operation of the software application (Baughman, Para 0028, when the UX component 204A-N is found to be inadequate, the trustworthy AI service 104 can augment, remove, or replace the UX component 204A-N, or the trustworthy AI service 104 can append additional trustworthy AI factor information to the UX component 204A-N). 
Regarding claim 19, the claim is interpreted and rejected for the same rationale set forth in claim 12. Regarding claim 20, the claim is interpreted and rejected for the same rationale set forth in claims 1 and 13. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 9-10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Baughman et al. (US 2024/0086729 A1), hereinafter Baughman, in view of Chan (KR 102580835 B1), hereinafter Chan. Regarding claim 9, Baughman does not explicitly disclose the method of claim 6, wherein the second representational statement is generated by a conversational machine learning system, and the conversational machine learning system generates the second representational statement, at least in part, by communicating with a security administrator, using the representational language. However, Chan teaches wherein the second representational statement is generated by a conversational machine learning system (Chan, Para. 
0008, the security policy automation management system that establishes a security policy according to log data monitored from the target network device and manages the device according to an embodiment of the present invention is based on the log data and the security policy) and (Chan, Para. 0064), and the conversational machine learning system generates the second representational statement, at least in part, by communicating with a security administrator, using the representational language (Chan, Para. 0008, a generator that collects the traffic status of the target network equipment, generates question-and-answer information for each security policy through natural language generation (NLG), and provides a chatbot conversation service using the question-and-answer information for each security policy to an employee terminal; an input unit that receives a query request sentence; a decision unit that determines one of the question-and-answer information for each security policy based on context information obtained by converting and processing the query request sentence through natural language understanding (NLU); and an integrated management unit that feeds back corresponding response information extracted from the determined question-and-answer information to the employee terminal through the chatbot conversation service). Baughman and Chan are considered to be analogous to the claimed invention because they are in the same field of conversational machine learning systems for security policy generation and administration. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Baughman to incorporate the teachings of Chan to include wherein the second representational statement is generated by a conversational machine learning system (Chan, Para. 
0008) and (Chan, Para. 0064), and the conversational machine learning system generates the second representational statement, at least in part, by communicating with a security administrator, using the representational language (Chan, Para. 0008). Doing so would help protect core technical documents and personal information that should not be leaked from inside the company to the outside. For information security, Data Leakage Prevention (DLP) solutions, Digital Rights Management (DRM)-based document security solutions, personal information protection solutions, and output control solutions are used to protect internal information by separating the intranet, which is the internal business network, from the Internet, which is the external network, and by preventing intrusions from the external network through a network connection system. Internal information is thereby protected while communication with the external network continues without disruption to business (Chan, Para. 0003). Regarding claim 10, the combination of Baughman in view of Chan teaches the method of claim 9, wherein the representational language is a natural language, and the communicating is performed using the natural language (Baughman, Para 0048, operation 404 performs explainability analysis that determines whether the front-end UX includes an easy-to-understand explanation that is within natural language (e.g., an understandable sentence about the prediction) of how and why the AI model generated the prediction). Regarding claim 17, the claim is interpreted and rejected for the same rationale set forth in claim 9. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GITA FARAMARZI whose telephone number is (571)272-0248. 
The examiner can normally be reached Monday through Friday, 9:00 am to 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jorge L. Ortiz-Criado, can be reached at (571) 272-7624. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /GITA FARAMARZI/Examiner, Art Unit 2496 /JORGE L ORTIZ CRIADO/Supervisory Patent Examiner, Art Unit 2496
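For context on the Chan mechanism relied on in the claim-9 rejection, the cited flow (NLG-generated question-and-answer pairs per security policy, NLU-style matching of an administrator's query, and feedback of the matched answer) can be sketched minimally. The Q&A data and the token-overlap matcher below are hypothetical stand-ins for Chan's actual NLG/NLU components:

```python
# Minimal sketch of a chatbot-style security-policy Q&A loop as described
# in Chan: stored question-answer pairs per policy, a toy token-overlap
# "NLU" that matches an incoming query to a stored question, and the
# corresponding answer fed back. All data here is hypothetical.

POLICY_QA = {
    "Which inbound ports does the firewall policy block?":
        "Inbound ports 23 and 3389 are blocked.",
    "Who may approve an outbound data transfer?":
        "Only the security administrator may approve it.",
}

def _tokens(text: str) -> set:
    return {t.strip("?.,").lower() for t in text.split()}

def answer(query: str) -> str:
    """Return the answer whose stored question best overlaps the query."""
    best = max(POLICY_QA, key=lambda q: len(_tokens(q) & _tokens(query)))
    return POLICY_QA[best]

print(answer("What ports are blocked by the firewall?"))
```

In Chan, the matching step is performed by a natural language understanding model over context information rather than token overlap, and the Q&A pairs are generated from monitored traffic and log data rather than hand-written.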

Prosecution Timeline

Aug 30, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12339997
ENTITY FOCUSED NATURAL LANGUAGE GENERATION
2y 5m to grant Granted Jun 24, 2025
Patent 12316648
Data value classifier
2y 5m to grant Granted May 27, 2025
Patent 12301564
VIRTUAL SESSION ACCESS MANAGEMENT
2y 5m to grant Granted May 13, 2025
Patent 12256022
BLOCKCHAIN TRANSACTION COMPRISING RUNNABLE CODE FOR HASH-BASED VERIFICATION
2y 5m to grant Granted Mar 18, 2025
Patent 12242613
AUTOMATED EVALUATION OF MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 04, 2025


Prosecution Projections

1-2
Expected OA Rounds
53%
Grant Probability
75%
With Interview (+21.5%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 75 resolved cases by this examiner. Grant probability derived from career allow rate.
