Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,519

PLATFORM FOR ENTERPRISE ADOPTION AND IMPLEMENTATION OF GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEMS

Non-Final OA: §101, §102, §103
Filed: Sep 07, 2023
Examiner: KARTHOLY, REJI P
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Accenture Global Solutions Limited
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 97 granted / 151 resolved; +9.2% vs TC avg)
Interview Lift: +71.8% (strong), measured over resolved cases with an interview
Avg Prosecution: 3y 4m (typical timeline)
Currently Pending: 18
Total Applications: 169 (career history, across all art units)

Statute-Specific Performance

§101: 13.7% (-26.3% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 151 resolved cases.
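The headline figures above are internally consistent and can be checked directly. A minimal sketch (the Tech Center average of roughly 55% is implied by the "+9.2% vs TC avg" delta, not stated in the report):

```python
# Recompute the examiner's career allowance rate from the counts given
# in this report (97 granted out of 151 resolved cases).
granted = 97
resolved = 151

allow_rate = granted / resolved            # career allowance rate
print(f"{allow_rate:.1%}")                 # ~64.2%, shown above as 64%

tc_delta = 0.092                           # the reported "+9.2% vs TC avg"
tc_avg_estimate = allow_rate - tc_delta    # implied Tech Center average
print(f"{tc_avg_estimate:.1%}")            # ~55.0%
```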

Office Action

Grounds of rejection: §101, §102, §103
DETAILED ACTION

This Office Action is in response to Applicant's communication received on 09/07/2023 for application number 18/462,519. Claims 1-20 are presented for examination. Claims 1, 8, and 15 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/19/2024 and 01/30/2025 have been considered by the Examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-7 are directed to a method, Claims 8-14 are directed to a system, and Claims 15-20 are directed to media. Thus, the claims fall within the statutory categories (process, machine, article of manufacture) and are eligible under Step 1.

Step 2A, Prong 1 (Independent Claims): Claims 1, 8, and 15 recite: "processing at least a portion of the request to generate a prompt that is responsive to the request." These limitations encompass a mental process of analyzing a request and writing down a prompt/command that is responsive to the request, which is observing, evaluating, and judging that is practically capable of being performed in the human mind or by a human using pen and paper. Accordingly, these claims recite an abstract idea that falls under the "mental process" grouping.
Step 2A, Prong 2 (Independent Claims): Additional elements. Claims 1, 8, and 15 recite: "receiving, by a GAI integration platform, a request from an application executed by an enterprise system of an enterprise, the application being executed remotely from the GAI integration platform; transmitting, by the GAI integration platform, the prompt to a GAI system of a plurality of GAI systems; receiving, by the GAI integration platform, a response from the GAI system, the response comprising content generated by the GAI system in response to the prompt; and transmitting, by the GAI integration platform, the response to the application." These limitations amount to insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

The limitations "through a control tier of the GAI integration platform, through a set of modules, the set of modules comprising one or more of a prompt template module, a prompt quality module, and a personally identifiable information (PII) detection module" are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claim 1 recites: "a computer-implemented method for remote integration of generative artificial intelligence (GAI) systems to enterprise systems." These limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). This limitation can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).
Claim 8 recites: "a system, comprising: one or more processors; and a computer-readable storage device coupled to the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations." These limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). This limitation can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).

Claim 15 recites: "computer-readable storage media coupled to the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations." These limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). This limitation can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).

Accordingly, these additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. These claims are directed to the abstract idea.
Step 2B (Independent Claims): Additional elements. Claims 1, 8, and 15 recite: "receiving, by a GAI integration platform, a request from an application executed by an enterprise system of an enterprise, the application being executed remotely from the GAI integration platform; transmitting, by the GAI integration platform, the prompt to a GAI system of a plurality of GAI systems; receiving, by the GAI integration platform, a response from the GAI system, the response comprising content generated by the GAI system in response to the prompt; and transmitting, by the GAI integration platform, the response to the application." These limitations amount to insignificant extra-solution activity of mere data gathering, which is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), "receiving/transmitting data").

The limitations "through a control tier of the GAI integration platform, through a set of modules, the set of modules comprising one or more of a prompt template module, a prompt quality module, and a personally identifiable information (PII) detection module" are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claim 1 recites: "a computer-implemented method for remote integration of generative artificial intelligence (GAI) systems to enterprise systems." These limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). This limitation can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).
Claim 8 recites: "a system, comprising: one or more processors; and a computer-readable storage device coupled to the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations." These limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). This limitation can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).

Claim 15 recites: "computer-readable storage media coupled to the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations." These limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). This limitation can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).

Accordingly, these additional elements do not amount to significantly more than the judicial exception. As such, these claims are patent ineligible.

Step 2A, Prong 1 (Dependent Claims): Claims 2, 9, and 16: "processing at least a portion of the request to generate a prompt that is responsive to the request comprises populating a prompt template at least partially based on data provided in a payload of the request." This limitation merely furthers the mental process by specifying generating the prompt/command that is responsive to the request.
Claims 3, 10, and 17: "processing at least a portion of the request to generate a prompt that is responsive to the request comprises determining context data representative of one or more of the enterprise and an enterprise operation, and providing the prompt as a few-shot prompt that includes at least a portion of the context data." This limitation merely furthers the mental process by specifying generating the prompt/command that is responsive to the request.

Claims 4, 11, and 18: "processing at least a portion of the request to generate a prompt that is responsive to the request comprises determining context data from at least one external source based on data provided in a payload of the request, and providing the prompt as a few-shot prompt that includes at least a portion of the context data." This limitation merely furthers the mental process by specifying generating the prompt/command that is responsive to the request.

Claims 5, 12, and 19: "one or more of the request and the prompt is processed to mitigate presence of one or more of PII and profanity before transmitting the prompt to the GAI system." This limitation merely furthers the mental process by specifying the generated prompt/command.

Claims 6, 13, and 20: "logging interaction data representative of requests from and responses to the application." This limitation encompasses the mental process of noting down interaction data. Thus, the claims recite the abstract idea.

Step 2A, Prong 2 (Dependent Claims): Additional elements. Claims 2, 9, and 16: "through a control tier of the GAI integration platform, through a set of modules." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).
Claims 3, 10, and 17: "through a control tier of the GAI integration platform, through a set of modules." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claims 4, 11, and 18: "through a control tier of the GAI integration platform, through a set of modules." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claims 5, 12, and 19: "by the GAI integration platform." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claims 6, 13, and 20: "providing one or more dashboards that graphically depict at least a portion of the interaction data." These limitations amount to insignificant extra-solution activity of mere data gathering and outputting (see MPEP § 2106.05(g)).

Claims 7 and 14: "a GAI model of the GAI system is fine-tuned based on enterprise data provided by the enterprise." These limitations are recited at a high level of generality such that they amount to no more than generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)).

Accordingly, these additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to the abstract idea.

Step 2B (Dependent Claims): Additional elements. Claims 2, 9, and 16: "through a control tier of the GAI integration platform, through a set of modules." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).
Claims 3, 10, and 17: "through a control tier of the GAI integration platform, through a set of modules." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claims 4, 11, and 18: "through a control tier of the GAI integration platform, through a set of modules." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claims 5, 12, and 19: "by the GAI integration platform." These limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

Claims 6, 13, and 20: "providing one or more dashboards that graphically depict at least a portion of the interaction data." These limitations amount to insignificant extra-solution activity of mere data gathering and outputting, which is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), "receiving/transmitting data", "presenting offers").

Claims 7 and 14: "a GAI model of the GAI system is fine-tuned based on enterprise data provided by the enterprise." These limitations are recited at a high level of generality such that they amount to no more than generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)).

Accordingly, these additional elements do not amount to significantly more than the judicial exception. As such, the claims are patent ineligible.

Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the claims are directed to signals per se. Claims 15-20 recite "computer readable storage media".
Specification [0066] indicates that the computer readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. According to MPEP 2111, claim terms and phrases must be given their broadest reasonable interpretation in light of the specification. Thus, the phrase "computer readable storage media" will be reasonably interpreted as a medium including signals. A signal, a form of energy, does not fall within one of the four statutory classes of 35 U.S.C. § 101. Thus, claims 15-20 are directed to nonstatutory subject matter.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-5, 7-8, 10-12, 14-15, and 17-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Austin et al. (US 2024/0420012 A1, hereinafter "Austin").
Regarding Claim 1, Austin teaches a computer-implemented method for remote integration of generative artificial intelligence (GAI) systems to enterprise systems ([0027]: the orchestration platform facilitates usage of generative AI systems in a safe and democratized manner; the orchestration platform can safely "teach" LLM(s) at scale about a given enterprise such that enterprise users can safely ask questions and receive safe output responses), the method comprising:

receiving, by a GAI integration platform, a request from an application executed by an enterprise system of an enterprise, the application being executed remotely from the GAI integration platform ([0027]: the orchestration platform facilitates usage of generative AI systems in a safe and democratized manner and can safely "teach" LLM(s) at scale about a given enterprise such that enterprise users can safely ask questions and receive safe output responses; [0033]: FIG. 1 illustrates environment 100 for generative AI orchestration; [0034]: the enterprise system 104 (i.e., an enterprise system of an enterprise) corresponds to an enterprise and includes one or more computing devices; [0036]: the orchestration platform 102 is configured as an intelligent system and may be privately hosted; FIG. 1 shows the enterprise system remote from the generative orchestration platform; [0072]: FIG. 2G illustrates a method performed by the orchestration platform 102 (i.e., GAI integration platform); [0073]: the orchestration platform 102 performs one or more operations that include obtaining a user query, where the user query is submitted by an authenticated user or bot (i.e., a request from an application executed by an enterprise system));

processing, through a control tier of the GAI integration platform, at least a portion of the request through a set of modules to generate a prompt that is responsive to the request, the set of modules comprising one or more of a prompt template module, a prompt quality module, and a personally identifiable information (PII) detection module ([0074]: the orchestration platform 102 can, similar to that described above with respect to FIG. 2B, perform one or more operations that include, based on the obtaining, evaluating a context of the user query; [0075]: causing the user query to be routed to one or more of the processing pipelines in accordance with the context; [0076]: based on the one or more of the processing pipelines to which the user query is routed, generating one or more curated LLM prompts for the user query; [0077]: combining the one or more curated LLM prompts and the one or more extracts with the user query, resulting in a modified query (i.e., generating a prompt that is responsive to the request); [0042]: FIG. 2B is a flow diagram illustrating query orchestration by the generative AI orchestration platform 102; [0043]: referring to FIG. 2B, query orchestration begins with an authenticated user/bot submitting a question; the platform examines the question and determines the type of (e.g., private) company documents that may be relevant for generating answers to the question; the context evaluation functionality 102e employs AI-based logic that is capable of understanding user questions and determining the path or pipeline for routing the question; [0046]: the user access & authorization management functionality 102a checks the authorization level of the authenticated user/bot; a prompt generation functionality 102g tailors or customizes one or more LLM prompts for the question. Thus, the generative AI orchestration platform processes the user query through context evaluation functionality 102e, user access & authorization management functionality 102a, prompt generation functionality 102g, etc. to generate a prompt (i.e., the generative AI orchestration platform/GAI integration platform processes the user query through a control tier including one or more of a prompt template module, a prompt quality module, and a personally identifiable information (PII) detection module));

transmitting, by the GAI integration platform, the prompt to a GAI system of a plurality of GAI systems ([0077]: combining the one or more curated LLM prompts and the one or more extracts with the user query, resulting in a modified query; [0078]: performing response generation by submitting the modified query to the one or more generative AI LLMs (i.e., a GAI system of a plurality of GAI systems) that correspond to the one or more of the processing pipelines to which the user query has been routed, so as to derive a response to the user query);

receiving, by the GAI integration platform, a response from the GAI system, the response comprising content generated by the GAI system in response to the prompt ([0078]: performing response generation by submitting the modified query to the one or more generative AI LLMs (i.e., a GAI system of a plurality of GAI systems) that correspond to the one or more of the processing pipelines to which the user query has been routed, so as to derive a response to the user query; similar to that described above with respect to FIG. 2B, performing one or more operations that include performing response generation; [0046]: as shown in FIG. 2B, an LLM prompt is directed to a particular generative AI LLM 106 and constitutes "additional instructions" on how the question should be asked to the generative AI LLM 106 and/or how the generative AI LLM 106 should answer the question; [0048]: a response generation functionality 102n obtains the combined prompt and transmits it to the relevant generative AI LLM(s) 106 for generating answers; [0049]: the generated answer may then be checked against copyright, plagiarism, and/or company ethical/bias policies, and the answer subsequently provided to the user. See FIG. 2B: it shows the response generated by the response generation 102n/LLM (i.e., GAI system) received by the generative AI orchestration platform (i.e., GAI integration platform));

and transmitting, by the GAI integration platform, the response to the application ([0049]: the generated answer may then be checked against copyright, plagiarism, and/or company ethical/bias policies, and the answer subsequently provided to the user; see 102w, 102x, 102y, and 102z of FIG. 2C. See FIGS. 2B and 2C: they show the response generated by the response generation 102n/LLM received by the generative AI orchestration platform (i.e., GAI integration platform), and the generative AI orchestration platform outputting the response to the authenticated enterprise user/bot (i.e., transmitting the response to the application)).

As to dependent Claim 3, Austin teaches all the limitations of Claim 1.
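The claim-1 flow mapped above (request in, prompt shaped by a control tier of modules, prompt out to one of several GAI back ends, response back to the application) can be sketched in a few lines. This is an illustrative sketch only; all names here are hypothetical and nothing below is drawn from the Austin reference or the application.

```python
# Hypothetical sketch of the claimed architecture: a platform sits between
# an enterprise application and a plurality of GAI systems, and a control
# tier of modules (prompt template / quality / PII) shapes the prompt
# before transmission.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    payload: dict  # data carried from the enterprise application

@dataclass
class Platform:
    # control-tier modules: each takes a draft prompt, returns a revised one
    modules: list[Callable[[str], str]]
    # plurality of GAI systems, keyed by name; each maps a prompt to content
    gai_systems: dict[str, Callable[[str], str]]

    def handle(self, request: Request, target: str) -> str:
        prompt = request.payload.get("text", "")
        for module in self.modules:               # prompt template, quality, PII
            prompt = module(prompt)
        response = self.gai_systems[target](prompt)  # transmit prompt, receive content
        return response                           # transmitted back to the application

platform = Platform(
    modules=[str.strip, lambda p: f"Answer concisely: {p}"],
    gai_systems={"model-a": lambda p: f"[model-a reply to: {p}]"},
)
print(platform.handle(Request(payload={"text": "  What is our PTO policy? "}),
                      target="model-a"))
# prints: [model-a reply to: Answer concisely: What is our PTO policy?]
```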
Austin further teaches wherein processing, through a control tier of the GAI integration platform, at least a portion of the request through a set of modules to generate a prompt that is responsive to the request ([0043]: referring to FIG. 2B, query orchestration begins with an authenticated user/bot submitting a question; the platform examines the question and determines the type of (e.g., private) company documents that may be relevant for generating answers to the question; the context evaluation functionality 102e employs AI-based logic that is capable of understanding user questions and determining the path or pipeline for routing the question; [0046]: the user access & authorization management functionality 102a checks the authorization level of the authenticated user/bot; a prompt generation functionality 102g tailors or customizes one or more LLM prompts for the question. Thus, the generative AI orchestration platform processes the user query through context evaluation functionality 102e, user access & authorization management functionality 102a, prompt generation functionality 102g, etc. to generate a prompt (i.e., the generative AI orchestration platform/GAI integration platform processes the user query through a control tier including a set of modules)) comprises determining context data representative of one or more of the enterprise and an enterprise operation, and providing the prompt as a few-shot prompt that includes at least a portion of the context data ([0043]: the context evaluation functionality 102e employs AI-based logic that is capable of understanding user questions and determining the path or pipeline for routing the question; [0046]: a prompt generation functionality 102g tailors or customizes one or more LLM prompts for the question; customization of a prompt may be a function of previous questions (e.g., maintaining context for multi-turn questions) (i.e., determining context) and may involve phrasing the question in the context of the relevant company user persona (i.e., a few-shot prompt including at least a portion of the context data)).

As to dependent Claim 4, Austin teaches all the limitations of Claim 1. Austin further teaches wherein processing, through a control tier of the GAI integration platform, at least a portion of the request through a set of modules to generate a prompt that is responsive to the request ([0043] and [0046], as cited for Claim 3 above: the generative AI orchestration platform processes the user query through a control tier including a set of modules) comprises determining context data from at least one external source based on data provided in a payload of the request ([0043]: query orchestration may begin with an authenticated user/bot submitting a question, where the context evaluation functionality 102e ("determines context") examines the question and determines the type of (e.g., private) company documents that may be relevant for generating answers to the question (e.g., if it is an HR question, the context evaluation functionality 102e may select an HR policy path for query traversal); [0046]: the user access & authorization management functionality 102a checks the authorization level of the authenticated user/bot; a prompt generation functionality 102g tailors or customizes one or more LLM prompts for the question; customization of a prompt may be a function of previous questions (e.g., maintaining context for multi-turn questions) (i.e., determining context) and may involve phrasing the question in the context of the relevant company user persona (i.e., context data from an external source/company information based on the payload of the request/data provided in the question submitted by the authenticated user/bot)), and providing the prompt as a few-shot prompt that includes at least a portion of the context data ([0046]: a prompt generation functionality 102g tailors or customizes one or more LLM prompts for the question; customization of a prompt may be a function of previous questions (e.g., maintaining context for multi-turn questions) (i.e., determining context) and may involve phrasing the question in the context of the relevant company user persona (i.e., a few-shot prompt including at least a portion of the context data)).

As to dependent Claim 5, Austin teaches all the limitations of Claim 1. Austin further teaches wherein one or more of the request and the prompt is processed by the GAI integration platform to mitigate presence of one or more of PII and profanity before transmitting the prompt to the GAI system ([0017]: sensitive personal information (e.g., name, age, phone number, social security number, etc.); [0043]: referring to FIG. 2B, query orchestration begins with an authenticated user/bot submitting a question; the platform examines the question and determines the type of (e.g., private) company documents that may be relevant for generating answers to the question; the context evaluation functionality 102e employs AI-based logic that is capable of understanding user questions and determining the path or pipeline for routing the question; [0046]: the user access & authorization management functionality 102a checks the authorization level of the authenticated user/bot; a prompt generation functionality 102g tailors or customizes one or more LLM prompts for the question; an LLM prompt may be directed to a particular generative AI LLM (i.e., GAI system); [0051]: when a question is asked, the context evaluation functionality 102e ("determines context") of the orchestration platform 102 evaluates the question and determines if the question contains sensitive information; if it does, that portion can be redacted at the outset or the question can be rejected with notification. Thus, the orchestration platform/GAI integration platform mitigates presence of one or more of PII and profanity before transmitting the prompt to the GAI system).

As to dependent Claim 7, Austin teaches all the limitations of Claim 1.
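The claim-5 idea of mitigating PII before a prompt leaves the platform can be illustrated with a minimal redaction pass. Real deployments use trained detectors; the regexes below are illustrative stand-ins and are not the mechanism Austin describes.

```python
# Hypothetical sketch: replace matched PII spans with placeholders before
# the prompt is transmitted to the GAI system. Patterns are illustrative.
import re

PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # e.g. 123-45-6789
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),   # e.g. 555-123-4567
}

def redact_pii(prompt: str) -> str:
    """Substitute a labeled placeholder for each detected PII span."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_pii("Employee SSN 123-45-6789, call 555-123-4567."))
# prints: Employee SSN [SSN REDACTED], call [PHONE REDACTED].
```

A production control tier would also handle the reject-with-notification path described in Austin [0051] rather than always redacting.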
Austin further teaches wherein a GAI model of the GAI system is fine-tuned based on enterprise data provided by the enterprise ([0027]: the orchestration platform facilitates usage of generative AI systems in a safe and democratized manner; the orchestration platform can safely "teach" LLM(s) at scale about a given enterprise such that enterprise users can safely ask questions and receive safe output responses; [0050]: the user may provide feedback regarding the answer, which the system can utilize in a reinforcement learning with human feedback (RLHF) process to improve future response accuracy; RLHF feedback may be captured by telemetry and curated as RLHF information for influencing a model's responses. Thus, the model is fine-tuned based on the enterprise users' feedback (i.e., enterprise data provided by the enterprise)).

Claims 8, 10-12, and 14 are system claims corresponding to method claims 1, 3-5, and 7, respectively, and are therefore rejected for the same reasons. Austin further teaches a system comprising: one or more processors ([0028]: a device comprising a processing system including a processor, and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations); and a computer-readable storage device coupled to the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations ([0028], id.).

Claims 15 and 17-19 are medium claims corresponding to method claims 1 and 3-5, respectively, and are therefore rejected for the same reasons.
Austin further teaches computer-readable storage media coupled to the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations ([0030] a non-transitory machine-readable medium comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 6, 9, 13, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Austin in view of Siebel et al. (US 2024/0202225 A1, hereinafter Siebel).

As to dependent Claim 2, Austin teaches all the limitations of Claim 1. Austin further teaches processing, through a control tier of the GAI integration platform, at least a portion of the request through a set of modules to generate a prompt that is responsive to the request ([0043] referring to FIG.
2B, query orchestration begins with an authenticated user/bot submitting a question; the platform examines the question and determines the type of (e.g., private) company documents that may be relevant for generating answers to the question; the context evaluation functionality 102e employs AI-based logic that is capable of understanding user questions and determining the path or pipeline for routing the question; [0046] the user access & authorization management functionality 102a checks the authorization level of the authenticated user/bot; a prompt generation functionality 102g tailors or customizes one or more LLM prompts for the question - thus, the generative AI orchestration platform processes the user query through the context evaluation functionality 102e, the user access & authorization management functionality 102a, the prompt generation functionality 102g, etc., to generate a prompt (i.e., the generative AI orchestration platform/GAI integration platform processes the user query through a control tier including a set of modules)).

However, Austin fails to expressly teach wherein the processing comprises populating a prompt template at least partially based on data provided in a payload of the request. In the same field of endeavor, Siebel teaches wherein the processing comprises populating a prompt template at least partially based on data provided in a payload of the request ([0138] input such as a request or query can be input in various natural forms for easy human interaction; [0110] the comprehension module 510 generates a prompt template for processing an initial input, a prompt template for processing iterative inputs, and another prompt template for the output result phase; prompt templates can be modified to generate prompts - thus, processing the input request/data provided in a payload of the request using the prompt template).
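As a purely illustrative sketch of the prompt-template idea that Siebel's paragraphs [0110] and [0138] are cited for, a stored template might be populated from fields carried in the request payload. The template wording and the payload field names below are invented for illustration; Siebel's actual templates are not in the record here:

```python
from string import Template

# Hypothetical stored prompt template; $-placeholders are filled from
# fields carried in the request payload.
PROMPT_TEMPLATE = Template(
    "You are an assistant for $department. Using only approved sources, "
    "answer: $query"
)

def populate_prompt(payload: dict) -> str:
    """Populate the template from data provided in the request payload."""
    return PROMPT_TEMPLATE.substitute(
        department=payload["department"], query=payload["query"]
    )

prompt = populate_prompt({"department": "Finance", "query": "Q3 revenue?"})
```

This is the claim limitation at issue: the prompt is generated not from scratch but by filling a pre-existing template with payload data.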
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein the processing comprises populating a prompt template at least partially based on data provided in a payload of the request, as taught by Siebel, into Austin. Doing so would be desirable because it would allow for efficiently processing a wide variety of inputs received from disparate data sources and returning results in a common data form (Siebel [0022]).

As to dependent Claim 6, Austin teaches all the limitations of Claim 1. Austin further teaches logging interaction data representative of requests from and responses to the application ([0043] referring to FIG. 2B, query orchestration begins with an authenticated user/bot submitting a question; the platform examines the question and determines the type of (e.g., private) company documents that may be relevant for generating answers to the question; [0049] the generated answer may then be checked against copyright, plagiarism, and/or company ethical/bias policies and subsequently provided to the user; [0050] as shown in FIG. 2B, the user may provide feedback regarding the answer, which the system can utilize, in a reinforcement learning with human feedback (RLHF) process, to improve future response accuracy - thus, logging interaction data representative of requests and answers). Austin does not explicitly disclose providing one or more dashboards that graphically depict at least a portion of the interaction data. However, Austin discloses that, as shown in FIG. 2B, the user may provide feedback regarding the answer (see [0050]), which implies graphically depicting the interaction data.
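For illustration of the interaction-logging limitation discussed above (Austin [0043], [0049]-[0050]): each request/response pair would be recorded so that a dashboard could later aggregate it. The storage format, field names, and summary metric below are assumptions, not disclosures from either reference:

```python
import time

# Hypothetical in-memory interaction log; a real platform would persist this.
interaction_log: list[dict] = []

def log_interaction(request: str, response: str) -> dict:
    """Record one request/response pair as interaction data."""
    entry = {
        "timestamp": time.time(),
        "request": request,
        "response": response,
    }
    interaction_log.append(entry)
    return entry

def dashboard_summary() -> dict:
    """Aggregate the log into a figure a dashboard could graphically depict."""
    return {"total_interactions": len(interaction_log)}

log_interaction("What is our leave policy?", "20 days per year.")
```

The distinction the rejection turns on is that logging (the first function) is taught by Austin, while graphically depicting the aggregate (the dashboard) is supplied by Siebel.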
Alternatively, in the same field of endeavor, Siebel teaches providing one or more dashboards that graphically depict at least a portion of the interaction data ([0130] the model optimization module 526 may tune the comprehension module 510 and/or orchestrator module 504 (and/or models thereof) based on tracking user interactions within systems, capturing explicit feedback through a training user interface, implicit feedback, and/or the like - thus, the user interface/dashboard displays the interaction data and captures feedback). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated providing one or more dashboards that graphically depict at least a portion of the interaction data, as taught by Siebel, into Austin. Doing so would be desirable because using the feedback would improve the accuracy and/or reliability of the system (Siebel [0171]).

Claims 9 and 13 are system claims corresponding to method claims 2 and 6, respectively, and are therefore rejected for the same reasons. Claims 16 and 20 are medium claims corresponding to method claims 2 and 6, respectively, and are therefore rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 CFR § 1.111(c) to consider these references fully when responding to this action. Tebbe (US 2015/0066788 A1) teaches: receiving a request from a business system to interact with a social media system executing on a social media platform; integrating, using the social media integration platform, the request between the business system and the social media system by translating the request from the business system to a social media request compliant with the social media platform; and sending the social media request to the social media system (see [0003]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REJI KARTHOLY, whose telephone number is (571) 272-3432. The examiner can normally be reached Monday - Thursday from 7:30 am to 3:30 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/REJI KARTHOLY/
Primary Examiner, Art Unit 2143

Prosecution Timeline

Sep 07, 2023
Application Filed
Mar 18, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585963
METHOD AND DEVICE FOR LEARNING A STRATEGY AND FOR IMPLEMENTING THE STRATEGY
2y 5m to grant Granted Mar 24, 2026
Patent 12585988
SYSTEMS AND METHODS FOR GENERATING AND APPLYING A SECURE STATISTICAL CLASSIFIER
2y 5m to grant Granted Mar 24, 2026
Patent 12572395
Method and Devices for Latency Compensation
2y 5m to grant Granted Mar 10, 2026
Patent 12572846
SYSTEM AND METHOD FOR DEVICE ATTRIBUTE IDENTIFICATION BASED ON HOST CONFIGURATION PROTOCOLS
2y 5m to grant Granted Mar 10, 2026
Patent 12569702
RADIOTHERAPY METHODS, SYSTEMS, AND WORKFLOW-ORIENTED GRAPHICAL USER INTERFACES
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+71.8%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
