Prosecution Insights
Last updated: April 19, 2026
Application No. 18/581,439

Computer-Implemented Methods and Computer Systems for Artificial Intelligence (AI) Based Automated Provision of Management Consulting

Final Rejection — §101, §103
Filed
Feb 20, 2024
Examiner
ALSTON, FRANK MAURICE
Art Unit
3625
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Proai Inc.
OA Round
2 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants 0% of cases; 0 granted / 16 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Typical timeline: 3y 0m average prosecution; 32 currently pending
Career history: 48 total applications across all art units

Statute-Specific Performance

§101: 40.6% (+0.6% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 2.6% (-37.4% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 16 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This is a Final Action on the merits in response to the reply filed on 11/21/2025. Claims 2-20 and 22-28 are cancelled. Claims 1 and 21 are pending in this application.

Examiner's Response to Remarks

Examiner's Response to Claim Rejections under 35 U.S.C. § 101

Applicant respectfully traverses the rejection of claims 1-28 under 35 U.S.C. § 101, arguing that amended claims 1 and 21 are directed to specific technological improvements in the operation and training of Artificial Intelligence (AI) systems and therefore are not directed to an abstract idea. Examiner respectfully disagrees. Applicant's claims are not directed to a specific technological improvement in the operation and training of Artificial Intelligence systems. Claims 1-28, including amended claims 1 and 21, are directed to an abstract idea and recite certain methods of organizing human activity.
For instance, claim 1 recites dividing the provision of management consulting into a plurality of tasks and assigning the tasks; identifying a logic model; distilling, including a manual intervention stage that edits or approves intermediate rationale steps; providing characteristic reference data; enabling exchanges of data; and generating a plurality of deliverables in a plurality of user-readable formats, the plurality of deliverables collectively constituting management consulting. These are commercial interactions that involve human interaction with a computer.

Claim 1 does not integrate the judicial exception into a practical application, nor do the limitations use the judicial exception in some other meaningful way beyond linking the judicial exception to a technological environment. The additional elements recited are merely generic computer components performing generic computer functions. Even though Applicant has amended claims 1 and 21 to recite "at least one of said AI agents is customized by a fine-tuning process comprising: identifying a logic model for implementation of an inner monologue, said logic model selected from the group consisting of Chain-of-Thought prompting and Tree-of-Thought prompting; implementing the inner monologue to the identified AI agent; and fine-tuning the inner monologue by distilling a step-by-step method using a custom-generated code segment to train said AI agent for a rationale generation task in addition to a label prediction task," the amendments do not take claim 1 out of certain methods of organizing human activity and do not add significantly more to provide for an inventive concept.
Applicant's claim 1 merely links the judicial exception to a technological environment to resolve a business problem. Claim 21 is substantially similar, recites the same subject matter as claim 1, and recites the same abstract idea. Accordingly, claims 1-28, including amended claims 1 and 21, are rejected under 35 U.S.C. § 101.

Examiner's Response to Claim Rejections under 35 U.S.C. § 103

Applicant respectfully traverses the rejection that amended independent claims 1 and 21 are obvious over the cited prior art, arguing that the cited combination fails to teach or suggest several key limitations of the presently amended independent claims, particularly those directed to (i) identifying and implementing a logic model for an inner monologue (Chain-of-Thought or Tree-of-Thought), and (ii) fine-tuning the AI agent by distilling a step-by-step method to train for a rationale-generation task in addition to a label-prediction task, including a manual intervention stage as now expressly recited in claims 1 and 21. Examiner respectfully disagrees. Applicant has amended independent claims 1 and 21. A new search was necessitated by the amendments to the independent claims, and new art has been applied to the amended claims. Accordingly, claims 1-28 remain rejected under 35 U.S.C. § 103.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1 and 21 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 and 21 recite: dividing the provision of management consulting into a plurality of tasks, the plurality of tasks defined as a plurality of respective workflows; assigning a plurality of tasks; identifying a logic model; distilling, including a manual intervention stage that edits or approves intermediate rationale steps; providing characteristic reference data; enabling exchanges of data; and generating a plurality of deliverables in a plurality of user-readable formats, the plurality of deliverables collectively constituting management consulting.

Under its broadest reasonable interpretation, claim 1 recites certain methods of organizing human activity. In particular, the claim recites commercial interactions involving the management of interactions between a human and a computer: dividing the provision of management consulting into a plurality of tasks defined as respective workflows; assigning the tasks; identifying a logic model; distilling with a manual intervention stage that edits or approves intermediate rationale steps; providing characteristic reference data; enabling exchanges of data; and generating deliverables in user-readable formats that collectively constitute management consulting. Accordingly, claim 1 recites the abstract idea of certain methods of organizing human activity. These judicial exceptions are not integrated into a practical application.
Claim 1 recites the additional elements of artificial intelligence agents, interfaces, data sources, a processor, and "at least one of said AI agents is customized by a fine-tuning process comprising: identifying a logic model for implementation of an inner monologue, said logic model selected from the group consisting of Chain-of-Thought prompting and Tree-of-Thought prompting; implementing the inner monologue to the identified AI agent; and fine-tuning the inner monologue by distilling a step-by-step method using a custom-generated code segment to train said AI agent for a rationale generation task in addition to a label prediction task."

Claim 21 recites substantially similar subject matter as claim 1, and also includes the additional elements of a computer system, a memory unit, a processor, data sources, interfaces, and the same fine-tuning process limitation recited above for claim 1.
The additional elements of artificial intelligence agents, interfaces, data sources, a memory unit, a processor, a computer system, and the fine-tuning process recited above are considered generic computer components ("a group of several connected devices") performing generic computer functions, per Applicant's Specification:

"[0064] In the context of the specification, the phrase 'communication network' refers to a group of several connected devices including computing devices (such as desktops, mobile handheld devices, tablet PCs, notebooks, etc.), local and remotely located servers (such as web servers, application servers, database servers, Application Program Interface (API) servers, load balancers, compute nodes, and the like), routers, antennas, modems, multiplexers, demultiplexers, and the like. In that regard, the aforementioned connected devices may be able to exchange data signals through wired and/or wireless means as per several combinations of several different communication protocols such as 802.11 (Wi-Fi), 802.3 (Ethernet), Bluetooth, NFC, ZigBee and 3GPP protocols such as HSPA, HSDPA, LTE, GSM, CDMA, WLL and the like."

These elements are thus not practically integrated and do not amount to significantly more. The additional limitations are no more than mere instructions to apply the exception using generic computer components (e.g., a processor). The claims do not include additional elements sufficient to amount to significantly more than the judicial exception.
The combination of these additional elements is no more than mere instructions to apply the exception using generic computer components (e.g., a processor). The additional elements therefore do not integrate the abstract idea into a practical application, because they do not impose meaningful limits on practicing the idea and amount to no more than instructions that merely link the use of a judicial exception to a particular technological environment or field of use. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Considered individually and as an ordered combination, these limitations add nothing sufficient to amount to significantly more than the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Therefore, claims 1 and 21 are not patent eligible.

Claim Rejections – 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: determining the scope and contents of the prior art; ascertaining the differences between the prior art and the claims at issue; resolving the level of ordinary skill in the pertinent art; and considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Siebel, Thomas M. et al. (AU Publication No. 2022297419), hereinafter "Siebel," in view of Khyatti, Mohamed Reda (U.S. Publication No. 2025/0356218), hereinafter "Khyatti," further in view of Delaflor, Manuel et al., "Reactin: The role of human feedback in reason-act prompting strategies with language models" (2022), hereinafter "Delaflor."
Claims 1 and 21: "A computer-implemented method for Artificial Intelligence (AI) based provision of management consulting, the computer-implemented method comprising: dividing, by a processor, the provision of management consulting into a plurality of tasks, the plurality of tasks defined as a plurality of respective workflows;"

Siebel teaches in ¶ 0034 that the method may further include using the one or more processors to perform a CRM engine function. Siebel teaches in ¶ 0111 that the architecture 220 includes a number of functions supporting AI-based CRM; in this example, these functions are divided roughly into two groups, namely AI-based CRM functions 222 and supporting or management functions 224, where the AI-based CRM functions 222 represent or involve the use of trained machine learning models and other AI-based functionality to implement AI-based CRM. Siebel teaches in ¶ 0129 that an input reader distributes tasks to worker nodes to perform reduce functions, and the output of each map task is partitioned into a group of key-value pairs for each reduce, where "partitioned" may be likened to dividing the provision. Siebel teaches in ¶ 0176 that functions can be added, omitted, combined, further subdivided, replicated, or placed in any other suitable configuration in the architecture 220, the modular services component 250, and the machine learning platform system.

"providing, by the processor, the plurality of AI agents with characteristic reference data obtained from a plurality of data sources;"

Siebel teaches in ¶ 0256 external data such as news, financial, and social media information, where external data may be likened to a plurality of data sources.

"providing, by the processor, a plurality of interfaces to the plurality of AI agents for enabling exchanges of data amongst the plurality of AI agents;"

Siebel teaches in ¶ 0360 interfaces; Figs. 17A through 27 illustrate example user interfaces supporting AI-based CRM according to this disclosure. The user interfaces may, for example, be generated by or for the various architectures described above using one or more devices 200 of Fig. 2A, the architecture 220 of Fig. 2B, the modular services component 250 of Fig. 2C, and/or the machine learning platform system 260 of Fig. 2D. In some cases, the user interfaces may be presented on one or more of the user devices 102a-102n of Fig. 1. However, the architectures may generate or be used with any other suitable user interfaces, and the user interfaces may be presented on any other suitable device(s) in any other suitable system(s). Also, multiple user interfaces are described below, and each user interface may be used individually or in combination with any other user interface(s) described below in any suitable combination.

"and generating, by the processor, a plurality of deliverables in a plurality of user-readable formats, the plurality of deliverables collectively constituting management consulting;"

Siebel teaches in ¶ 0426 that various data sources provide data used by a number of AI-based CRM functions, including revenue forecasting, which can accurately forecast revenue with machine learning to identify risks and opportunities, explain drivers, coach users how to address them, and help with financial planning; and product forecasting, which can integrate market data, retail forecasting capabilities, sales forecasting, and other AI functionalities to provide forward-looking views of the company's supply and demand in order to plan operations and ensure that customer needs are met. Revenue forecasting and product forecasting may be likened to a plurality of deliverables, and coaching users and helping with financial planning may be likened to collectively constituting management consulting.

While Siebel teaches AI-based CRM functions, interfaces supporting AI-based CRM, external data, tasks partitioned into a group, a method of AI-based CRM using a model-driven architecture, and a customer service agent, Siebel does not explicitly teach one or more artificial intelligence agents and chain of thought. However, Khyatti teaches the following:

"assigning, by the processor, a plurality of Artificial Intelligence (AI) agents to the plurality of tasks with each one of the plurality of tasks assigned at least one AI agent;"

Khyatti teaches in ¶ 0003 that an artificial intelligence (AI) or machine learning model can enable language understanding and generation; a machine learning model can be directed to perform a variety of tasks by inputting a prompt into the machine learning model, where the prompt may specify particular parameters for the machine learning model to follow in completing the task.

"wherein at least one of said AI agents is customized by a fine-tuning process comprising: identifying a logic model for implementation of an inner monologue, said logic model selected from the group consisting of Chain-of-Thought prompting and Tree-of-Thought prompting;"

Khyatti teaches in ¶ 0053 that a machine learning model can be directed to perform a variety of tasks by inputting a prompt into the machine learning model; the prompt may specify particular parameters for the machine learning model to follow in completing the task, and in some instances the prompt may ask the machine learning model to show results step-by-step in a chain-of-thought (CoT), in which each step to be performed in executing the task is shown as an instruction by the machine learning model.
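As an editorial illustration only, the step-by-step Chain-of-Thought prompting that Khyatti's ¶ 0053 describes can be sketched as below. This is a hypothetical sketch, not code from the application or the cited references; the prompt wording and the names `build_cot_prompt` and `parse_cot_output` are invented for the example, and a real LLM client would stand where the hand-written completion is used.

```python
def build_cot_prompt(task: str) -> str:
    """Wrap a task description in a step-by-step (chain-of-thought) instruction."""
    return (
        f"Task: {task}\n"
        "Think through the task step by step.\n"
        "Number each reasoning step as 'Step N:', then give the final "
        "answer on a line starting with 'Answer:'."
    )

def parse_cot_output(text: str) -> tuple[list[str], str]:
    """Split a CoT-formatted completion into rationale steps and the final label."""
    steps, answer = [], ""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Step"):
            steps.append(line)           # intermediate rationale step
        elif line.startswith("Answer:"):
            answer = line.removeprefix("Answer:").strip()  # final label
    return steps, answer

# A hand-written completion standing in for a model response:
completion = (
    "Step 1: Segment the market.\n"
    "Step 2: Rank segments.\n"
    "Answer: Focus on segment B."
)
steps, answer = parse_cot_output(completion)
```

The point of the sketch is only that CoT prompting produces rationale steps alongside the label, which is what distinguishes a rationale-generation task from a bare label-prediction task.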
"implementing the inner monologue to the identified AI agent;"

Khyatti teaches in ¶ 0054 that the described CoT meta-prompting systems and methods enable automatic and recursive generation of a series of sequential "agents" (configured to execute agents) as a chain-of-thought (CoT) for a given task using a machine learning model.

"and fine-tuning the inner monologue by distilling a step-by-step method using a custom-generated code segment to train said AI agent for a rationale generation task in addition to a label prediction task;"

Khyatti teaches in ¶ 0206 that the particular machine learning model used, such as GPT, may be fine-tuned with feedback for retraining with the present CoT meta-prompting system. The CoT builder may be used to create CoTs for many tasks. Examples of CoTs, such as two hundred examples, may be stored and fed back to the machine learning model for retraining. A fine-tuned model may be created with a stored CoT dataset, and the training may include providing a "prompt" and a "completion." Khyatti teaches in ¶ 0207 that the format of the CoT dataset may be similar to the format provided by GPT when using the CoT meta-prompt, such as JSON format. For example, a prompt that recites "Create instruction for the task X" may result in a JSON output representing the CoT. The updated model may be used to output an improved prompt for creating further CoT chains for a specific task. The fine-tuned machine learning model could provide better CoT JSON when prompted, e.g., with "Create instruction" as the task definition. The machine learning model can be repeatedly fine-tuned with new tasks with added operation type handlers on the backend.
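The prompt/completion CoT dataset format that Khyatti's ¶¶ 0206-0207 describes might look roughly like the following record-building sketch. The record layout, the helper name `make_finetune_record`, and the example task are assumptions for illustration; the cited reference states only that training pairs a prompt with a JSON-formatted completion.

```python
import json

def make_finetune_record(task: str, rationale_steps: list[str], label: str) -> str:
    """Build one JSONL training line pairing a prompt with a completion that
    carries both the rationale steps and the final label, so a fine-tuned
    model is trained on rationale generation as well as label prediction."""
    completion = {"rationale": rationale_steps, "label": label}
    record = {
        "prompt": f"Create instruction for the task: {task}",
        "completion": json.dumps(completion),
    }
    return json.dumps(record)  # one JSONL line per training example

line = make_finetune_record(
    "classify churn risk",                                        # hypothetical task
    ["Step 1: Inspect usage trend.", "Step 2: Compare to cohort baseline."],
    "high-risk",
)
```

Collecting a few hundred such lines into a file is the kind of "stored CoT dataset" the reference contemplates feeding back for retraining.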
"wherein the distilling includes a manual intervention stage that edits or approves intermediate rationale steps during fine-tuning using the custom-generated code segment;"

Khyatti teaches in ¶ 0208 that the present CoT meta-prompting system facilitates automated task execution using machine learning models and encompasses many benefits over current systems. For example, the CoT meta-prompting system is not restricted to the output of an agent being the input of the directly next agent, but instead allows an output of any previous agent to be used as input at any agent. In addition, the CoT meta-prompting system introduces the use of a data-exchange format which allows LLMs to have a consistent and reliable function signature, eliminating the risk of unintentional output type changes breaking chains. Furthermore, the CoT meta-prompting system allows users to bypass the need to pre-create prompts, as the chain-of-thought meta-prompt outputs agents of the desired types. Each agent type may use a generic prompt for that specific agent, drastically reducing the amount of time users need to invest in the design of a CoT chain. In addition, the CoT meta-prompting system provides the CoT meta-prompt, which allows users to quickly break a task down into smaller subtasks and create a chain from those, making it easier for users to decompose a node into more nodes as the CoT meta-prompt handles the prompting for the user.

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the method of Siebel, which includes curating CRM data by employing a type system of a model-driven architecture and selecting an AI CRM application from a group of applications, with the method, apparatus, and system of Khyatti for performing a task using a chain-of-thought model, to assist businesses with implementing AI agent chain-of-thought prompting to complete a task (Khyatti, Spec. ¶ 0055).
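As a rough illustration of the claimed manual intervention stage (a human editing or approving intermediate rationale steps before they enter the distillation dataset), a human-in-the-loop filter could be sketched as below. The function `review_steps` and the reviewer callback are hypothetical and not drawn from the claims or the cited art; in practice a reviewer would be a person working through a review UI rather than a lambda.

```python
def review_steps(steps, reviewer):
    """Apply a reviewer callback to each rationale step.

    The reviewer returns None to reject a step, or a (possibly edited)
    string to approve it. Only approved steps survive into the
    fine-tuning dataset."""
    approved = []
    for step in steps:
        verdict = reviewer(step)
        if verdict is not None:
            approved.append(verdict)
    return approved

# Example reviewer that rejects empty steps and trims stray whitespace:
reviewer = lambda s: s.strip() or None
clean = review_steps(
    ["  Step 1: gather data ", "", "Step 2: rank options"],
    reviewer,
)
```

The design point is that the reviewer sits between rationale generation and training, so the distilled dataset contains only human-approved intermediate steps.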
While Siebel teaches AI-based CRM functions, interfaces supporting AI-based CRM, external data, tasks partitioned into a group, a method of AI-based CRM using a model-driven architecture, and a customer service agent, and Khyatti teaches one or more artificial intelligence agents and chain of thought, and although Siebel and Khyatti relate to Delaflor through providing feedback with artificial intelligence use, neither Siebel nor Khyatti explicitly teaches Tree-of-Thought. However, Delaflor teaches the following:

"Tree-of-Thought prompting;"

Delaflor teaches in Related Work, ¶ 2, that Tree-of-Thought prompting leverages hierarchical reasoning to break down complex prompts. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the method of Siebel, which includes curating CRM data by employing a type system of a model-driven architecture and selecting an AI CRM application from a group of applications, and the method, apparatus, and system of Khyatti for performing a task using a chain-of-thought model, with the framework of Delaflor, designed to infuse human feedback into the intermediate prompting steps of large language models, to assist businesses with implementing AI systems with tree-of-thought reasoning steps (Delaflor, System Description ¶ 2).

Conclusion

The prior art made of record and not relied upon is considered relevant but not applied:
- Qadrud-Din et al. (U.S. Patent No. 11,995,411) discloses techniques and mechanisms for the automated evaluation of text against criteria specified in natural language.
- Das, Subhodev et al. (U.S. Patent Publication No. 2023/0394413) discloses techniques for Artificial Intelligence (AI) models that can automatically generate diverse, explainable, interpretable, reactive, and coordinated behaviors for a team.
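Tree-of-Thought prompting, as Delaflor characterizes it (hierarchical reasoning that breaks down complex prompts), can be sketched as a search over candidate reasoning branches rather than a single linear chain. The greedy expand-and-score loop below is a minimal toy under stated assumptions, not the method of any cited reference; `tree_of_thought`, `expand`, and `score` are names invented for the sketch, and a real system would use model-generated proposals and evaluations.

```python
def tree_of_thought(root: str, expand, score, depth: int = 2, width: int = 3):
    """Greedy width-limited search over a tree of reasoning steps.

    expand(thought) proposes candidate next thoughts; score(thought)
    evaluates them; the best branch is followed at each level."""
    path = [root]
    frontier = root
    for _ in range(depth):
        candidates = expand(frontier)[:width]  # propose up to `width` next thoughts
        if not candidates:
            break
        frontier = max(candidates, key=score)  # keep the best-scoring branch
        path.append(frontier)
    return path

# Toy expansion appends a letter; toy score prefers the lexicographically
# last candidate — stand-ins for model-driven proposal and evaluation.
path = tree_of_thought("plan", lambda t: [t + x for x in "abc"], lambda t: t)
```

The contrast with the Chain-of-Thought sketch earlier is only structural: CoT commits to one sequence of steps, while ToT branches and prunes among alternatives.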
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Frank Alston, whose telephone number is 703-756-4510. The Examiner can normally be reached 9:00 AM - 5:00 PM, Monday - Friday, and via fax at 571-483-7338. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Beth Boswell, can be reached at (571) 272-6737.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/FRANK MAURICE ALSTON/
Examiner, Art Unit 3625
12/23/2025

/BETH V BOSWELL/
Supervisory Patent Examiner, Art Unit 3625

Prosecution Timeline

Feb 20, 2024
Application Filed
Aug 23, 2025
Non-Final Rejection — §101, §103
Nov 21, 2025
Response Filed
Dec 23, 2025
Final Rejection — §101, §103 (current)

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
