Prosecution Insights
Last updated: April 19, 2026
Application No. 19/000,302

Organizations as Dissipative Structures Utilizing Cooperative Games to Dynamically Align Value, Strategy and Operations within a Probabilistic Framework

Non-Final OA (§101, §103)
Filed
Dec 23, 2024
Examiner
PUJOLS-CRUZ, MARJORIE
Art Unit
3624
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Value-Driven Strategic Consulting LLC
OA Round
1 (Non-Final)
Grant Probability: 18% (At Risk)
OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 18% (25 granted / 136 resolved; -33.6% vs TC avg)
Interview Lift: +27.9% across resolved cases with interview
Avg Prosecution: 3y 2m (50 applications currently pending)
Total Applications: 186 across all art units

Statute-Specific Performance

§101: 38.7% (-1.3% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Deltas compare to the Tech Center average estimate • Based on career data from 136 resolved cases
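The headline figures above are consistent with the raw counts reported: the career allow rate follows from the granted/resolved counts, and the with-interview probability is the baseline plus the reported lift. A quick sketch recomputing them (variable names are illustrative, not from any real analytics API):

```python
# Recompute the dashboard's examiner statistics from the raw counts shown above.
granted, resolved = 25, 136
career_allow_rate = granted / resolved   # 25/136 ~= 0.184, reported as 18%

baseline_allow = 0.18                    # grant probability without interview
interview_lift = 0.279                   # reported "+27.9% Interview Lift"
with_interview = baseline_allow + interview_lift  # ~0.459, reported as 46%

print(f"career allow rate: {career_allow_rate:.1%}")
print(f"with interview:    {with_interview:.0%}")
```

The small mismatch between "Strong +28% interview lift" and "+27.9%" is just rounding of the same statistic.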

Office Action

§101 §103
DETAILED ACTION

This communication is a Non-Final Office Action rejection on the merits. Claims 1-20 are currently pending and have been addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 121 as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994). The disclosure of the prior-filed parent application, provisional application No. 63/613,269 filed on 12/21/2023, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. The specification of the instant application contains new matter in at least Paragraphs 0220, 0222, 0233, and 0466. Claims 1, 12, and 16 include limitations not supported in the parent applications, including dynamically performing interviews with stakeholders using common language models as part of a cooperative game to gather disparate stakeholder insights.
Claims 2-11, 13-15, and 17-20 are directed to further describing how the questions are refined based on previous stakeholder responses, which were not described in the parent provisional application No. 63/613,269 to which the instant application claims priority. Because the claims are not supported under 35 U.S.C. 112, first paragraph, the priority date for the claim limitations of the instant application will be the effective filing date of the instant application, which is 12/23/2024.

Information Disclosure Statement (IDS)

The information disclosure statement filed on 01/14/2026 complies with the provisions of 37 CFR 1.97 and 1.98 and MPEP 609 and has been considered by the Examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitations are: a data input module via a user interface module used to receive project-specific user inputs for respective projects of a plurality of projects associated with an organization; and an evaluation module used to determine a plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization in claims 16-18, and 20. Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof. If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Independent Claim 1

Step One - First, pursuant to step 1 in the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 1 is directed to a method, which is a statutory category.

Step 2A, Prong One - Claim 1 recites: A method for organizational transformation from a current state to a target state, comprising: dynamically performing interviews with stakeholders as part of a cooperative game to gather disparate stakeholder insights; defining the target state, projects, milestones, tasks, and resource use/availability based on the gathered insights; modeling the organization as a dissipative system to calculate an organizational entropy score; identifying possible task completion pathways between the current state and the target state; identifying an optimal project completion path to determine the magnitude of contribution to organizational transformation towards the target state for each project; assessing the likelihood of successful project completion for each project; generating project completion resource allocation plans based on the optimal project completion path; and calculating based on performance measured using micro-behaviors analysis. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity,” which include “managing interactions between people.” In this case, aligning stakeholder insights to a project is a social activity.
If a claim limitation, under its broadest reasonable interpretation, covers managing interactions between people, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two - The judicial exception is not integrated into a practical application. Claim 1 includes additional elements: a Language Model; a Lookalike Model; a Markov Model; a Decision Tree such as a Fault Tree Model; and Bayesian Priors. The Language Model is merely used to dynamically perform interviews with stakeholders as part of a cooperative game to use disparate stakeholder insights to define the target state, projects, milestones, tasks, and resource use/availability (Paragraph 0006). The Lookalike Model is merely used to categorize the organization into an organizational cohort of similar organizations for which proxy organizational profiles are available (Paragraph 0454). The Markov Model is merely used to identify possible task completion pathways between the current and target state (Paragraph 0006). The Decision Tree, such as a Fault Tree Model, is merely used to identify the magnitude of contribution to organizational transformation towards the target state for each project and the likelihood of successful project completion for each project (Paragraph 0006). The Bayesian Priors are merely used to represent prior beliefs or knowledge about a parameter, expressed as a probability distribution, which is updated with new data through the Bayesian inference process (Paragraph 0389). Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). These elements of “Language Model,” “Lookalike Model,” “Fault Tree Model,” and “Bayesian Priors” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element.
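The "Markov Model ... to identify possible task completion pathways between the current and target state" element the Office Action summarizes is a standard technique: states and transition probabilities define a graph, and candidate pathways are enumerated with their probabilities. A minimal sketch of that general idea (the states, transitions, and probabilities below are invented for illustration and are not taken from the application):

```python
# Illustrative Markov-chain pathway enumeration: each state maps to its
# successor states with transition probabilities summing to 1.
transitions = {
    "current": {"plan": 0.7, "pilot": 0.3},
    "plan":    {"execute": 1.0},
    "pilot":   {"execute": 0.6, "plan": 0.4},
    "execute": {"target": 1.0},
    "target":  {},
}

def pathways(state, goal, path=(), prob=1.0):
    """Yield every acyclic pathway from state to goal with its probability."""
    path = path + (state,)
    if state == goal:
        yield path, prob
        return
    for nxt, p in transitions[state].items():
        if nxt not in path:  # skip already-visited states to avoid cycles
            yield from pathways(nxt, goal, path, prob * p)

# Rank candidate "task completion pathways" by probability.
paths = sorted(pathways("current", "target"), key=lambda x: -x[1])
```

Here the most probable pathway is current -> plan -> execute -> target, and an "optimal project completion path" step would simply select the top-ranked entry.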
Also, the Language Model is considered “field of use” since it’s just used to receive stakeholder insights for an analysis, but the model is not improved (MPEP 2106.05(h)). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of aligning stakeholder insights to a project. The specification shows that the Language Model is merely used to dynamically perform interviews with stakeholders as part of a cooperative game to use disparate stakeholder insights to define the target state, projects, milestones, tasks, and resource use/availability (Paragraph 0006). The Lookalike Model is merely used to categorize the organization into an organizational cohort of similar organizations for which proxy organizational profiles are available (Paragraph 0454). The Markov Model is merely used to identify possible task completion pathways between the current and target state (Paragraph 0006). The Decision Tree, such as a Fault Tree Model, is merely used to identify the magnitude of contribution to organizational transformation towards the target state for each project and the likelihood of successful project completion for each project (Paragraph 0006). The Bayesian Priors are merely used to represent prior beliefs or knowledge about a parameter, expressed as a probability distribution, which is updated with new data through the Bayesian inference process (Paragraph 0389).
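The Office Action characterizes the claimed Bayesian Priors as "prior beliefs or knowledge about a parameter, expressed as a probability distribution, which is updated with new data through the Bayesian inference process." The textbook form of that mechanism, cited by the examiner as well known, is a conjugate update; a minimal sketch using a Beta-Binomial prior on project-success probability (function and parameter names are hypothetical, not from the application):

```python
# Illustrative conjugate Beta-Binomial update: the generic "prior updated with
# new data through Bayesian inference" mechanism, not the applicant's method.

def update_success_belief(alpha: float, beta: float,
                          successes: int, failures: int) -> tuple[float, float]:
    """Update a Beta(alpha, beta) prior on project-success probability with
    observed outcomes; the Beta posterior just adds the new counts."""
    return alpha + successes, beta + failures

# Weak prior belief that projects succeed about half the time.
alpha, beta = 2.0, 2.0
# Observe 7 completed projects: 5 succeeded, 2 failed.
alpha, beta = update_success_belief(alpha, beta, successes=5, failures=2)
posterior_mean = alpha / (alpha + beta)  # 7/11, roughly 0.64
```

Each new batch of project outcomes recalibrates the belief, which matches the dependent-claim language about priors being "continuously recalibrated based on ongoing performance metrics and feedback."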
In this case, the claim does not provide any details about how the Language Model operates (see MPEP 2106.05(f); no description of how the plurality of questions is generated in order to gather stakeholder insights; and 2024 AI Guidance, Example 47, claim 2). Also, the step of “performing an interview to gather stakeholder insights” is considered a well-understood, routine, and conventional function since it's just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Lastly, the other models (e.g., Markov, Fault Tree, and Bayesian) are “well known” in the art (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Independent Claim 12

Independent claim 12 is directed to a system at Step One, which is a statutory category. Claim 12 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 12 further recites: a user interface displayed on the one or more devices, which is merely used to receive response information to specific questions into response fields (Paragraphs 0449-0450). At Step 2A, Prong Two, this is still considered “field of use” since it’s just used to provide an assessment and receive a response, but the user interface is not improved (MPEP 2106.05(h)). At Step 2B, this is considered a conventional computer function of “receiving and transmitting over a network” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Independent Claim 16

Step One - First, pursuant to step 1 in the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 16 is directed to a method, which is a statutory category.
Step 2A, Prong One - Claim 16 recites: A method comprising: receiving, in response to one or more responsible user interviews, project-specific user inputs for respective projects of a plurality of projects associated with an organization, wherein the project-specific user inputs comprise estimated material costs associated with the respective project, current actual material costs associated with the respective project, estimated labor costs associated with the respective project, current actual labor costs expended during execution of the respective project, estimated project timeline for the respective project, a project start date for the respective project, and a current project progress metric associated with the respective project; determining, based at least upon the project-specific user inputs for the respective projects of the plurality of projects associated with the organization, a plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization; determining, a current state for the respective projects of the plurality of projects associated with the organization; determining, using the evaluation module of the value attribution framework, a desired future state for the respective projects of the plurality of projects associated with the organization; predicting, based at least on the plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization, a plurality of project-specific outputs, wherein respective project-specific outputs are associated with the respective projects of the plurality of projects associated with the organization; and providing an organizational output based upon the plurality of project-specific outputs. 
These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity,” which include “managing interactions between people.” In this case, providing an organizational output based on actuals is a social activity. If a claim limitation, under its broadest reasonable interpretation, covers managing interactions between people, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two - The judicial exception is not integrated into a practical application. Claim 16 includes additional elements: a data input module; a user interface module; an evaluation module; and an analytical model. The data input module via a user interface module is merely used to receive project-specific user inputs for respective projects of a plurality of projects associated with an organization (Paragraph 0496). The evaluation module is merely used to determine a plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization (Paragraph 0496). The analytical model is merely used to predict, based at least on the plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization, a plurality of project-specific outputs, wherein respective project-specific outputs are associated with the respective projects of the plurality of projects associated with the organization (Paragraph 0496). Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). These elements of “user interface,” “evaluation module,” and “analytical model” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element.
Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of providing an organizational output based on actuals. The specification shows that the data input module via a user interface module is merely used to receive project-specific user inputs for respective projects of a plurality of projects associated with an organization (Paragraph 0496). The evaluation module is merely used to determine a plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization (Paragraph 0496). The analytical model is merely used to predict, based at least on the plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization, a plurality of project-specific outputs, wherein respective project-specific outputs are associated with the respective projects of the plurality of projects associated with the organization (Paragraph 0496). Also, the step of “generating an output based on actuals” is considered a well-understood, routine, and conventional function since it's just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Dependent claims 2 and 7 are not directed to any additional claim elements.
Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as: wherein the common language models are configured to adaptively refine interview questions based on stakeholder responses; and analyzing data collected from stakeholder interviews to identify patterns and insights relevant to organizational transformation. Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)), applicable at both Step 2A, Prong Two and Step 2B. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. In this case, the claim does not provide any specific details about how the Language Model operates (see MPEP 2106.05(f); no description of how the plurality of questions is refined; and 2024 AI Guidance, Example 47, claim 2). Further, the step of “adaptively refine interview questions” is considered a well-understood, routine, and conventional function since it's just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Dependent claims 3-6 and 8-11 are not directed to any additional claim elements.
Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above - such as: wherein the Lookalike Models are further configured to simulate various organizational scenarios to predict potential outcomes; wherein the Decision Tree Models incorporate project value data to evaluate and compare project contributions to overall organizational transformation towards the target state; wherein the Fault Tree Models are used to identify and mitigate potential risks associated with project completion; wherein the Bayesian Priors are continuously recalibrated based on ongoing performance metrics and feedback; wherein the optimal project completion path through the Markov model is dynamically recalculated based on real-time data and changes in project variables; using organizational historic project data to inform the Markov model and improve the accuracy of task completion pathway predictions; wherein financial data of the organization is utilized to generate Bayesian Priors, enhancing the precision of resource allocation and project planning; and wherein Bayesian Priors are generated by integrating historical project performance data and financial metrics to predict future project outcomes and resource needs. In this case, the main functions are merely used to: collect data (e.g., project value data); analyze the data (e.g., simulate and evaluate project contributions to overall organizational transformation); and display certain results of the collection and analysis (e.g., provide optimal project completion path). Those are functions that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. 
Also, the steps of “continuously recalibrated based on ongoing performance metrics and feedback” and “dynamically recalculated based on real-time data and changes in project variables” are considered well-understood, routine, and conventional functions since they are just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Dependent claims 13-15 and 17-20 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as: calculating current project-level entropy scores; receiving historical project-level data for historical projects associated with the organization; wherein the historical project-level data comprises initial estimates or actuals; and specifying the type of model used for the evaluation. In this case, the main functions are merely used to: collect data (e.g., historical data) and analyze the data (e.g., compare initial estimates to actuals). Those are functions that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Goldberg et al. (US 2025/0156153 A1), in view of Nikolaev et al. (US 8,626,698 B1), in further view of Prieto (US 2014/0180755 A1) and Takahashi et al. (US 2023/0259856 A1). Regarding claim 1, Goldberg et al. 
discloses a method for organizational transformation from a current state to a target state, comprising (Paragraph 0010, The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor; Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks; Examiner interprets “project goals and project functionality” as the “target state”): dynamically performing interviews with stakeholders using common language models as part of a cooperative game to gather disparate stakeholder insights (Paragraphs 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks. 
For example, the large language model can be provided with a generative artificial intelligence (AI) prompt that embeds the retrieved project context and relevant transcript details. In response to a provided generative AI prompt, a scheduled project task is automatically generated for and tracked by the computer program development project management service; Paragraph 0045, At 505, the generated prompt is evaluated using a model evaluation framework. For example, the prompt generated at 503 is provided to a model evaluation framework to instruct the configured trained large language model to generate the requested specification for the new project task. The automatically generated artificial intelligence (AI) prompt provides the appropriate context for the trained large language model to create the new task according to the desired specification. Requests included in the project such as generating a title, description, acceptance criteria, and assignment group can be fulfilled by the trained large language model when provided with the appropriate project context and desired specification guidelines. In some embodiments, the evaluation framework provides the prompts as a sequence of prompts such as an initial system prompt followed by one or more additional prompts to refine the generated output; Examiner interprets “desired functionality” as the “stakeholder insights”); defining the target state, projects, milestones, tasks, and resource use/availability based on the gathered insights (Paragraphs 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. 
Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks. For example, the large language model can be provided with a generative artificial intelligence (AI) prompt that embeds the retrieved project context and relevant transcript details. In response to a provided generative AI prompt, a scheduled project task is automatically generated for and tracked by the computer program development project management service; Paragraph 0045, At 505, the generated prompt is evaluated using a model evaluation framework. For example, the prompt generated at 503 is provided to a model evaluation framework to instruct the configured trained large language model to generate the requested specification for the new project task. The automatically generated artificial intelligence (AI) prompt provides the appropriate context for the trained large language model to create the new task according to the desired specification. Requests included in the project such as generating a title, description, acceptance criteria, and assignment group can be fulfilled by the trained large language model when provided with the appropriate project context and desired specification guidelines. In some embodiments, the evaluation framework provides the prompts as a sequence of prompts such as an initial system prompt followed by one or more additional prompts to refine the generated output; Paragraph 0046, In some embodiments, as part of generating a new task, a resource estimate is provided. For example, an analytics action to predict the amount of work required or to be budgeted for the new task is determined. In some embodiments, the analytics action can be via the process of FIG. 6; Paragraph 0047, using the process of FIG. 
6, an analytics module of a project enhancements and analytics module can automatically predict resource estimates for a development project managed via a computer program development project management service. The identified project can be a new project or an existing project and the predictions can be for the project, one or more tasks of the project, and/or another related component of the project; Paragraph 0049, The retrieved context data can include project information such as tasks, resource requirements, resource expectations, resource allocations, resource constraints, project dependencies, project time constraints, project deadlines, etc. For example, retrieved resource allocations can include information that a certain number of developers of a certain skill set have been reserved for the project. As another example, retrieved resource constraints can include a requirement that a senior user interface developer is required to perform a sub-task of a project feature. Other retrieved information can include the members assigned to the project and their availability, the progress of the project including the current status of delays, the existing project goals and deadlines, and other dependencies of the project); modeling the organization as a dissipative system using Lookalike Models to calculate an organizational entropy score (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality; Paragraph 0032, An analytics model can be trained to predict resource metrics such as resource estimates and/or utilization including estimated amounts of work estimates; Paragraph 0048, At 601, an analytics machine learning model is trained. For example, a machine learning model is trained using analytics data including resource analytics data. 
The training data can include resource metrics data from projects as well as data related to project resources and resource constraints such as the number of developers required and the skills and skill levels of the developers. In various embodiments, the training data can be project data from the same customer such as past and current project data limited to only the same customer. In some embodiments, the training data is project data aggregated across multiple customers and can be aggregated based on customers with similar requirements. In various embodiments, the training data can be anonymized as part of preparing the data for use in training; Paragraph 0037, At 401, a new project transcript and/or project data is received. For example, project data describing a project is received. The received project data can include a transcript of a project meeting, such as a transcript of a team meeting describing and/or discussing project features, the project status, the project goals, etc.; As stated in Paragraph 0184 of Applicant’s specification, the entropy score may be alignment of actions and resources toward successfully achieving the target state. Therefore, based on broadest reasonable interpretation in light of the specification, Goldberg et al. discloses an entropy score since it allocates resources to a task or program in order to achieve the target goal. 
Also, Examiner interprets the “machine learning used for aggregating customers with similar requirements” as the “lookalike model” since it’s aggregating/clustering similar data); identifying possible task completion pathways between the current state and the target state … (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality; Paragraph 0021, computer program development project management service 111 includes a trained machine learning model to predict resource requirements or estimated resource usage for projects. The predicted estimates can be predicted in a unit of value such as number of hours, number of developers, required hardware and/or software, and/or another estimated unit such as an estimated unit of work; Paragraph 0028, Analytics module 207 can be used to predict resource requirements such as to estimate the amount of effort required to complete a project including each of the tasks of the project. Analytics module 207 can utilize model evaluation framework 209 and trained machine learning models 211 to perform the prediction. In some embodiments, analytics module 207 is further used to train one or more models of trained machine learning models 211. For example, using tracked resource analytics of existing projects, analytics module 207 can train a deep learning model to predict resource usage for projects including prediction on the amount of work (or effort) required to complete a project and/or tasks of the project); …; …; generating project completion resource allocation plans … (Paragraph 0028, Analytics module 207 can be used to predict resource requirements such as to estimate the amount of effort required to complete a project including each of the tasks of the project. 
Analytics module 207 can utilize model evaluation framework 209 and trained machine learning models 211 to perform the prediction. In some embodiments, analytics module 207 is further used to train one or more models of trained machine learning models 211. For example, using tracked resource analytics of existing projects, analytics module 207 can train a deep learning model to predict resource usage for projects including prediction on the amount of work (or effort) required to complete a project and/or tasks of the project; Paragraph 0046, In some embodiments, as part of generating a new task, a resource estimate is provided. For example, an analytics action to predict the amount of work required or to be budgeted for the new task is determined. In some embodiments, the analytics action can be via the process of FIG. 6; Paragraph 0047, using the process of FIG. 6, an analytics module of a project enhancements and analytics module can automatically predict resource estimates for a development project managed via a computer program development project management service. The identified project can be a new project or an existing project and the predictions can be for the project, one or more tasks of the project, and/or another related component of the project; Paragraph 0049, The retrieved context data can include project information such as tasks, resource requirements, resource expectations, resource allocations, resource constraints, project dependencies, project time constraints, project deadlines, etc. For example, retrieved resource allocations can include information that a certain number of developers of a certain skill set have been reserved for the project. As another example, retrieved resource constraints can include a requirement that a senior user interface developer is required to perform a sub-task of a project feature. 
Other retrieved information can include the members assigned to the project and their availability, the progress of the project including the current status of delays, the existing project goals and deadlines, and other dependencies of the project); and calculating … performance measured using micro-behaviors analysis (Paragraph 0017, In some embodiments, based on the generated specification of the task, the task is automatically tracked using the computer program development project management software. For example, the automatically generated task specification is entered into the computer program development project management software for tracking the task, such as the progress of the task. In some embodiments, the tracking includes completion properties such as how long the task takes to complete, the number and skills required to complete the task, and/or other metrics and/or analytics of the task. In various embodiments, based on the tracked data gathered from tracking the task, predictions can be made on future tasks such as the estimated amount of resources such as time, developers, and/or hardware required to perform one or more steps of the task). Although Goldberg et al. discloses identifying possible task completion pathways (e.g., generating a new task according to the desired goal or specification), Goldberg et al. does not specifically disclose wherein the identification of the possible task completion pathways is performed using a Markov model. However, Nikolaev et al. discloses modeling the organization …to calculate an organizational entropy score (Historical project performance data maintained in a database of the prediction system 500 (step 110) can include data related to one or more completed projects. The data can be stored in data structures such as textual lists, XML documents, class objects (e.g., instances of C++ or Java classes), other data structures, or any combination thereof. 
Performance data related to a completed project can be divided into multiple categories. The data can include at least one of scope information, resource information, schedule information, cost information, profitability information, or criticality information, or any combination thereof; Once variables are selected and their relationships established, the prediction system 500 can use the historical project performance data to estimate a probability distribution corresponding to each variable (step 126). For example, if the budget of 100 projects was tracked and 20 of these projects were completed below budget, then the unconditional probability of the below-budget state for the budget variable 210 is 0.2; As stated in Paragraph 0455 of Applicant’s specification, the entropy score may be a measure of organizational efficiency. Therefore, based on broadest reasonable interpretation in light of the specification, Nikolaev et al. discloses an entropy score since the probability of the below-budget state is a measure of organizational efficiency). identifying possible task completion pathways between the current state and the target state using a Markov model (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. 
These variables and their relationships are determined to achieve certain goals, such to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach); identifying an optimal project completion path through the Markov model using [a probabilistic graphical modeling] to determine the magnitude of contribution to organizational transformation towards the target state for each project (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. 
These variables and their relationships are determined to achieve certain goals, such to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach); assessing the likelihood of successful project completion for each project using [a probabilistic graphical modeling] (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. These variables and their relationships are determined to achieve certain goals, such to predict the success of a project. 
For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach); generating project completion resource allocation plans based on the optimal project completion path (FIG. 3 shows another exemplary probabilistic graphical model 300 generated using flowchart 100 of FIG. 1 to predict the success of a project. The model 300 includes at least a scope variable 302 representing, for example, the scope and type of a task planned for a project (e.g., task complexity), a resource variable 304 representing, for example, a skill level of a human resource available to complete the project, a delivery time variable 306 representing, for example, a time limitation for completing the project, a finances variable 308 representing, for example, the cost allocated to complete the project and a non-labor resource variable 310 representing, for example, a non-labor related resource allocated for completing the project. 
These variables all have a causal effect on the project variable 312, which represents the likelihood of success of a project); and calculating … based on performance measured using micro-behaviors analysis (Column 5, lines 19-26, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying a task completion pathway according to the desired goal or specification) of the invention of Goldberg et al. to further specify wherein the identification of the possible task completion pathways is performed using a Markov model of the invention of Nikolaev et al. because doing so would allow the method to use a Markov rule-based approach to estimate the probability of success of a new or in-flight project based on a number of possible values (see Nikolaev et al., Column 5, lines 19-46). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Although the combination of Goldberg et al. and Nikolaev et al.
discloses identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (see Nikolaev et al., Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling), the combination of Goldberg et al. and Nikolaev et al. does not specifically disclose wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models). However, Prieto discloses modeling the organization as a dissipative system using Lookalike Models to calculate an organizational entropy score (Paragraph 0018, Having generated the program model, the risk analysis engine can execute one or more simulations using the program model to generate a program outcome. The program outcome can be considered to represent a quantified result or effect of the program, such as a status of the program after the simulation, an event, and a measure of a program objective against a simulation goal (e.g., the purpose of the simulation itself) or against a program objective (e.g., one or more goals or objectives of the `real-life` program); As stated in Paragraph 0184 of Applicant’s specification, the entropy score may be alignment of actions and resources toward successfully achieving the target state.
Therefore, based on broadest reasonable interpretation in light of the specification, Prieto discloses an entropy score since it calculates a measure of a program objective against a simulation goal); … using Decision Tree Models to determine the magnitude of contribution to organizational transformation towards the target state for each project; assessing the likelihood of successful project completion for each project using Fault Tree Models (Paragraph 0119, reference outcome events 302 can represent events that cause damage to or the failure of a program. The events represented by the reference outcome events 302 can be `general` events that cause catastrophic, program-wide damage or failure, or can be events causing damage or failure of a particular type or for a particular reason. For each undesired outcome event, the risk analysis engine 101 can create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event. Reference program events can be used as inputs to fault trees of other reference program events, and as such, cascading risks of failure can be processed). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models) of the invention of Goldberg et al. and Nikolaev et al.
to further specify wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models) of the invention of Prieto because doing so would allow the method to create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event (see Prieto, Paragraph 0119). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Although the combination of Goldberg et al., Nikolaev et al., and Prieto discloses identifying an optimal project completion path through the Markov model using Decision Tree Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (see Nikolaev et al., Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling; see Prieto, Paragraph 0119, a fault tree and execute it to identify the current program attributes most likely to be affected by negative events), the combination of Goldberg et al., Nikolaev et al., and Prieto does not specifically disclose calculating Bayesian Priors based on performance measured using micro-behaviors analysis. However, Takahashi et al. discloses calculating Bayesian Priors based on performance measured using micro-behaviors analysis (Paragraph 0104, In the case of Bayesian statistics, P (A|B) can be defined by the following calculation formula. P (A|B) represents the a posteriori probability. The a posteriori probability is a probability that an event A will occur under the condition that an event B will occur.
Where P (A) represents the prior probability. The prior probability is a probability that the event A occurs before the event B occurs. The prior probability can be set subjectively by the user of this system. P(B|A) represents a likelihood. The likelihood is a probability that the event B will occur under the condition that the event A will occur (or if the event A is assumed to be true). P(B) represents a marginal likelihood. The marginal likelihood is a probability that the event B will occur before the event A. That is, the marginal likelihood is a probability that the event B becomes true among all events A and B. For example, information such as that the project does not reach the target can be adopted as the event A, and information such as that the project scale is large or that the experience value of the project manager is low can be adopted as the event B. According to Bayesian statistics, the probability of the event A can be changed based on the event B; Paragraph 0105, As described above, while it is difficult to correctly predict the success probability, Bayesian statistics that can be predicted while changing the argument (parameter) representing the correct answer from the performance pre-status value given to the performance influencing pre-element is compatible with the machine learning. Thus, using a combination of the machine learning and Bayesian statistics is more useful for predicting the success probability than using the machine learning alone).
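The calculation formula referenced in Paragraph 0104 of Takahashi et al. is not reproduced in the excerpt above; it is the standard Bayes' rule, P(A|B) = P(B|A) · P(A) / P(B). A minimal sketch of the quoted example follows, with event A being "the project does not reach the target" and event B being "the project scale is large"; the probability values are hypothetical placeholders and do not appear in Takahashi et al.

```python
# Sketch of the Bayes' rule update described in Takahashi et al., Paragraph 0104.
# All numeric values below are hypothetical placeholders, not from the reference.

def posterior(p_a: float, p_b_given_a: float, p_b: float) -> float:
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.30          # prior P(A): project misses its target (set subjectively)
p_b_given_a = 0.60  # likelihood P(B|A): scale is large, given the target is missed
p_b = 0.40          # marginal likelihood P(B): scale is large overall

p_a_given_b = posterior(p_a, p_b_given_a, p_b)
print(round(p_a_given_b, 3))  # 0.45
```

Under these assumed values the posterior (0.45) exceeds the prior (0.30), illustrating the quoted statement that "the probability of the event A can be changed based on the event B."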
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Decision Tree Models) of the invention of Goldberg et al., Nikolaev et al., and Prieto to further specify wherein the performance used to identify an optimal project completion path is calculated using Bayesian Priors of the invention of Takahashi et al. because doing so would allow the method to correctly predict the success probability by using a posteriori probability (see Takahashi et al., Paragraphs 0104-0105). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 2, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Goldberg et al. further discloses wherein the common language models are configured to adaptively refine interview questions based on stakeholder responses (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks.
For example, the large language model can be provided with a generative artificial intelligence (AI) prompt that embeds the retrieved project context and relevant transcript details. In response to a provided generative AI prompt, a scheduled project task is automatically generated for and tracked by the computer program development project management service; Paragraph 0045, At 505, the generated prompt is evaluated using a model evaluation framework. For example, the prompt generated at 503 is provided to a model evaluation framework to instruct the configured trained large language model to generate the requested specification for the new project task. The automatically generated artificial intelligence (AI) prompt provides the appropriate context for the trained large language model to create the new task according to the desired specification. Requests included in the project such as generating a title, description, acceptance criteria, and assignment group can be fulfilled by the trained large language model when provided with the appropriate project context and desired specification guidelines. In some embodiments, the evaluation framework provides the prompts as a sequence of prompts such as an initial system prompt followed by one or more additional prompts to refine the generated output; Examiner interprets “sequence of prompts to refine the generated output” as “adaptively refine interview questions”). Regarding claim 3, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Although Goldberg et al. discloses using a machine learning model for alignment of actions and resources toward successfully achieving the target state (e.g., allocating resources to a task or program in order to achieve the target goal), the combination of Goldberg et al., Nikolaev et al., and Takahashi et al.
does not specifically disclose wherein the model is configured to simulate various organizational scenarios to predict potential outcomes. However, Prieto discloses wherein the Lookalike Models are further configured to simulate various organizational scenarios to predict potential outcomes (Paragraph 0018, Having generated the program model, the risk analysis engine can execute one or more simulations using the program model to generate a program outcome. The program outcome can be considered to represent a quantified result or effect of the program, such as a status of the program after the simulation, an event, and a measure of a program objective against a simulation goal (e.g., the purpose of the simulation itself) or against a program objective (e.g., one or more goals or objectives of the `real-life` program); Examiner interprets the “program model” as the “lookalike model”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models) of the invention of Goldberg et al. and Nikolaev et al. to further specify wherein the optimal project completion path is identified through the Markov model using Decision Tree Models and simulations (e.g., Fault Tree Models) of the invention of Prieto because doing so would allow the method to create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event (see Prieto, Paragraph 0119).
Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 4, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Although the combination of Goldberg et al. and Nikolaev et al. discloses identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling), the combination of Goldberg et al. and Nikolaev et al. does not specifically disclose wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models). However, Prieto discloses wherein the Decision Tree Models incorporate project value data to evaluate and compare project contributions to overall organizational transformation towards the target state (Paragraph 0018, Having generated the program model, the risk analysis engine can execute one or more simulations using the program model to generate a program outcome. The program outcome can be considered to represent a quantified result or effect of the program, such as a status of the program after the simulation, an event, and a measure of a program objective against a simulation goal (e.g., the purpose of the simulation itself) or against a program objective (e.g., one or more goals or objectives of the `real-life` program); Paragraph 0119, reference outcome events 302 can represent events that cause damage to or the failure of a program.
The events represented by the reference outcome events 302 can be `general` events that cause catastrophic, program-wide damage or failure, or can be events causing damage or failure of a particular type or for a particular reason. For each undesired outcome event, the risk analysis engine 101 can create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event. Reference program events can be used as inputs to fault trees of other reference program events, and as such, cascading risks of failure can be processed). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models) of the invention of Goldberg et al. and Nikolaev et al. to further specify wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models) of the invention of Prieto because doing so would allow the method to create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event (see Prieto, Paragraph 0119). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 5, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1.
Although the combination of Goldberg et al. and Nikolaev et al. discloses identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling), the combination of Goldberg et al. and Nikolaev et al. does not specifically disclose wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models). However, Prieto discloses wherein the Fault Tree Models are used to identify and mitigate potential risks associated with project completion (Paragraph 0119, reference outcome events 302 can represent events that cause damage to or the failure of a program. The events represented by the reference outcome events 302 can be `general` events that cause catastrophic, program-wide damage or failure, or can be events causing damage or failure of a particular type or for a particular reason. For each undesired outcome event, the risk analysis engine 101 can create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event. Reference program events can be used as inputs to fault trees of other reference program events, and as such, cascading risks of failure can be processed). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models) of the invention of Goldberg et al. and Nikolaev et al.
to further specify wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models) of the invention of Prieto because doing so would allow the method to create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event (see Prieto, Paragraph 0119). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 6, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Although the combination of Goldberg et al., Nikolaev et al., and Prieto discloses identifying an optimal project completion path through the Markov model using Decision Tree Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (see Nikolaev et al., Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling; see Prieto, Paragraph 0119, a fault tree and execute it to identify the current program attributes most likely to be affected by negative events), the combination of Goldberg et al., Nikolaev et al., and Prieto does not specifically disclose calculating Bayesian Priors based on performance measured using micro-behaviors analysis. However, Takahashi et al. discloses wherein the Bayesian Priors are continuously recalibrated based on ongoing performance metrics and feedback (Paragraph 0104, In the case of Bayesian statistics,
P (A|B) can be defined by the following calculation formula: P (A|B) = P (B|A) × P (A) / P (B). P (A|B) represents the a posteriori probability. The a posteriori probability is a probability that an event A will occur under the condition that an event B will occur. P (A) represents the prior probability. The prior probability is a probability that the event A occurs before the event B occurs. The prior probability can be set subjectively by the user of this system. P(B|A) represents a likelihood. The likelihood is a probability that the event B will occur under the condition that the event A will occur (or if the event A is assumed to be true). P(B) represents a marginal likelihood. The marginal likelihood is a probability that the event B will occur before the event A. That is, the marginal likelihood is a probability that the event B becomes true among all events A and B. For example, information such as that the project does not reach the target can be adopted as the event A, and information such as that the project scale is large or that the experience value of the project manager is low can be adopted as the event B. According to Bayesian statistics, the probability of the event A can be changed based on the event B; Paragraph 0105, As described above, while it is difficult to correctly predict the success probability, Bayesian statistics that can be predicted while changing the argument (parameter) representing the correct answer from the performance pre-status value given to the performance influencing pre-element is compatible with the machine learning.
Thus, using a combination of the machine learning and Bayesian statistics is more useful for predicting the success probability than using the machine learning alone; Paragraph 0106, It can also be configured to correct the probability distribution derivation function by feedback of the probability distribution derivation function and deep learning of artificial intelligence, etc., not to correct the probability distribution derivation function used initially when the project is completely finished, but to correct the probability distribution derivation function by feedback of the performance progress of the project in the middle according to the progress of the project). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Decision Tree Models) of the invention of Goldberg et al., Nikolaev et al., and Prieto to further specify wherein the performance used to identify an optimal project completion path is calculated using Bayesian Priors of the invention of Takahashi et al. because doing so would allow the method to correctly predict the success probability by using a posteriori probability (see Takahashi et al., Paragraphs 0104-0105). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 7, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Goldberg et al.
further discloses analyzing data collected from stakeholder interviews to identify patterns and insights relevant to organizational transformation (Paragraphs 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks. For example, the large language model can be provided with a generative artificial intelligence (AI) prompt that embeds the retrieved project context and relevant transcript details. In response to a provided generative AI prompt, a scheduled project task is automatically generated for and tracked by the computer program development project management service; Paragraph 0039, a project task specification is automatically generated using a portion of the project transcript and/or project data received at 401 and potentially additional project context data retrieved from the computer program development project management service. For example, the project data received at 401 can be analyzed to extract relevant portions such as the portions that pertain to a specific feature or defect. In some embodiments, the selected portion of the project data can be determined by analyzing the data for trigger words or phrases. For example, certain trigger words or phrases can indicate the start of a new feature, defect, or another enhancement. In some embodiments, the different tasks are differentiated by different trigger words (or phrases) identified in the transcript and/or related project context. 
For example, the trigger words “enhancement,” “suggestion,” “idea,” “improvement,” “request,” and/or “feature request” can be designated to generate and track a new task for the project to implement the associated feature. Other similar trigger words or phrases for a new feature may include: “enhancement suggestion,” “innovation,” “fine-tuning,” and “optimization.” As another example, the trigger words “defect,” “problem,” “malfunction,” “doesn't work,” and/or “missing” can be designated to signify that a task to resolve a defect associated with a project feature should be generated and tracked. Other similar trigger words or phrases for a new defect task may include: “issue,” “inconsistency,” “blemish,” “imperfection,” “not as designed,” “not working as expected,” “working not as expected,” “unexpected behavior,” and/or “bug.” The trigger words may be combined or used in a sentence such as “We can address it as an enhancement,” and “This is a great idea, we will consider it for future releases” for enhancements and “This feature doesn't work as expected” or “I think there is a bug because . . . ” for defects. Fewer or additional trigger words can be supported and/or automatically added. In various embodiments, the trigger words and phrases are used to select the relevant portions of the project data for a task and to remove non-relevant portions; Examiner interprets “desired functionality” as the “insights relevant to organizational transformation”).

Regarding claim 8, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Although Goldberg et al. discloses identifying possible task completion pathways (e.g., generating a new task according to the desired goal or specification), Goldberg et al. does not specifically disclose wherein the identification of the possible task completion pathways is performed using a Markov model. However, Nikolaev et al.
discloses wherein the optimal project completion path through the Markov model is dynamically recalculated based on real-time data and changes in project variables (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period of time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. These variables and their relationships are determined to achieve certain goals, such as to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node.
Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach; Column 6, lines 19-30, The project variable 312 can in turn influence the state of one or more other variables, such as a delivered scope variable 314, a spend finances variable 316 and a used resources variable 318. The delivered scope variable 314 can indicate the scope of the project delivered and can assume one of three states--partial scope, full scope or extended scope. The spend finances variable 316 can indicate the amount of finances spent on the project and can assume three states--under budget, on budget or over budget. The used resources variable 318 can indicate the amount of resources consumed by the project and can assume three states--partial utilization, full utilization or over utilization). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying a task completion pathway according to the desired goal or specification) of the invention of Goldberg et al. to further specify wherein the identification of the possible task completion pathways is performed using a Markov model of the invention of Nikolaev et al. because doing so would allow the method to use a Markov rule-based approach to estimate the probability of success of a new or in-flight project based on a number of possible values (see Nikolaev et al., Column 5, lines 19-46). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
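The Markov rule-based estimation that the cited Nikolaev passage describes — a project moving among states, with the probability of each next state determined by the current state — can be sketched as a small absorbing Markov chain. The states, transition probabilities, and step count below are hypothetical illustrations, not values from the reference.

```python
from functools import lru_cache

# Hypothetical project-state transition model. Each inner dict gives
# P(next state | current state); 'success' and 'failure' are absorbing.
TRANSITIONS = {
    "on_track": {"on_track": 0.6, "at_risk": 0.2, "success": 0.2},
    "at_risk": {"on_track": 0.3, "at_risk": 0.4, "success": 0.1, "failure": 0.2},
    "success": {"success": 1.0},
    "failure": {"failure": 1.0},
}

@lru_cache(maxsize=None)
def success_probability(state: str, steps: int) -> float:
    """Probability of reaching the absorbing 'success' state within `steps` moves."""
    if state == "success":
        return 1.0
    if state == "failure" or steps == 0:
        return 0.0
    # Law of total probability over the possible next states.
    return sum(p * success_probability(nxt, steps - 1)
               for nxt, p in TRANSITIONS[state].items())

estimate = success_probability("on_track", steps=40)
```

Memoization via `lru_cache` keeps the recursion linear in the number of (state, steps) pairs; a real system in the spirit of the reference would learn the transition probabilities from historical project data rather than hard-code them.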
Regarding claim 9, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Although Goldberg et al. discloses identifying possible task completion pathways (e.g., generating a new task according to the desired goal or specification), Goldberg et al. does not specifically disclose wherein the identification of the possible task completion pathways is performed using a Markov model. However, Nikolaev et al. discloses using organizational historic project data to inform the Markov model and improve the accuracy of task completion pathway predictions (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period of time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. These variables and their relationships are determined to achieve certain goals, such as to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable.
The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach; Column 6, lines 31-48, The selection of the variables and determination of the causal relationships among the variables can be accomplished by experts relying on their institutional knowledge and/or by the prediction system 500 based on the historical project performance data (step 110). In some embodiments, a few variables are initially selected for inclusion in the model and, depending on how well they predict project success, one or more variables and/or dependencies can be altered, added or removed. This iterative process can be repeated over time to fine tune the model structure, such as whenever the historical performance data is updated). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying a task completion pathway according to the desired goal or specification) of the invention of Goldberg et al. to further specify wherein the identification of the possible task completion pathways is performed using a Markov model of the invention of Nikolaev et al. because doing so would allow the method to use a Markov rule-based approach to estimate the probability of success of a new or in-flight project based on a number of possible values (see Nikolaev et al., Column 5, lines 19-46).
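The use of historical performance data to inform the model, as the cited Nikolaev passage describes, can be sketched as a simple counting estimate of one conditional probability from past project records. The records and field names below are hypothetical illustrations, not data from the reference.

```python
# Hypothetical historical project records of the kind a prediction system
# might store in its databases (values are illustrative only).
historical_records = [
    {"scope": "full", "budget": "on", "success": True},
    {"scope": "partial", "budget": "over", "success": False},
    {"scope": "full", "budget": "under", "success": True},
    {"scope": "full", "budget": "over", "success": False},
    {"scope": "full", "budget": "on", "success": True},
]

def conditional_success_rate(records, **conditions):
    """Estimate P(success | conditions) by counting matching historical records."""
    matching = [r for r in records if all(r.get(k) == v for k, v in conditions.items())]
    if not matching:
        raise ValueError("no historical records match the given conditions")
    return sum(r["success"] for r in matching) / len(matching)

rate = conditional_success_rate(historical_records, budget="on")
```

In a fuller model these counted estimates would populate the conditional probability tables of the Bayesian network nodes, and would be refined as the historical data is updated.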
Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 10, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Although the combination of Goldberg et al., Nikolaev et al., and Prieto discloses identifying an optimal project completion path through the Markov model using Decision Tree Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (see Nikolaev et al., Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling; see Prieto, Paragraph 0119, a fault tree and execute it to identify the current program attributes most likely to be affected by negative events), the combination of Goldberg et al., Nikolaev et al., and Prieto does not specifically disclose calculating Bayesian Priors based on performance measured using micro-behaviors analysis. However, Takahashi et al. discloses wherein financial data of the organization is utilized to generate Bayesian Priors, enhancing the precision of resource allocation and project planning (Paragraph 0078, Future prediction of the procurement cost of necessary materials is also important. For example, there are cases in which a trade war occurred and tariffs were imposed more than twice as much as before. In this case, the procurement cost of necessary materials exceeds the usual expectation, and it gives a bad influence on the project success as the performance influencing pre-element; Paragraph 0104, In the case of Bayesian statistics, P (A|B) can be defined by the following calculation formula: P (A|B) = P (B|A) × P (A) / P (B).
P (A|B) represents the a posteriori probability. The a posteriori probability is a probability that an event A will occur under the condition that an event B will occur. P (A) represents the prior probability. The prior probability is a probability that the event A occurs before the event B occurs. The prior probability can be set subjectively by the user of this system. P(B|A) represents a likelihood. The likelihood is a probability that the event B will occur under the condition that the event A will occur (or if the event A is assumed to be true). P(B) represents a marginal likelihood. The marginal likelihood is a probability that the event B will occur before the event A. That is, the marginal likelihood is a probability that the event B becomes true among all events A and B. For example, information such as that the project does not reach the target can be adopted as the event A, and information such as that the project scale is large or that the experience value of the project manager is low can be adopted as the event B. According to Bayesian statistics, the probability of the event A can be changed based on the event B; Paragraph 0105, As described above, while it is difficult to correctly predict the success probability, Bayesian statistics that can be predicted while changing the argument (parameter) representing the correct answer from the performance pre-status value given to the performance influencing pre-element is compatible with the machine learning. Thus, using a combination of the machine learning and Bayesian statistics is more useful for predicting the success probability than using the machine learning alone).
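The Bayes' rule that Takahashi's Paragraph 0104 sets out, P(A|B) = P(B|A) × P(A) / P(B), can be worked through with the paragraph's own example events; the numeric probabilities below are hypothetical illustrations.

```python
def posterior(prior_a: float, likelihood_b_given_a: float, marginal_b: float) -> float:
    """Return P(A|B) via Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood_b_given_a * prior_a / marginal_b

# Event A: the project does not reach its target; event B: the project scale
# is large (the example events the quoted paragraph suggests).
p_a = 0.30           # prior probability P(A), assumed
p_b_given_a = 0.60   # likelihood P(B|A), assumed
p_b = 0.40           # marginal likelihood P(B), assumed

p_a_given_b = posterior(p_a, p_b_given_a, p_b)  # 0.6 * 0.3 / 0.4 = 0.45
```

Observing that the project is large raises the estimated probability of missing the target from the 0.30 prior to a 0.45 posterior, which is the "probability of the event A can be changed based on the event B" mechanism the quoted passage describes.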
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Decision Tree Models) of the invention of Goldberg et al., Nikolaev et al., and Prieto to further specify wherein the performance used to identify an optimal project completion path is calculated using Bayesian Priors of the invention of Takahashi et al. because doing so would allow the method to correctly predict the success probability by using a posteriori probability (see Takahashi et al., Paragraphs 0104-0105). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 11, which is dependent on claim 1, the combination of Goldberg et al., Nikolaev et al., Prieto, and Takahashi et al. discloses all the limitations in claim 1. Although the combination of Goldberg et al., Nikolaev et al., and Prieto discloses identifying an optimal project completion path through the Markov model using Decision Tree Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (see Nikolaev et al., Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling; see Prieto, Paragraph 0119, a fault tree and execute it to identify the current program attributes most likely to be affected by negative events), the combination of Goldberg et al., Nikolaev et al., and Prieto does not specifically disclose calculating Bayesian Priors based on performance measured using micro-behaviors analysis. However, Takahashi et al.
discloses wherein Bayesian Priors are generated by integrating historical project performance data and financial metrics to predict future project outcomes and resource needs (Paragraph 0056, An “inputable labor cost” is obtained by the system by comparing a labor cost of the current project with a labor cost that has been appropriately input in the appropriate personnel based on past results for project of similar scale. For example, for a similar project in the past, an appropriate labor cost for the case where similar human resources were invested is held, and by inputting the labor cost of the present project, the labor cost of the present project and the labor cost of the past project are compared with each other and the value as the performance influencing pre-element is acquired; Paragraph 0104, In the case of Bayesian statistics, P (A|B) can be defined by the following calculation formula: P (A|B) = P (B|A) × P (A) / P (B). P (A|B) represents the a posteriori probability. The a posteriori probability is a probability that an event A will occur under the condition that an event B will occur. P (A) represents the prior probability. The prior probability is a probability that the event A occurs before the event B occurs. The prior probability can be set subjectively by the user of this system. P(B|A) represents a likelihood. The likelihood is a probability that the event B will occur under the condition that the event A will occur (or if the event A is assumed to be true). P(B) represents a marginal likelihood. The marginal likelihood is a probability that the event B will occur before the event A. That is, the marginal likelihood is a probability that the event B becomes true among all events A and B. For example, information such as that the project does not reach the target can be adopted as the event A, and information such as that the project scale is large or that the experience value of the project manager is low can be adopted as the event B.
According to Bayesian statistics, the probability of the event A can be changed based on the event B; Paragraph 0105, As described above, while it is difficult to correctly predict the success probability, Bayesian statistics that can be predicted while changing the argument (parameter) representing the correct answer from the performance pre-status value given to the performance influencing pre-element is compatible with the machine learning. Thus, using a combination of the machine learning and Bayesian statistics is more useful for predicting the success probability than using the machine learning alone). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Decision Tree Models) of the invention of Goldberg et al., Nikolaev et al., and Prieto to further specify wherein the performance used to identify an optimal project completion path is calculated using Bayesian Priors of the invention of Takahashi et al. because doing so would allow the method to correctly predict the success probability by using a posteriori probability (see Takahashi et al., Paragraphs 0104-0105). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Goldberg et al. (US 2025/0156153 A1), in view of Nikolaev et al. (US 8,626,698 B1), in further view of Prieto (US 2014/0180755 A1).

Regarding claim 12, Goldberg et al.
discloses a method comprising: receiving, from one or more user devices, in response to one or more user interactions with a user interface displayed on the one or more user devices, information about a vision and a mission statement for an organization (Paragraph 0010, The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor; Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks; Paragraph 0014, In some embodiments, a transcript of at least a portion of a discussion associated with a computer program development is received. For example, a video conference between a development team discussing a computer program development project is recorded and a transcript of the video conference is generated. In various embodiments, the received transcript can be generated manually and/or automatically, such as by applying natural language processing (NLP) techniques to convert at least the audio portion of the video recording to a transcript.
In some embodiments, the transcript includes transcribed descriptions of video elements of the recorded video conference, such as gestures or actions performed by the video conference participants including actions performed on a computer user interface such as while demonstrating a feature of the computer program. Other visual actions include user interface interactions such as zooming in on a user interface, clicking on a button, opening a dialog box, typing a description or name of an item, clicking, scrolling, hovering, closing a user interface element, and reloading a user interface element, among others. In various embodiments, the transcript can be generated in real-time, such as during a video conference, and/or the transcript can be based on an audio recording; Examiner interprets the “transcript of a team meeting describing project goals and functionality” as the “information about a vision and a mission statement for an organization”); causing the one or more user devices to present, via the user interface, one or more dynamic interviews with one or more users associated with the one or more user devices, the one or more dynamic interviews comprising a plurality of questions generated using a common language model (Paragraphs 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks. 
For example, the large language model can be provided with a generative artificial intelligence (AI) prompt that embeds the retrieved project context and relevant transcript details. In response to a provided generative AI prompt, a scheduled project task is automatically generated for and tracked by the computer program development project management service; Paragraph 0014, In some embodiments, a transcript of at least a portion of a discussion associated with a computer program development is received. For example, a video conference between a development team discussing a computer program development project is recorded and a transcript of the video conference is generated. In various embodiments, the received transcript can be generated manually and/or automatically, such as by applying natural language processing (NPL) techniques to convert at least the audio portion of the video recording to a transcript. In some embodiments, the transcript includes transcribed descriptions of video elements of the recorded video conference, such as gestures or actions performed by the video conference participants including actions performed on a computer user interface such as while demonstrating a feature of the computer program. Other visual actions include user interface interactions such as zooming in on a user interface, clicking on a button, opening a dialog box, typing a description or name of an item, clicking, scrolling, hovering, closing a user interface element, and reloading a user interface element, among others. In various embodiments, the transcript can be generated in real-time, such as during a video conference, and/or the transcript can be based on an audio recording; Paragraph 0045, At 505, the generated prompt is evaluated using a model evaluation framework. 
For example, the prompt generated at 503 is provided to a model evaluation framework to instruct the configured trained large language model to generate the requested specification for the new project task. The automatically generated artificial intelligence (AI) prompt provides the appropriate context for the trained large language model to create the new task according to the desired specification. Requests included in the project such as generating a title, description, acceptance criteria, and assignment group can be fulfilled by the trained large language model when provided with the appropriate project context and desired specification guidelines. In some embodiments, the evaluation framework provides the prompts as a sequence of prompts such as an initial system prompt followed by one or more additional prompts to refine the generated output); receiving, from the one or more user devices, user responses from the one or more dynamic interviews (Paragraph 0014, Other visual actions include user interface interactions such as zooming in on a user interface, clicking on a button, opening a dialog box, typing a description or name of an item, clicking, scrolling, hovering, closing a user interface element, and reloading a user interface element, among others; Paragraph 0045, At 505, the generated prompt is evaluated using a model evaluation framework. For example, the prompt generated at 503 is provided to a model evaluation framework to instruct the configured trained large language model to generate the requested specification for the new project task. The automatically generated artificial intelligence (AI) prompt provides the appropriate context for the trained large language model to create the new task according to the desired specification. 
Requests included in the project such as generating a title, description, acceptance criteria, and assignment group can be fulfilled by the trained large language model when provided with the appropriate project context and desired specification guidelines. In some embodiments, the evaluation framework provides the prompts as a sequence of prompts such as an initial system prompt followed by one or more additional prompts to refine the generated output); generating one or more Lookalike models associated with the organization based on the user responses from the one or more dynamic interviews; defining, based at least on the one or more Lookalike models, a current organizational state, a target organizational state, and a plurality of projects, wherein the plurality of projects include projects for which the completion of the project will contribute to a transformation of the organization from the current organizational state towards the target organizational state (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality; Paragraph 0032, An analytics model can be trained to predict resource metrics such as resource estimates and/or utilization including estimated amounts of work estimates; Paragraph 0048, At 601, an analytics machine learning model is trained. For example, a machine learning model is trained using analytics data including resource analytics data. The training data can include resource metrics data from projects as well as data related to project resources and resource constraints such as the number of developers required and the skills and skill levels of the developers. In various embodiments, the training data can be project data from the same customer such as past and current project data limited to only the same customer. 
In some embodiments, the training data is project data aggregated across multiple customers and can be aggregated based on customers with similar requirements. In various embodiments, the training data can be anonymized as part of preparing the data for use in training; Paragraph 0037, At 401, a new project transcript and/or project data is received. For example, project data describing a project is received. The received project data can include a transcript of a project meeting, such as a transcript of a team meeting describing and/or discussing project features, the project status, the project goals, etc.; Examiner interprets the “machine learning used for aggregating customers with similar requirements” as the “lookalike model” since it’s aggregating/clustering similar data); creating … a plurality of possible project completion pathways between the current organizational state and the target organizational state (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality; Paragraph 0021, computer program development project management service 111 includes a trained machine learning model to predict resource requirements or estimated resource usage for projects. The predicted estimates can be predicted in a unit of value such as number of hours, number of developers, required hardware and/or software, and/or another estimated unit such as an estimated unit of work; Paragraph 0028, Analytics module 207 can be used to predict resource requirements such as to estimate the amount of effort required to complete a project including each of the tasks of the project. Analytics module 207 can utilize model evaluation framework 209 and trained machine learning models 211 to perform the prediction. 
In some embodiments, analytics module 207 is further used to train one or more models of trained machine learning models 211. For example, using tracked resource analytics of existing projects, analytics module 207 can train a deep learning model to predict resource usage for projects including prediction on the amount of work (or effort) required to complete a project and/or tasks of the project); determining a probability of project completion or success for each project along each of the plurality of possible project completion pathways … (Paragraph 0028, Analytics module 207 can be used to predict resource requirements such as to estimate the amount of effort required to complete a project including each of the tasks of the project. Analytics module 207 can utilize model evaluation framework 209 and trained machine learning models 211 to perform the prediction. In some embodiments, analytics module 207 is further used to train one or more models of trained machine learning models 211. For example, using tracked resource analytics of existing projects, analytics module 207 can train a deep learning model to predict resource usage for projects including prediction on the amount of work (or effort) required to complete a project and/or tasks of the project; Paragraph 0046, In some embodiments, as part of generating a new task, a resource estimate is provided. For example, an analytics action to predict the amount of work required or to be budgeted for the new task is determined. In some embodiments, the analytics action can be via the process of FIG. 6; Paragraph 0047, using the process of FIG. 6, an analytics module of a project enhancements and analytics module can automatically predict resource estimates for a development project managed via a computer program development project management service. 
The identified project can be a new project or an existing project and the predictions can be for the project, one or more tasks of the project, and/or another related component of the project; Paragraph 0049, The retrieved context data can include project information such as tasks, resource requirements, resource expectations, resource allocations, resource constraints, project dependencies, project time constraints, project deadlines, etc. For example, retrieved resource allocations can include information that a certain number of developers of a certain skill set have been reserved for the project. As another example, retrieved resource constraints can include a requirement that a senior user interface developer is required to perform a sub-task of a project feature. Other retrieved information can include the members assigned to the project and their availability, the progress of the project including the current status of delays, the existing project goals and deadlines, and other dependencies of the project); …; and determining, based on the probabilities of project completion or success …, and further based on the magnitudes of contribution of project completion or success to the transformation of the organization towards the target organizational state …, an optimal project completion pathways … (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality; Paragraph 0028, Analytics module 207 can be used to predict resource requirements such as to estimate the amount of effort required to complete a project including each of the tasks of the project. Analytics module 207 can utilize model evaluation framework 209 and trained machine learning models 211 to perform the prediction. 
In some embodiments, analytics module 207 is further used to train one or more models of trained machine learning models 211. For example, using tracked resource analytics of existing projects, analytics module 207 can train a deep learning model to predict resource usage for projects including prediction on the amount of work (or effort) required to complete a project and/or tasks of the project; Paragraph 0046, In some embodiments, as part of generating a new task, a resource estimate is provided. For example, an analytics action to predict the amount of work required or to be budgeted for the new task is determined. In some embodiments, the analytics action can be via the process of FIG. 6; Paragraph 0047, using the process of FIG. 6, an analytics module of a project enhancements and analytics module can automatically predict resource estimates for a development project managed via a computer program development project management service. The identified project can be a new project or an existing project and the predictions can be for the project, one or more tasks of the project, and/or another related component of the project). Although Goldberg et al. discloses identifying possible task completion pathways (e.g., generating a new task according to the desired goal or specification), Goldberg et al. does not specifically disclose wherein the identification of the possible task completion pathways is performed using a Markov model. However, Nikolaev et al. discloses creating a Markov model including a plurality of possible project completion pathways between the current organizational state and the target organizational state (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period time from various groups in a corporation. 
The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. These variables and their relationships are determined to achieve certain goals, such to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach); determining a probability of project completion or success for each project along each of the plurality of possible project completion pathways within the Markov model using one or more [probabilistic graphical modeling] (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period time from various groups in a corporation. 
The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. These variables and their relationships are determined to achieve certain goals, such to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach); determining, for each project along each of the plurality of possible project completion pathways within the Markov model, using one or more [probabilistic graphical modeling], a magnitude of contribution of project completion or success to the transformation of the organization from the current organizational state towards the target organizational state (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. 
The performance information can be gathered over a period time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. These variables and their relationships are determined to achieve certain goals, such to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables. Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. 
Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach); and determining, based on the probabilities of project completion or success determined using the one or more [probabilistic graphical modeling], and further based on the magnitudes of contribution of project completion or success to the transformation of the organization towards the target organizational state determined using the one or more [probabilistic graphical modeling], an optimal project completion pathway within the Markov model from among the plurality of possible project completion pathways within the Markov model (Column 5, lines 19-46, The prediction system 500 can store the historical performance information related to completed projects in one or more databases. The performance information can be gathered over a period time from various groups in a corporation. The prediction system 500 can use such historical performance information to construct a model (step 120) of the data domain, which can be used to estimate the probability of success of a new or in-flight project. In some embodiments, the prediction system 500 develops the model (step 120) using a probabilistic graphical modeling (PGM) approach by inferring, from the data collected and/or inputs from experts, certain variables and the relationships among the variables. These variables and their relationships are determined to achieve certain goals, such to predict the success of a project. For example, the prediction system 500 can generate a Bayesian network that includes a set of interconnected nodes, where each node represents a random variable in the model and the connecting arcs of the network represent causal relationships among the variables.
Each node can assume one of a number of possible values to indicate a particular state of the variable. The probability that a certain state of a node occurs is determined from the probabilities associated with states of one or more nodes connected to the current node. Even though a Bayesian network is used to illustrate the principles of the present invention, other PGM modeling approaches are equally usable within the scope of the present invention, such as a Markov rule-based approach, neural network approach or genetic approach). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying a task completion pathway according to the desired goal or specification) of the invention of Goldberg et al. to further specify wherein the identification of the possible task completion pathways is performed using a Markov model of the invention of Nikolaev et al. because doing so would allow the method to use a Markov rule-based approach to estimate the probability of success of a new or in-flight project based on a number of possible values (see Nikolaev et al., Column 5, lines 19-46). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Although the combination of Goldberg et al. and Nikolaev et al.
discloses identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models to determine the magnitude of contribution to organizational transformation towards the target state for each project (see Nikolaev et al., Column 5, lines 19-46, estimate the probability of success of a new or in-flight project through the Markov model using a probabilistic graphical modeling), the combination of Goldberg et al. and Nikolaev et al. does not specifically disclose wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models). However, Prieto discloses determining a probability of project completion or success for each project along each of the plurality of possible project completion pathways within the [simulation] model using one or more fault tree models; determining, for each project along each of the plurality of possible project completion pathways within the [simulation] model, using one or more decision tree models, a magnitude of contribution of project completion or success to the transformation of the organization from the current organizational state towards the target organizational state; and determining, based on the probabilities of project completion or success determined using the one or more fault tree models, and further based on the magnitudes of contribution of project completion or success to the transformation of the organization towards the target organizational state determined using the one or more decision tree models, an optimal project completion pathway within the [simulation] model from among the plurality of possible project completion pathways within the [simulation] model (Paragraph 0018, Having generated the program model, the risk analysis engine can execute one or more simulations using the program model to generate a program outcome.
The program outcome can be considered to represent a quantified result or effect of the program, such as a status of the program after the simulation, an event, and a measure of a program objective against a simulation goal (e.g., the purpose of the simulation itself) or against a program objective (e.g., one or more goals or objectives of the `real-life` program); Paragraph 0119, reference outcome events 302 can represent events that cause damage to or the failure of a program. The events represented by the reference outcome events 302 can be `general` events that cause catastrophic, program-wide damage or failure, or can be events causing damage or failure of a particular type or for a particular reason. For each undesired outcome event, the risk analysis engine 101 can create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event. Reference program events can be used as inputs to fault trees of other reference program events, and as such, cascading risks of failure can be processed). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying an optimal project completion path through the Markov model using Probabilistic Graphical Models) of the invention of Goldberg et al. and Nikolaev et al.
to further specify wherein the optimal project completion path is identified through the Markov model using Decision Tree Models (e.g., Fault Tree Models) of the invention of Prieto because doing so would allow the method to create a fault tree and execute it to identify the current program attributes most likely to be affected by negative events, or most likely to contribute to the program's degradation or failure as set forth in the negative program event (see Prieto, Paragraph 0119). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 13, which is dependent on claim 12, the combination of Goldberg et al., Nikolaev et al., and Prieto discloses all the limitations in claim 12. Goldberg et al. further discloses calculating, based on a plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization, current project-level entropy scores for the respective projects of the plurality of projects associated with the organization (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality; Paragraph 0032, An analytics model can be trained to predict resource metrics such as resource estimates and/or utilization including estimated amounts of work estimates; Paragraph 0048, At 601, an analytics machine learning model is trained. For example, a machine learning model is trained using analytics data including resource analytics data.
The training data can include resource metrics data from projects as well as data related to project resources and resource constraints such as the number of developers required and the skills and skill levels of the developers. In various embodiments, the training data can be project data from the same customer such as past and current project data limited to only the same customer. In some embodiments, the training data is project data aggregated across multiple customers and can be aggregated based on customers with similar requirements. In various embodiments, the training data can be anonymized as part of preparing the data for use in training; Paragraph 0037, At 401, a new project transcript and/or project data is received. For example, project data describing a project is received. The received project data can include a transcript of a project meeting, such as a transcript of a team meeting describing and/or discussing project features, the project status, the project goals, etc.; Paragraph 0049, The retrieved context data can include project information such as tasks, resource requirements, resource expectations, resource allocations, resource constraints, project dependencies, project time constraints, project deadlines, etc. For example, retrieved resource allocations can include information that a certain number of developers of a certain skill set have been reserved for the project. As another example, retrieved resource constraints can include a requirement that a senior user interface developer is required to perform a sub-task of a project feature. 
Other retrieved information can include the members assigned to the project and their availability, the progress of the project including the current status of delays, the existing project goals and deadlines, and other dependencies of the project; As stated in Paragraph 0184 of Applicant’s specification, the entropy score may represent the alignment of actions and resources toward successfully achieving the target state. Therefore, based on broadest reasonable interpretation in light of the specification, Goldberg et al. discloses an entropy score since it calculates project deviations from target goals). Regarding claim 14, which is dependent on claim 12, the combination of Goldberg et al., Nikolaev et al., and Prieto discloses all the limitations in claim 12. Goldberg et al. further discloses receiving historical project-level data for historical projects associated with the organization (Paragraph 0013, In various embodiments, based on the provided history of resource usage and available resources, the trained model can estimate the resource requirements associated with a task and/or project. In some embodiments, the estimated resource is an aggregate value such as an agreed upon metric or unit of measure. In some embodiments, the estimated resource metric is a unit of measure for expressing an estimate of the overall effort required and approximates the number of hours, team members, and/or other resources required). Regarding claim 15, which is dependent on claim 12, the combination of Goldberg et al., Nikolaev et al., and Prieto discloses all the limitations in claim 12. Goldberg et al.
further discloses wherein the historical project-level data comprises one or more of: initially estimated material costs associated with respective historical projects, actual material costs associated with the respective historical projects, initially estimated labor costs associated with the respective historical projects, actual labor costs expended during execution of the respective historical projects, initially estimated project timeline for the respective historical projects, an actual project start date for the respective historical projects, or an actual project end date for the respective historical projects (Paragraph 0013, In various embodiments, based on the provided history of resource usage and available resources, the trained model can estimate the resource requirements associated with a task and/or project. In some embodiments, the estimated resource is an aggregate value such as an agreed upon metric or unit of measure. In some embodiments, the estimated resource metric is a unit of measure for expressing an estimate of the overall effort required and approximates the number of hours, team members, and/or other resources required; It can be noted that the claim language is written in alternative form. The limitation taught by Goldberg et al. is based on “initially estimated project timeline for the respective historical projects"). Claims 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goldberg et al. (US 2025/0156153 A1), in view of Nikolaev et al. (US 8,626,698 B1). Regarding claim 16, Goldberg et al. 
discloses a method comprising: receiving, at a data input module of a value attribution framework, in response to one or more responsible user interviews conducted with a user interface module of the value attribution framework, project-specific user inputs for respective projects of a plurality of projects associated with an organization (Paragraph 0010, The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor; Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks; Paragraph 0014, In some embodiments, a transcript of at least a portion of a discussion associated with a computer program development is received. For example, a video conference between a development team discussing a computer program development project is recorded and a transcript of the video conference is generated. In various embodiments, the received transcript can be generated manually and/or automatically, such as by applying natural language processing (NPL) techniques to convert at least the audio portion of the video recording to a transcript. 
In some embodiments, the transcript includes transcribed descriptions of video elements of the recorded video conference, such as gestures or actions performed by the video conference participants including actions performed on a computer user interface such as while demonstrating a feature of the computer program. Other visual actions include user interface interactions such as zooming in on a user interface, clicking on a button, opening a dialog box, typing a description or name of an item, clicking, scrolling, hovering, closing a user interface element, and reloading a user interface element, among others. In various embodiments, the transcript can be generated in real-time, such as during a video conference, and/or the transcript can be based on an audio recording; Examiner interprets the “transcript of a team meeting describing project goals and functionality” as the “project-specific user inputs”), wherein the project-specific user inputs comprise estimated material [amount] associated with the respective project, …, estimated labor [hours] associated with the respective project, current actual [progress] expended during execution of the respective project (Paragraph 0013, In some embodiments, the estimated resources can include the number of hours, the number of developers required, hardware and/or software resources, etc. 
In some embodiments, the estimated resources can be constrained based on availability, such as the number of available developers including identifying developers with different levels of experience and ability; Paragraph 0017, In various embodiments, based on the tracked data gathered from tracking the task, predictions can be made on future tasks such as the estimated amount of resources such as time, developers, and/or hardware required to perform one or more steps of the task; Paragraph 0017, In some embodiments, based on the generated specification of the task, the task is automatically tracked using the computer program development project management software. For example, the automatically generated task specification is entered into the computer program development project management software for tracking the task, such as the progress of the task; Paragraph 0041, In some embodiments, the tracking includes tracking resources utilized by the task as progress is made in the completion of the task), estimated project timeline for the respective project, a project start date for the respective project, and a current project progress metric associated with the respective project (Paragraph 0013, In various embodiments, based on the provided history of resource usage and available resources, the trained model can estimate the resource requirements associated with a task and/or project. In some embodiments, the estimated resource is an aggregate value such as an agreed upon metric or unit of measure. In some embodiments, the estimated resource metric is a unit of measure for expressing an estimate of the overall effort required and approximates the number of hours, team members, and/or other resources required; Paragraph 0049, The retrieved context data can include project information such as tasks, resource requirements, resource expectations, resource allocations, resource constraints, project dependencies, project time constraints, project deadlines, etc.
For example, retrieved resource allocations can include information that a certain number of developers of a certain skill set have been reserved for the project. As another example, retrieved resource constraints can include a requirement that a senior user interface developer is required to perform a sub-task of a project feature. Other retrieved information can include the members assigned to the project and their availability, the progress of the project including the current status of delays, the existing project goals and deadlines, and other dependencies of the project); determining, using an evaluation module of the value attribution framework, based at least upon the project-specific user inputs for the respective projects of the plurality of projects associated with the organization, a plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization (Paragraph 0013, In various embodiments, based on the provided history of resource usage and available resources, the trained model can estimate the resource requirements associated with a task and/or project. In some embodiments, the estimated resource is an aggregate value such as an agreed upon metric or unit of measure. In some embodiments, the estimated resource metric is a unit of measure for expressing an estimate of the overall effort required and approximates the number of hours, team members, and/or other resources required); determining, using the evaluation module of the value attribution framework, a current state for the respective projects of the plurality of projects associated with the organization (Paragraph 0017, In some embodiments, based on the generated specification of the task, the task is automatically tracked using the computer program development project management software. 
For example, the automatically generated task specification is entered into the computer program development project management software for tracking the task, such as the progress of the task); determining, using the evaluation module of the value attribution framework, a desired future state for the respective projects of the plurality of projects associated with the organization (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks); predicting, using one or more analytical models in the evaluation module of the value attribution framework, based at least on the plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization, a plurality of project-specific outputs, wherein respective project-specific outputs are associated with the respective projects of the plurality of projects associated with the organization (Paragraph 0013, In various embodiments, based on the provided history of resource usage and available resources, the trained model can estimate the resource requirements associated with a task and/or project. In some embodiments, the estimated resource is an aggregate value such as an agreed upon metric or unit of measure. 
In some embodiments, the estimated resource metric is a unit of measure for expressing an estimate of the overall effort required and approximates the number of hours, team members, and/or other resources required); and providing an organizational output based upon the plurality of project-specific outputs (Paragraph 0030, The output of a pre-trained large language model can conform to project specifications required and used by a computer program development project management service. The trained models can also include one or more analytics models for predicting resource metrics such as the amount of effort required to complete a project or a project task).

Although Goldberg et al. discloses wherein the project-specific user inputs comprise estimated resources (e.g., time, developers, and/or hardware required to perform one or more steps of the task) and progress of the task, Goldberg et al. does not specifically disclose wherein the progress of the task comprises current actual material costs and current actual labor costs. However, Nikolaev et al. discloses wherein the project-specific user inputs comprise estimated material costs associated with the respective project, current actual material costs associated with the respective project, estimated labor costs associated with the respective project, current actual labor costs expended during execution of the respective project (Column 2, lines 40-51, In some embodiments, the plurality of variables include a scope variable representing a scope of tasks executable in a project, a resource variable representing an amount of resources available to a project, a delivery time variable representing a time limit for completing a project, a finances variable representing an amount of financial resources available to a project, and a non-labor resource variable representing an amount of non-labor resources available to a project.
In some embodiments, the variable presenting a probability of project success is conditionally dependent on at least one of the scope, resource, delivery time, finances or non-labor resource variable; Column 4, lines 14-21, Resource information for a completed project can identify a plurality of resources that were consumed by the project. These resources can range from human personnel (e.g., computer programmers, accountants, employees, consultants, etc.) to physical resources (e.g., a computer resources, infrastructure resources such as a geographic locations or buildings/office space, any type of supply or manufacturing material, physical equipment items, etc.); Column 6, lines 5-30, FIG. 3 shows another exemplary probabilistic graphical model 300 generated using flowchart 100 of FIG. 1 to predict the success of a project. The model 300 includes at least a scope variable 302 representing, for example, the scope and type of a task planned for a project (e.g., task complexity), a resource variable 304 representing, for example, a skill level of a human resource available to complete the project, a delivery time variable 306 representing, for example, a time limitation for completing the project, a finances variable 308 representing, for example, the cost allocated to complete the project and a non-labor resource variable 310 representing, for example, a non-labor related resource allocated for completing the project. These variables all have a causal effect on the project variable 312, which represents the likelihood of success of a project. The project variable 312 can in turn influence the state of one or more other variables, such as a delivered scope variable 314, a spend finances variable 316 and a used resources variable 318. The delivered scope variable 314 can indicate the scope of the project delivered and can assume one of three states--partial scope, full scope or extended scope. 
The spend finances variable 316 can indicate the amount of finances spent on the project and can assume three states--under budget, on budget or over budget. The used resources variable 318 can indicate the amount of resources consumed by the project and can assume three states--partial utilization, full utilization or over utilization).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for organizational transformation from a current state to a target state (e.g., identifying a task completion pathway according to the desired goal or specification) of the invention of Goldberg et al. to further specify wherein the identification of the possible task completion pathways is performed using a Markov model of the invention of Nikolaev et al. because doing so would allow the method to use a Markov rule-based approach to estimate the probability of success of a new or in-flight project based on a number of possible values (see Nikolaev et al., Column 5, lines 19-46). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 17, which is dependent on claim 16, the combination of Goldberg et al. and Nikolaev et al. discloses all the limitations in claim 16. Goldberg et al.
further discloses calculating, using the evaluation module of the value attribution framework, based on the plurality of project-level micro-behavior-based performance metrics for the respective projects of the plurality of projects associated with the organization, current project-level entropy scores for the respective projects of the plurality of projects associated with the organization (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality; Paragraph 0032, An analytics model can be trained to predict resource metrics such as resource estimates and/or utilization including estimated amounts of work estimates; Paragraph 0048, At 601, an analytics machine learning model is trained. For example, a machine learning model is trained using analytics data including resource analytics data. The training data can include resource metrics data from projects as well as data related to project resources and resource constraints such as the number of developers required and the skills and skill levels of the developers. In various embodiments, the training data can be project data from the same customer such as past and current project data limited to only the same customer. In some embodiments, the training data is project data aggregated across multiple customers and can be aggregated based on customers with similar requirements. In various embodiments, the training data can be anonymized as part of preparing the data for use in training; Paragraph 0037, At 401, a new project transcript and/or project data is received. For example, project data describing a project is received. 
The received project data can include a transcript of a project meeting, such as a transcript of a team meeting describing and/or discussing project features, the project status, the project goals, etc.; Paragraph 0049, The retrieved context data can include project information such as tasks, resource requirements, resource expectations, resource allocations, resource constraints, project dependencies, project time constraints, project deadlines, etc. For example, retrieved resource allocations can include information that a certain number of developers of a certain skill set have been reserved for the project. As another example, retrieved resource constraints can include a requirement that a senior user interface developer is required to perform a sub-task of a project feature. Other retrieved information can include the members assigned to the project and their availability, the progress of the project including the current status of delays, the existing project goals and deadlines, and other dependencies of the project; As stated in Paragraph 0184 of Applicant’s specification, the entropy score may represent the alignment of actions and resources toward successfully achieving the target state. Therefore, based on broadest reasonable interpretation in light of the specification, Goldberg et al. discloses an entropy score since it calculates project deviations from target goals such as delays).

Regarding claim 18, which is dependent on claim 16, the combination of Goldberg et al. and Nikolaev et al. discloses all the limitations in claim 16. Goldberg et al. further discloses receiving, at the evaluation module of the value attribution framework, historical project-level data for historical projects associated with the organization (Paragraph 0013, In various embodiments, based on the provided history of resource usage and available resources, the trained model can estimate the resource requirements associated with a task and/or project.
In some embodiments, the estimated resource is an aggregate value such as an agreed upon metric or unit of measure. In some embodiments, the estimated resource metric is a unit of measure for expressing an estimate of the overall effort required and approximates the number of hours, team members, and/or other resources required).

Regarding claim 19, which is dependent on claim 16, the combination of Goldberg et al. and Nikolaev et al. discloses all the limitations in claim 16. Goldberg et al. further discloses wherein the historical project-level data comprises one or more of: initially estimated material costs associated with respective historical projects, actual material costs associated with the respective historical projects, initially estimated labor costs associated with the respective historical projects, actual labor costs expended during execution of the respective historical projects, initially estimated project timeline for the respective historical projects, an actual project start date for the respective historical projects, or an actual project end date for the respective historical projects (Paragraph 0013, In various embodiments, based on the provided history of resource usage and available resources, the trained model can estimate the resource requirements associated with a task and/or project. In some embodiments, the estimated resource is an aggregate value such as an agreed upon metric or unit of measure. In some embodiments, the estimated resource metric is a unit of measure for expressing an estimate of the overall effort required and approximates the number of hours, team members, and/or other resources required; It can be noted that the claim language is written in alternative form. The limitation taught by Goldberg et al. is based on “initially estimated project timeline for the respective historical projects”).

Regarding claim 20, which is dependent on claim 16, the combination of Goldberg et al. and Nikolaev et al.
discloses all the limitations in claim 16. Goldberg et al. further discloses wherein the one or more analytical models in the evaluation module of the value attribution framework comprise one or more of: a dissipative structure model, a lookalike model, a Bayesian priors model, a fault-tree analysis model, a decision-tree analysis model, a common language model, a large language model, or a cost-benefit attribution logical analysis model (Paragraph 0012, Using the disclosed techniques, the new tasks can be automatically generated based on analyzing a transcript that describes the desired functionality, such as a transcript of a team meeting describing project goals and functionality. Using the transcript, which itself can be automatically generated, such as from a video conference recording, and any existing relevant project context retrieved from the computer program development project management service, a large language model can be used to automatically determine and generate the desired scheduled tasks. For example, the large language model can be provided with a generative artificial intelligence (AI) prompt that embeds the retrieved project context and relevant transcript details. In response to a provided generative AI prompt, a scheduled project task is automatically generated for and tracked by the computer program development project management service; It can be noted that the claim language is written in alternative form. The limitation taught by Goldberg et al. is based on a large language model).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

Krunic et al. (US 2025/0322341 A1) – discloses a processing system that inputs information from various sources to an LLM and, in response, receives from the LLM graph data characterizing a graph of the market (e.g., as it is today).
Further, based on receiving data characterizing a given marketing strategy, the processing system generates queries necessary for long term inference structure. The processing system then generates node data identifying nodes of the graph of the market that match each of the generated queries. The processing system can also receive a query request, and can match the query request to one of the generated queries. Based on the matched query, the processing system generates resolution data characterizing at least one Customer Value Proposition (see at least Paragraph 0029).

Nadel et al. (US 2026/0004207 A1) – discloses that approaches described herein provide technical solutions to technical problems in the deployment of machine learning models, particularly LLMs, for goal-oriented use cases. The technical solutions use a multi-agent, multi-player channel where agent objects (also called software agents, AI agents, bots, or simply agents) and humans work together to achieve an outcome. The approaches improve the process of problem solving and project management using LLMs by creating multiple agents with distinct skills and working memory that work together to complete tasks in an agentic manner and evaluate the output of each task against the initial goal (see at least Paragraph 0030).

Zhu (CN 119026864 A) – discloses a field scheduling management system for building construction, which is used for providing a comprehensive and real-time management platform, capable of effectively coordinating various resources of the construction field, optimizing the construction plan and improving the decision supporting ability (see at least Page 3).

Schroder (Schroder, M., 2023. Autoscrum: Automating project planning using large language models. arXiv preprint arXiv:2306.03197) – discloses that recent advancements in the field of large language models have made it possible to use language models for advanced reasoning.
In this paper we leverage this ability for designing complex project plans based only on knowing the current state and the desired state. Two approaches are demonstrated - a scrum based approach and a shortcut plan approach. The scrum based approach executes an automated process of requirements gathering, user story mapping, feature identification, task decomposition and finally generates questions and search terms for seeking out domain specific information to assist with task completion (see at least Abstract & 4.2 Requirement Identification).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ whose telephone number is (571)272-4668. The examiner can normally be reached Mon-Thurs 7:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia H Munson, can be reached at (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARJORIE PUJOLS-CRUZ/
Examiner, Art Unit 3624
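As context for the §103 combination above: the quoted passage from Nikolaev's FIG. 3 describes five parent variables with a causal effect on a project-success variable, which in turn drives outcome variables such as "spend finances" (under/on/over budget). That structure can be sketched as a small conditional-probability computation. The sketch below illustrates the structure only; every numeric weight in it is a hypothetical placeholder, not taken from either reference:

```python
# Illustrative sketch of the graphical model Nikolaev et al. is cited for:
# five parent variables causally affecting a project-success variable,
# which in turn influences outcome variables such as "spend finances".
# All numeric weights are hypothetical placeholders.

PARENTS = ("scope", "resource", "delivery_time", "finances", "non_labor")

# Hypothetical noisy-AND weights: each parent independently "passes" with
# the given probability when favorable, or a flat 0.5 when unfavorable.
FAVORABLE_WEIGHT = {"scope": 0.9, "resource": 0.85, "delivery_time": 0.8,
                    "finances": 0.9, "non_labor": 0.95}
UNFAVORABLE_WEIGHT = 0.5

def p_success(states):
    """Combine parent states into P(project success) via a noisy-AND."""
    p = 1.0
    for parent in PARENTS:
        p *= (FAVORABLE_WEIGHT[parent] if states[parent] == "favorable"
              else UNFAVORABLE_WEIGHT)
    return p

def spend_finances_distribution(p_succ):
    """Hypothetical CPT for the 'spend finances' child variable; the three
    states come from the quotation, the split between them does not."""
    return {"under budget": 0.2 * p_succ,
            "on budget": 0.8 * p_succ,
            "over budget": 1.0 - p_succ}
```

Under these placeholder weights, all-favorable parents yield P(success) ≈ 0.52, and the three "spend finances" states always sum to 1, mirroring the mutually exclusive under/on/over-budget states the examiner quotes.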

Prosecution Timeline

Dec 23, 2024
Application Filed
Feb 26, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12106240
SYSTEMS AND METHODS FOR ANALYZING USER PROJECTS
2y 5m to grant Granted Oct 01, 2024
Patent 12014298
AUTOMATICALLY SCHEDULING AND ROUTE PLANNING FOR SERVICE PROVIDERS
2y 5m to grant Granted Jun 18, 2024
Patent 11966927
Multi-Task Deep Learning of Client Demand
2y 5m to grant Granted Apr 23, 2024
Patent 11941651
LCP Pricing Tool
2y 5m to grant Granted Mar 26, 2024
Patent 11847602
SYSTEM AND METHOD FOR DETERMINING AND UTILIZING REPEATED CONVERSATIONS IN CONTACT CENTER QUALITY PROCESSES
2y 5m to grant Granted Dec 19, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
18%
Grant Probability
46%
With Interview (+27.9%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
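The headline projections can be reconciled with the career statistics shown earlier (25 granted of 136 resolved cases; a +27.9-point interview lift), assuming the lift is additive in percentage points. A quick check:

```python
# Reproduce the dashboard's headline figures from the reported career data.
granted, resolved = 25, 136      # career allowances / resolved cases
interview_lift = 27.9            # reported lift, in percentage points

grant_probability = 100 * granted / resolved            # 18.38... -> "18%"
with_interview = grant_probability + interview_lift     # 46.28... -> "46%"

print(round(grant_probability), round(with_interview))  # prints: 18 46
```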
