DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/23/25 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. 101 rejection of the claims, Applicant argues that the independent claims as amended involve components invoking trained generative models that are inherently machine-based and cannot be performed mentally, that a human cannot invoke a trained neural network, and that, as such, the claims do not fall within the mental processes category of abstract ideas (Arguments, pg. 9, fourth para. – pg. 10, third para.).
Examiner respectfully disagrees, as “wherein executing each component comprises invoking a machine-learning model or agent service configured to generate a machine-derived context from input data” corresponds to a data evaluation/analysis step of generating data from input using a model or service (i.e., the use/invoking of a generic computer component), and using a generic computer to perform the step corresponds to tying an abstract idea to a generic computer. Furthermore, contrary to Applicant's argument, the claim language does not recite actively training a model or utilizing a trained model.
Applicant also argues that the claimed machine cognition workflow engine generates workflow instances, performs verification at intermediate stages, and selectively rewinds execution to earlier components using different models or agents, thereby improving the functioning of AI-based computing systems by reducing hallucinations, cascading errors, and incomplete responses; that the rewinding mechanism is not a generic control loop; and that the claimed verification response generation process dynamically alters which models are invoked and which agents are used, such that the claims provide an improvement to AI-based systems (Arguments, pg. 10, fourth – eighth para.).
Examiner respectfully disagrees, as the workflow generation and verification steps correspond to data analysis steps as identified in the rejection (9/30/25). The claims do not recite rewinding execution to earlier components using different models or agents, nor a verification response generation process that dynamically alters which models are invoked, as Applicant argues. Nevertheless, reverting to a previous analysis (i.e., a re-analysis) and a verification step correspond to a mental process of performing additional analysis/evaluation without providing significantly more. Also, reducing errors/hallucinations and incomplete responses corresponds to reducing errors in the responses generated/provided, i.e., reducing errors in the abstract idea, but not in the computer or other technology. See Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1364 (Fed. Cir. 2020) – “It is not enough, however, to merely improve a fundamental practice or abstract process by invoking a computer merely as a tool.”
Applicant further argues that the instant claims do not merely use machine learning as a black box, but instead recite how machine learning models are orchestrated, verified, and re-invoked to achieve improved system behavior, providing specific improvements to machine learning processes, and that, as such, the claims do not recite well-understood, routine, and conventional functions (Arguments, pg. 10, ninth para. – pg. 11, fourth para.).
Examiner respectfully disagrees as the claims involve merely invoking/using a machine learning model/agent service to generate context from input data. There is no improvement to the functioning of the machine learning model/agent nor to the computer implementing the model as a result of the claim language.
Regarding the 35 U.S.C. 103 rejection of the claims over references Joynt and Poirier, Applicant argues that provisional application 63/543,454 (filed 10/10/23) of reference Joynt US 2025/0117388 A1 (filed 4/23/24) fails to provide support for paragraphs [0033] and [0056] of Joynt, such that Joynt does not qualify as prior art against the instant invention (filed 3/11/24), and on that basis requests withdrawal of the rejection (Arguments, pg. 11, fifth para. – pg. 13). Examiner respectfully disagrees.
Joynt, in the argued paragraphs, provides the following:
[0033] Selection module 204 is responsible for identifying, for each step provided by planner 202, an approach to executing that step using specialist models 220. As described in greater detail below, selection module 204 is responsible for identifying, for each step identified by planner 202, a corresponding set of models to be queried at that particular step and a method by which this/these model(s) are to be queried. Selection module 204 can, by way of example, identify a subset of models suitable for execution each step from among all models (210 and 220) available within complex prompt handling system 110, and provide a prompt (general or step-specific, and either generated by planner 202 as a part of the plan, or generated by selection module 204 based on the plan and model record 206; see below) corresponding to the step in question to the selected each of the models of that subset.
[0056] Process 600b represents a different approach from process 600a, discussed above. Specifically, process 600b describes a feed-forward (i.e. serial) approach to processing step 506. According to process 600b, step prompt 602 is transmitted to one specialist model 220a, which produces a corresponding intermediate output 610a. This intermediate output 610a is provided as input to another specialist model 220d, which likewise produces a corresponding intermediate output 610d. This approach can contain any number of serially-linked specialist models 220 (illustrated as three models 220a, 220d, 220k), the final intermediate output 610k of which is provided to integration module 208 for generation of step output 608b responsive to step 506. In some versions of process 600b, some or all downstream specialist models (e.g., 220d, 220k) can also be provided with step prompt 602 (i.e., in addition to feed-forward inputs from a preceding model). Similarly, integration module 208 can also receive non-terminal intermediate outputs (e.g., 610a, 610d) in some versions of process 600b, and aggregate these results when producing step output 608b.
According to MPEP 211.05, “Under 35 U.S.C. 119(e), the written description and drawing(s) (if any) of the provisional application must adequately support and enable the subject matter claimed in the nonprovisional application that claims the benefit of the provisional application.” (emphasis added)
Figures 5 and 6 as well as paragraphs [0004], [0024], and [0044]-[0045] of the provisional application describe decomposing a user’s complex prompt into a plan having multiple steps; the use of a selection module 204 to select one of the plurality of specialized LLMs 220a-n to execute each of the multiple steps/tasks; and delegating the steps/tasks as appropriate to the specialist models 220a-n in a parallel (fig. 5) or serial manner (fig. 6), where, in the serial manner, an output of specialist model 220a responsive to task/step 506a is provided as an input to specialist model 220d/e in order to handle task 506, the output of specialist model 220d/e responsive to the aforementioned inputs is provided as an input to specialist model 220n, and the output of specialist model 220n is provided directly to integration module 208. This is consistent with Joynt’s disclosure in paragraphs [0033] and [0056]. Applicant does not identify any limitation of the independent claims that is unaddressed by Joynt or the provisional application of Joynt.
Therefore, Examiner maintains that the provisional application provides support for the cited paragraphs of Joynt, and as such, the rejection of the independent claims, as well as of the claims dependent therefrom, is maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of query analysis without significantly more. Claims 1, 11 and 20 recite steps of a machine cognition workflow engine with a rewinding mechanism, configured to: receive a prompt (i.e., a data collecting/gathering step); extract a message and a context of the prompt (i.e., a data analysis step); generate a workflow instance based on the context (i.e., a data analysis step); execute the generated workflow instance comprising a plurality of components (i.e., a data analysis step); execute a first set of components of the plurality of components, including a first component, to generate a first context (i.e., a data analysis step); based on the first context, execute a second component following the first set of components to generate a second context (i.e., a data analysis step); perform verification of the second context to generate a verification response (i.e., a data analysis/evaluation step); determine whether the verification response is below a first predetermined threshold (i.e., a judgment step); responsive to determining that the verification response is below the first predetermined threshold, execute the first set of components again to regenerate the first context and the second context (i.e., a judgment step); responsive to determining that the verification response is above the first predetermined threshold, execute a remainder of the plurality of components of the generated workflow instance based on the second context to generate a response for the prompt (i.e., a judgment step); and output the generated response for the prompt (i.e., a post-solution step of providing output), wherein executing each component comprises invoking a machine-learning model or agent service configured to generate a machine-derived context from input data (i.e., a data analysis step of generating data from input), corresponding to steps achievable by a
human in mentally/manually analyzing data and providing output as a result of the analysis, and as such, the steps correspond to the mental processes category of abstract ideas. This judicial exception is not integrated into a practical application because the claims are directed to an abstract idea with additional generic computer elements, where the generically recited computer elements (system, engine, components, computing method, processing circuitry, storage device) do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps “responsive to determining that the verification response is below the first predetermined threshold, execute the first set of components again to regenerate the first context and the second context”, “responsive to determining that the verification response is above the first predetermined threshold, execute a remainder of the plurality of components of the generated workflow instance based on the second context to generate a response for the prompt”, “outputting the generated response for the prompt” and “wherein executing each component comprises invoking a machine-learning model or agent service configured to generate a machine-derived context from input data” correspond to well-understood, routine, conventional computer functions of “gathering and analyzing information using conventional techniques and displaying the result”, “collecting information, analyzing it, and displaying certain results of the collection and analysis”, and “invoking computers or other machinery merely as a tool to perform an existing process”, as recognized by the court decisions listed in MPEP § 2106.05 and as provided by cited references Joynt and Poirier (PTO 892, 9/30/25).
The dependent claims 2-10 and 12-19 also recite mental processes and do not add significantly more than the abstract idea, and, as such, are similarly rejected.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
1. Claims 1-6, 10-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Joynt US 2025/0117388 A1 (“Joynt”) in view of Poirier et al. US 2024/0202539 A1 (“Poirier”).
Per claim 1, Joynt discloses a computing system, comprising:
a machine cognition workflow engine, configured to: receive a prompt (The term “compound prompt” can refer to a prompt explicitly including multiple separate tasks or steps, e.g., “(1) identify the three highest-selling jazz musicians of the 1970s, and then (2) generate a report comparing the musical styles of these three musicians.” … this disclosure will treat complex and compound prompts as equivalent …, para. [0017]; para. [0024]);
extract a message and a context of the prompt (para. [0017]; para. [0028]; Planner 202 can, for example, be a planner such as used in conventional systems such as Semantic Kernel or LangChain. In the most general case, planner 202 can be any suitable natural language processing (NLP) agent capable of identifying a plurality of actionable tasks (i.e., steps) for the resolution of complex prompt 120. Planner 202 can, for example, make use of model 210 for generative production of a response to complex prompt 120 that identifies these actionable tasks …, para. [0032]; para. [0035], compound/complex prompt as including current step/message and previous/context step/message);
generate a workflow instance based on the context (fig. 5, element 504; This method uses a plurality of specialized large language models (LLMs) includes decomposing the compound prompt into a plan with multiple steps. For each step, an approach defining a subset of the specialized LLMs is selected and executed to produce multiple model outputs, and these model outputs are collectively used to generate a step output. The step outputs associated with each step are assembled into a syntactically and semantically coherent final output …, para. [0004]; para. [0029]; para. [0033]);
execute the generated workflow instance comprising a plurality of components (fig. 5, elements 204, 504; This method uses a plurality of specialized large language models (LLMs) includes decomposing the compound prompt into a plan with multiple steps. For each step, an approach defining a subset of the specialized LLMs is selected and executed to produce multiple model outputs …, para. [0004]; As shown in FIG. 2, complex prompt handling system includes manager 200 and several specialist models 220a-n …, para. [0028]; para. [0029]; para. [0033]);
execute a first set of components of the plurality of components, including a first component, to generate a first context (fig. 5, elements 220; fig. 6a; para. [0037]; para. [0056]; a first subset traversal method whereby model outputs are generated using the subset of the plurality of LLMs, in parallel; and a second subset traversal method whereby model outputs are generated in series, with an output of at least a first of the subset of the plurality of LLMs used as an input of at least a second subset of the plurality of LLMs, para. [0066], LLM models as including specialist models 220a-n, output of first subset of models/LLMs as provided as input to second model of the subset of models 220);
based on the first context, execute a second component following the first set of components to generate a second context (fig. 5; fig. 6a; fig. 6b; para. [0037]; para. [0056]; a first subset traversal method whereby model outputs are generated using the subset of the plurality of LLMs, in parallel; and a second subset traversal method whereby model outputs are generated in series, with an output of at least a first of the subset of the plurality of LLMs used as an input of at least a second subset of the plurality of LLMs, para. [0066], output of second subset model/LLM as provided as input to third model among models 220a-n);
wherein executing each component comprises invoking a machine-learning model or agent service configured to generate a machine-derived context from input data (Selection module 204 is responsible for identifying, for each step provided by planner 202, an approach to executing that step using specialist models 220…., para. [0033]).
Joynt does not explicitly disclose a machine cognition workflow engine with a rewinding mechanism; perform verification of the second context to generate a verification response; determine whether the verification response is below a first predetermined threshold; responsive to determining that the verification response is below the first predetermined threshold, execute the first set of components again to regenerate the first context and the second context; responsive to determining that the verification response is above the first predetermined threshold, execute a remainder of the plurality of components of the generated workflow instance based on the second context to generate a response for the prompt; or output the generated response for the prompt.
However, these features are taught by Poirier:
a machine cognition workflow engine with a rewinding mechanism (fig. 8A);
perform verification of the second context to generate a verification response (retriever models (e.g., retriever models or a retrieval agent) can provide additional retrieved information to the large language models to generate additional context-based synthetic output until context validation criteria is satisfied.…, para. [0065]);
determine whether the verification response is below a first predetermined threshold (para. [0065]; The method may comprise validating the one or more responses to the prompt…. The validating may comprise not validating a response to the prompt if a measure of similarity and/or consistency (between that response to the prompt and the additional data) is not greater than a second threshold …, para. [0243]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response…., para. [0252]);
responsive to determining that the verification response is below the first predetermined threshold, execute the first set of components again to regenerate the first context and the second context (fig. 8A; para. [0163]; The method may comprise validating the one or more responses to the prompt…. The validating may comprise not validating a response to the prompt if a measure of similarity and/or consistency (between that response to the prompt and the additional data) is not greater than a second threshold …, para. [0243]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response…., para. [0252]);
responsive to determining that the verification response is above the first predetermined threshold, execute a remainder of the plurality of components of the generated workflow instance based on the second context to generate a response for the prompt (para. [0172]; para. [0243]-[0244]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response …, para. [0252]); and
output the generated response for the prompt (para. [0250]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Poirier with the system of Joynt in arriving at the missing features of Joynt, because such a combination would have resulted in avoiding or mitigating the output of inconsistent or inaccurate results (Poirier, para. [0244]).
Per claim 2, Joynt in view of Poirier discloses the computing system of claim 1,
Joynt discloses wherein the first component is initially configured to generate the first context via a first response strategy (fig. 5; fig. 6b; para. [0056]; para. [0066]); and
Joynt discloses the first context is generated via a second response strategy (fig. 6b); and
Poirier discloses when the first context is regenerated by the first component responsive to determining that the verification response is below the first predetermined threshold, the first context is regenerated via a second response strategy (fig. 8A; para. [0163]).
Per claim 3, Joynt in view of Poirier discloses the computing system of claim 2,
Joynt discloses wherein the first context generated via the first response strategy is generated via a first generative model (fig. 5; fig. 6b; para. [0029]; para. [0056]; para. [0066]); and
Poirier discloses the first context generated via the second response strategy is generated via a second generative model (fig. 8A, elements 808).
Per claim 4, Joynt in view of Poirier discloses the computing system of claim 2,
Poirier discloses: wherein the first context generated via the first response strategy is generated via a first agent (fig. 8A; para. [0029]; para. [0163]); and
the first context generated via the second response strategy is generated via a second agent (fig. 8A; para. [0029]).
Per claim 5, Joynt in view of Poirier discloses the computing system of claim 2,
Joynt discloses wherein the first context generated via the first response strategy is generated via a first parallel processing pathway (fig. 6b; para. [0066]); and
the first context generated via the second response strategy is generated via a second parallel processing pathway (fig. 6b; para. [0066]).
Per claim 6, Joynt in view of Poirier discloses the computing system of claim 1,
Joynt discloses wherein outputs of the plurality of components are machine learning model outputs generated via multi-stage machine learning model chaining via the plurality of components (fig. 6b; para. [0066]).
Per claim 10, Joynt in view of Poirier discloses the computing system of claim 1,
Poirier discloses wherein responsive to determining that the verification response is below a second predetermined threshold, the first context is regenerated by executing the first set of components and a second set of components preceding the first set of components (fig. 8A; para. [0243]; para. [0252]).
Joynt in view of Poirier does not explicitly disclose the second predetermined threshold being below the first predetermined threshold.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to try implementing the second predetermined threshold below the first predetermined threshold, using Poirier’s first and second thresholds, as a matter of design choice, so as to provide an alternate method of validating a response.
Per claim 11, Joynt discloses a computing method, comprising:
receiving a prompt (The term “compound prompt” can refer to a prompt explicitly including multiple separate tasks or steps, e.g., “(1) identify the three highest-selling jazz musicians of the 1970s, and then (2) generate a report comparing the musical styles of these three musicians.” … this disclosure will treat complex and compound prompts as equivalent …, para. [0017]; para. [0024]);
extracting a message and a context of the prompt (para. [0017]; para. [0028]; Planner 202 can, for example, be a planner such as used in conventional systems such as Semantic Kernel or LangChain. In the most general case, planner 202 can be any suitable natural language processing (NLP) agent capable of identifying a plurality of actionable tasks (i.e., steps) for the resolution of complex prompt 120. Planner 202 can, for example, make use of model 210 for generative production of a response to complex prompt 120 that identifies these actionable tasks …, para. [0032]; para. [0035], compound/complex prompt as including current step/message and previous/context step/message);
generating a workflow instance based on the context (fig. 5, element 504; This method uses a plurality of specialized large language models (LLMs) includes decomposing the compound prompt into a plan with multiple steps. For each step, an approach defining a subset of the specialized LLMs is selected and executed to produce multiple model outputs, and these model outputs are collectively used to generate a step output. The step outputs associated with each step are assembled into a syntactically and semantically coherent final output …, para. [0004]; para. [0029]; para. [0033]);
executing the generated workflow instance comprising a plurality of components (fig. 5, elements 204, 504; This method uses a plurality of specialized large language models (LLMs) includes decomposing the compound prompt into a plan with multiple steps. For each step, an approach defining a subset of the specialized LLMs is selected and executed to produce multiple model outputs …, para. [0004]; As shown in FIG. 2, complex prompt handling system includes manager 200 and several specialist models 220a-n …, para. [0028]; para. [0029]; para. [0033]);
executing a first set of components of the plurality of components, including a first component, to generate a first context (fig. 5, elements 220; fig. 6a; para. [0037]; para. [0056]; a first subset traversal method whereby model outputs are generated using the subset of the plurality of LLMs, in parallel; and a second subset traversal method whereby model outputs are generated in series, with an output of at least a first of the subset of the plurality of LLMs used as an input of at least a second subset of the plurality of LLMs, para. [0066], LLM models as including specialist models 220a-n, output of first subset of models/LLMs as provided as input to second model of the subset of models 220);
based on the first context, executing a second component following the first set of components to generate a second context (fig. 5; fig. 6a; fig. 6b; para. [0037]; para. [0056]; a first subset traversal method whereby model outputs are generated using the subset of the plurality of LLMs, in parallel; and a second subset traversal method whereby model outputs are generated in series, with an output of at least a first of the subset of the plurality of LLMs used as an input of at least a second subset of the plurality of LLMs, para. [0066], output of second subset model/LLM as provided as input to third model among models 220a-n);
wherein executing each component comprises invoking a machine-learning model or agent service configured to generate a machine-derived context from input data (Selection module 204 is responsible for identifying, for each step provided by planner 202, an approach to executing that step using specialist models 220…., para. [0033]).
Joynt does not explicitly disclose performing verification of the second context to generate a verification response; determining whether the verification response is below a first predetermined threshold; responsive to determining that the verification response is below the first predetermined threshold, executing the first set of components again to regenerate the first context and the second context; responsive to determining that the verification response is above the first predetermined threshold, executing a remainder of the plurality of components of the generated workflow instance based on the second context to generate a response for the prompt; or outputting the generated response for the prompt.
However, these features are taught by Poirier:
performing verification of the second context to generate a verification response (retriever models (e.g., retriever models or a retrieval agent) can provide additional retrieved information to the large language models to generate additional context-based synthetic output until context validation criteria is satisfied.…, para. [0065]);
determining whether the verification response is below a first predetermined threshold (para. [0065]; The method may comprise validating the one or more responses to the prompt…. The validating may comprise not validating a response to the prompt if a measure of similarity and/or consistency (between that response to the prompt and the additional data) is not greater than a second threshold …, para. [0243]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response…., para. [0252]);
responsive to determining that the verification response is below the first predetermined threshold, executing the first set of components again to regenerate the first context and the second context (fig. 8A; para. [0163]; The method may comprise validating the one or more responses to the prompt…. The validating may comprise not validating a response to the prompt if a measure of similarity and/or consistency (between that response to the prompt and the additional data) is not greater than a second threshold …, para. [0243]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response…., para. [0252]);
responsive to determining that the verification response is above the first predetermined threshold, executing a remainder of the plurality of components of the generated workflow instance based on the second context to generate a response for the prompt (para. [0172]; para. [0243]-[0244]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response …, para. [0252]); and
outputting the generated response for the prompt (para. [0250]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Poirier with the method of Joynt in arriving at the missing features of Joynt, because such a combination would have resulted in avoiding or mitigating the output of inconsistent or inaccurate results (Poirier, para. [0244]).
Per claim 12, Joynt in view of Poirier discloses the computing method of claim 11,
Joynt discloses wherein the first component is initially configured to generate the first context via a first response strategy (fig. 5; fig. 6b; para. [0056]; para. [0066]); and
Joynt discloses the first context is generated via a second response strategy (fig. 5; fig. 6b; para. [0066]); and
Poirier discloses when the first context is regenerated by the first component responsive to determining that the verification response is below the first predetermined threshold, the first context is regenerated via a second response strategy (para. [0163]).
Per claim 13, Joynt in view of Poirier discloses the computing method of claim 12,
Joynt discloses wherein the first context generated via the first response strategy is generated via a first generative model (fig. 6b; para. [0029]; para. [0056]; para. [0066]); and
Poirier discloses the first context generated via the second response strategy is generated via a second generative model (fig. 8A, element 808).
Per claim 14, Joynt in view of Poirier discloses the computing method of claim 12,
Poirier discloses wherein the first context generated via the first response strategy is generated via a first agent (fig. 8A; para. [0029]; para. [0163]); and
the first context generated via the second response strategy is generated via a second agent (fig. 8A; para. [0029]).
Per claim 15, Joynt in view of Poirier discloses the computing method of claim 12,
Joynt discloses wherein the first context generated via the first response strategy is generated via a first parallel processing pathway (fig. 6b); and
the first context generated via the second response strategy is generated via a second parallel processing pathway (fig. 6b).
Per claim 16, Joynt in view of Poirier discloses the computing method of claim 11,
Joynt discloses wherein outputs of the plurality of components are machine learning model outputs generated via multi-stage machine learning model chaining via the plurality of components (fig. 5; fig. 6b; para. [0066]).
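For illustration only, the multi-stage machine learning model chaining mapped to Joynt above (serial traversal, with each model's output supplied as the next model's input, per the quoted para. [0066]) may be sketched as follows. The sketch is hypothetical; the identifiers are illustrative and do not appear in Joynt.

```python
from functools import reduce

# Hypothetical sketch of multi-stage model chaining: models run in
# series, with the output of each model fed forward as the input to
# the next. Identifiers are illustrative only.

def chain_models(models, initial_input):
    """Run models in series, feeding each output to the next model."""
    return reduce(lambda output, model: model(output), models, initial_input)
```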
Per claim 20, Joynt discloses a computing system, comprising:
processing circuitry (para. [0020]); and
a storage device storing a program executable by the processing circuitry to: execute a workflow instance comprising a plurality of components (fig. 5, elements 204, 504; This method uses a plurality of specialized large language models (LLMs) includes decomposing the compound prompt into a plan with multiple steps. For each step, an approach defining a subset of the specialized LLMs is selected and executed to produce multiple model outputs …, para. [0004]; para. [0020]; As shown in FIG. 2, complex prompt handling system includes manager 200 and several specialist models 220a-n …, para. [0028]; para. [0029]; para. [0033]);
execute a first set of components of the plurality of components, including a first component, to generate a first context (fig. 5, elements 220; fig. 6a; para. [0037]; para. [0056]; a first subset traversal method whereby model outputs are generated using the subset of the plurality of LLMs, in parallel; and a second subset traversal method whereby model outputs are generated in series, with an output of at least a first of the subset of the plurality of LLMs used as an input of at least a second subset of the plurality of LLMs, para. [0066], LLM models as including specialist models 220a-n, output of first subset of models/LLMs as provided as input to second model of the subset of models 220);
based on the first context, execute a second component following the first set of components to generate a second context (fig. 5; fig. 6a; fig. 6b; para. [0037]; para. [0056]; a first subset traversal method whereby model outputs are generated using the subset of the plurality of LLMs, in parallel; and a second subset traversal method whereby model outputs are generated in series, with an output of at least a first of the subset of the plurality of LLMs used as an input of at least a second subset of the plurality of LLMs, para. [0066], output of second subset model/LLM as provided as input to third model among models 220a-n);
generate a response based on the second context (fig. 5; fig. 6b); and
output the generated response (fig. 5; fig. 6b);
wherein executing each component comprises invoking a machine-learning model or agent service configured to generate a machine-derived context from input data (Selection module 204 is responsible for identifying, for each step provided by planner 202, an approach to executing that step using specialist models 220…., para. [0033]).
Joynt does not explicitly disclose perform verification of the second context to generate a verification response, determine whether the verification response is below a first predetermined threshold, responsive to determining that the verification response is below the first predetermined threshold, execute the first set of components again to regenerate the first context and the second context.
However, these features are taught by Poirier:
perform verification of the second context to generate a verification response (retriever models (e.g., retriever models or a retrieval agent) can provide additional retrieved information to the large language models to generate additional context-based synthetic output until context validation criteria is satisfied.…, para. [0065]);
determine whether the verification response is below a first predetermined threshold (para. [0065]; The method may comprise validating the one or more responses to the prompt…. The validating may comprise not validating a response to the prompt if a measure of similarity and/or consistency (between that response to the prompt and the additional data) is not greater than a second threshold …, para. [0243]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response…., para. [0252]);
responsive to determining that the verification response is below the first predetermined threshold, execute the first set of components again to regenerate the first context and the second context (fig. 8A; para. [0163]; The method may comprise validating the one or more responses to the prompt…. The validating may comprise not validating a response to the prompt if a measure of similarity and/or consistency (between that response to the prompt and the additional data) is not greater than a second threshold …, para. [0243]; The context validation criteria may include a threshold for identifying source material from an enterprise data system that corroborate the response…., para. [0252]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Poirier with the system of Joynt in arriving at the missing features of Joynt, because such a combination would have resulted in avoiding or mitigating the output of inconsistent or inaccurate results (Poirier, para. [0244]).
2. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Joynt in view of Poirier as applied to claims 1 and 11 above, and further in view of Kotikalapudi et al. (US 2024/0394471 A1) (“Kotikalapudi”).
Per claim 9, Joynt in view of Poirier discloses the computing system of claim 1,
Joynt in view of Poirier does not explicitly disclose wherein the verification response is recorded and outputted as a verification log.
However, this feature is taught by Kotikalapudi (para. [0113]-[0115]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kotikalapudi with the system of Joynt in view of Poirier in arriving at the missing features of Joynt in view of Poirier, because such a combination would have resulted in providing training data for generating high-quality responses (Kotikalapudi, para. [0010]; para. [0113]-[0115]).
Per claim 19, Joynt in view of Poirier discloses the computing method of claim 11,
Joynt in view of Poirier does not explicitly disclose wherein the verification response is recorded and outputted as a verification log.
However, this feature is taught by Kotikalapudi (para. [0113]-[0115]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kotikalapudi with the method of Joynt in view of Poirier in arriving at the missing features of Joynt in view of Poirier, because such a combination would have resulted in providing training data for generating high-quality responses (Kotikalapudi, para. [0010]; para. [0113]-[0115]).
Allowable Subject Matter
Claims 7, 8, 17 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the attached PTO-892 form.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUJIMI A ADESANYA whose telephone number is (571) 270-3307. The examiner can normally be reached Monday-Friday, 8:30 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUJIMI A ADESANYA/Primary Examiner, Art Unit 2658