Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 21-40 are currently pending and have been examined.
Claims 1-20 were previously canceled by applicant in a preliminary amendment.
Claim Objections
Claims 25-27 are objected to because of the following informalities:
Regarding claim 25, the claim recites the terms “the context data” and “the augmented data” where it should recite “the user-related context data” and “the user-related augmented data”, as these terms appear to refer to the “user-related context data” and “user-related augmented data” recited in claims 22 and 23, respectively. Further, claims 26-27 are objected to as being dependent on objected claim 25.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 08/19/2024, 11/22/2024, 09/26/2025, and 12/02/2025 have been considered. The submissions are in compliance with the provisions of 37 CFR 1.97. Forms PTO-1449 are signed and attached hereto.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an interface system”, “a prompt generation processor”, “a data extractions system”, and “an API interaction system” in claim 37 and “a prompt/response evaluation processor” in claim 38.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim limitations “an interface system”, “a prompt generation processor”, “a data extractions system”, and “an API interaction system” in claim 37 and “a prompt/response evaluation processor” in claim 38 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. A review of the specification (par. 0052) shows each of these limitations described merely as elements of the development platform 114. They are further depicted in Fig. 4. Notably, the “processors or servers 277” of Fig. 4 are comprised within the development platform 114, but the limitations that invoke 35 U.S.C. § 112(f) are depicted as elements separate from the “processors or servers 277”. These limitations are not structurally described elsewhere in the instant specification, and therefore sufficient corresponding structure for the limitations that invoke 35 U.S.C. § 112(f) has not been provided. Examiner further notes that, for computer-implemented technologies, structural support may be derived from a “computer” plus an “algorithm” (see MPEP § 2181); however, Examiner finds no support in the specification for a specific definite structure, nor for a general-purpose processor/computer programmed to carry out an algorithm, corresponding to the functions performed by the limitations that invoke 35 U.S.C. 112(f).
Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 37-38 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
Claims 37-38 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. As described above in the 112(f) interpretation of the limitations “an interface system”, “a prompt generation processor”, “a data extractions system”, “an API interaction system” and “a prompt/response evaluation processor”, the disclosure does not provide adequate structure to perform the claimed functions. The specification does not demonstrate that applicant has made an invention that achieves the claimed function because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. See MPEP § 2181(II)(B): “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a).”
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 37-38 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention.
As per claims 37-38, as described above in the 112(f) interpretation, the limitations “an interface system”, “a prompt generation processor”, “a data extractions system”, “an API interaction system” and “a prompt/response evaluation processor”, without detail about the means to accomplish the recited functions, are not supported by adequate disclosure of corresponding structure. MPEP § 2181(II)(B) specifically indicates that “For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b).” See Net MoneyIN, Inc. v. VeriSign, Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008). See also In re Aoyama, 656 F.3d 1293, 1297, 99 USPQ2d 1936, 1939 (Fed. Cir. 2011) (“[W]hen the disclosed structure is a computer programmed to carry out an algorithm, ‘the disclosed structure is not the general purpose computer, but rather that special purpose computer programmed to perform the disclosed algorithm.’”) (quoting WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339 (Fed. Cir. 1999)).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 21, 28-32, 34 and 39-40 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Cai et al. (U.S. Pub. No. 20230112921 A1).
As per claim 21, Cai teaches the invention as claimed including a computer implemented method, comprising:
exposing an interface to a prompt generation processor in a generative artificial intelligence (AI) development system (par. 0002 transparent and controllable human-AI interaction via chaining of machine-learned language models, including … a graphical user interface for modularly building and/or editing a model chain; par. 0033 FIGS. 2A-C depict an example interactive interface for interacting with model chains according to example embodiments of the present disclosure);
receiving through the exposed interface a prompt generation input (par. 0059 a split points’ step/prompt that extracts each individual presentation problem from the original feedback [prompt generation input]; par. 0082, In FIG. 1B … the input of Feedback);
populating an editable prompt template with a prompt (par. 0083 Table 2, Example implementation for Ideation in Table 1 with (1) a prompt template that involves the task description, datatypes, and placeholders for inputs and outputs; par. 0060 the separate suggestion Ideation step in FIG. 1B, chaining allows users to customize which suggestions to include in the final paragraph; Fig. 1B, IDEATION);
detecting user interaction with the editable prompt template (par. 0093 Users frequently [interact] make local fixes on intermediate data points that flow between model steps, and therefore example interfaces can allow in-place editing, without explicitly switching to editing mode; par. 0086 a user can build concrete model steps (prompts therefor) simply by filling in the templates with data layer definitions);
generating a plurality of chained prompts, corresponding to a generative AI request, for a generative AI model, based on the user interaction (par. 0041 the provided user interface can enable the user to: construct and/or edit a new or existing model chain and/or view and edit the inputs, outputs, and/or prompts for each instantiation within the chain; par. 0043 Thus, the present disclosure introduces the notion of “chaining” multiple language model instantiations together across a number of different model prompts … In a chain, a problem can be broken down into a number of smaller sub-tasks, each mapped to a distinct step with a corresponding prompt);
providing the plurality of chained prompts to a generative AI model application programming interface (API) (par. 0113 For example, the calls (e.g., requests for inference) can be made to the models 190 using one or more application programming interfaces (APIs)); and
receiving a response from a generative AI model through the generative AI model API (par. 0059 third, a ‘compose points’ step/prompt that synthesizes all the problems and suggestions into a final friendly paragraph. The result is noticeably improved; Fig. 1B, Friendly Paragraph).
As per claim 28, Cai further teaches: accessing a set of prompts in a prompt library in the generative AI development system (par. 0013 In some implementations, the respective prompt to each model instantiation in the model chain is user-selectable from a number of pre-defined template prompts that correspond to primitive subtasks); generating the set of chained prompts based on at least one prompt in the prompt library; and storing the plurality of chained prompts in the prompt library (par. 0058 methods of the present disclosure can assist in resolving this issue by chaining multiple prompts together, so that the problem is broken down into a number of smaller sub-tasks; par. 0086 a user can build concrete model steps (prompts therefor) simply by filling in the templates with data layer definitions; par. 0101 users can be enabled to select from a number of pre-defined prompts … Likewise, pre-defined or default chains can be used as a starting place for various tasks as well. It is noted that the pre-defined chains are implied as necessarily being stored in a storage).
As per claim 29, Cai further teaches: wherein accessing a set of prompts comprises: searching the prompt library for prompts based on the prompt generation input to identify the set of prompts; generating an interface with selectable prompt identifiers corresponding to the identified set of prompts; and detecting a user selection input selecting one of the selectable prompt identifiers (par. 0045 users can be enabled to select from a number of pre-defined prompts. For example, each pre-defined prompt can correspond to a primitive operation that includes default prompting and data structures; par. 0086 a user can build concrete model steps (prompts therefor) simply by filling in the templates with data layer definitions; par. 0089 This section describes interactive interfaces which support users in interacting with model chains, including modifying the prompts and intermediate model outputs for each step, and customizing the Chains).
As per claim 30, Cai further teaches: wherein populating an editable prompt template comprises: retrieving a selected prompt corresponding to the selected prompt identifiers; and populating the editable prompt template with the selected prompt (par. 0083 Table 2, Example implementation for Ideation in Table 1 with (1) a prompt template that involves the task description, datatypes, and placeholders for inputs and outputs; par. 0086 Then, a user can build concrete model steps (prompts therefor) simply by filling in the templates with data layer definitions).
As per claim 31, Cai further teaches: detecting user interaction with the set of chained prompts in the selected prompt to generate the plurality of chained prompts (par. 0093 Users frequently [interact] make local fixes on intermediate data points that flow between model steps [prompts], and therefore example interfaces can allow in-place editing, without explicitly switching to editing mode; par. 0045 users can … select from a number of pre-defined prompt).
As per claim 32, Cai further teaches: generating an evaluation interface with the generative AI development system, the evaluation interface including the plurality of chained prompts and the response (par. 0059 in FIG. 1B, a LLM chain is used that includes … an ‘ideation’ step/prompt that brainstorms suggestions per problem; and third, a ‘compose points’ step/prompt that synthesizes all the problems and suggestions into a final friendly paragraph).
As per claim 34, Cai further teaches: causing display of the evaluation interface with the plurality of chained prompts and the response for manual evaluation (par. 0059 in FIG. 1B, a LLM chain is used that includes … an ‘ideation’ step/prompt that brainstorms suggestions per problem; and third, a ‘compose points’ step/prompt that synthesizes all the problems and suggestions into a final friendly paragraph; par. 0013 the respective prompt to each model instantiation in the model chain is user-selectable from a number of pre-defined template prompts).
As per claim 39, Cai teaches a computing system (Fig. 6A, computer system 100), comprising:
at least one processor (Fig. 6A, Processor 132); and
memory that stores computer executable instructions which, when executed by the at least one processor (Fig. 6A, Memory 134; par. 0115 The memory 134 can store data 136 and instructions 138 which are executed by the processor 132), cause the at least one processor to perform steps comprising:
exposing an interface to a prompt generator in an artificial intelligence (AI) development system (par. 0002 transparent and controllable human-AI interaction via chaining of machine-learned language models, including … a graphical user interface for modularly building and/or editing a model chain; par. 0033 FIGS. 2A-C depict an example interactive interface for interacting with model chains according to example embodiments of the present disclosure);
receiving through the exposed interface a prompt generation input (par. 0059 a split points’ step/prompt that extracts each individual presentation problem from the original feedback [prompt generation input]; par. 0082, In FIG. 1B … the input of Feedback);
accessing a memory storing a set of prompts (par. 0013 pre-defined template prompts); generating an AI prompt for a generative AI model, based on the prompt generation input and a prompt in the prompt memory (par. 0083 Table 2, Example implementation for Ideation in Table 1 with (1) a prompt template that involves the task description, datatypes, and placeholders for inputs and outputs; par. 0060 the separate suggestion Ideation step in FIG. 1B, chaining allows users to customize which suggestions to include in the final paragraph; Fig. 1B, IDEATION);
calling a generative AI model accessing layer to send the prompt to the generative AI model (par. 0113 For example, the calls (e.g., requests for inference) can be made to the models 190 using one or more application programming interfaces (APIs)); and
receiving a response from the generative AI model through the generative AI model accessing layer (par. 0059 third, a ‘compose points’ step/prompt that synthesizes all the problems and suggestions into a final friendly paragraph. The result is noticeably improved; Fig. 1B, Friendly Paragraph).
As per claim 40, Cai further teaches: generating, as the AI prompt, a plurality of sequential prompts, corresponding to the generative AI request for the generative AI model, based on the prompt generation input and the prompt in the prompt memory (par. 0041 the provided user interface can enable the user to: construct and/or edit a new or existing model chain and/or view and edit the inputs, outputs, and/or prompts for each instantiation within the chain; par. 0043 Thus, the present disclosure introduces the notion of “chaining” multiple language model instantiations together across a number of different model prompts … In a chain, a problem can be broken down into a number of smaller sub-tasks, each mapped to a distinct step with a corresponding prompt).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 22 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Cai in view of Gobran et al. (U.S. Pub. No. 20240054546 A1).
As per claim 22, Cai further teaches: receiving, through the exposed interface, a context data extraction input defining … context data to extract from one or more … systems for sending to the AI model API with the chained prompts (par. 0069, Table 1, b. Factual Query; Table 1, c. Info. Extraction: Extraction information from the context, Ex. Given text, extract airport codes per city text: I want to fly from Los Angeles to Miami airport codes: LAX, MIA; par. 0024 providing, for display within the user interface, the data indicative of the respective model output of the one or more of the model instantiations comprises providing the user interface in a chain view mode that depicts a structure of the model chain).
Cai does not expressly describe: user-related context data.
However, Gobran teaches: user-related context data (par. 0025 The user context can be obtained from one or more sub-systems; par. 0026 the user context can be provided to a machine learning model that has been trained to receive user context information as input).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of obtaining user context data and providing it to a ML model, of Gobran, with the system and method of Cai, resulting in a system and method that provides transparent and controllable human-AI interaction and includes obtaining/extracting user-related context data for providing to an AI model, as in Gobran. A person of ordinary skill would have been motivated to make this combination for the purpose of permitting a model to adapt over time to user preferences within a given context (Gobran, par. 0010).
As per claim 37, Cai teaches the invention as claimed including a generative artificial intelligence (AI) development system (par. 0002 transparent and controllable human-AI interaction via chaining of machine-learned language models, including … a graphical user interface for modularly building and/or editing a model chain), comprising:
an interface system configured to expose an AI development interface to receive generative AI system development user inputs (par. 0033 FIGS. 2A-C depict an example interactive interface for interacting with model chains according to example embodiments of the present disclosure);
a prompt generation processor (Fig. 6A, Processor 132) configured to receive an AI prompt generation user input from the AI development interface and generate an AI prompt based on the AI prompt generation user input (par. 0059 a split points’ step/prompt that extracts each individual presentation problem from the original feedback [prompt generation input]; par. 0082, In FIG. 1B … the input of Feedback);
a data extraction system configured to extract … context data from a user-related system based on a user data extraction input identifying the … context data (par. 0069, Table 1, b. Factual Query; Table 1, c. Info. Extraction: Extraction information from the context); and
an API interaction system configured to call a generative AI model application programming interface (API) to send the prompt and the user-related context data to a generative AI model and to receive a response from the generative AI model (par. 0113 For example, the calls (e.g., requests for inference) can be made to the models 190 using one or more application programming interfaces (APIs)).
Cai does not expressly describe: user-related context data.
However, Gobran teaches: user-related context data (par. 0025 The user context can be obtained from one or more sub-systems; par. 0026 the user context can be provided to a machine learning model that has been trained to receive user context information as input).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of obtaining user context data and providing it to a ML model, of Gobran, with the system and method of Cai, resulting in a system and method that provides transparent and controllable human-AI interaction and includes obtaining/extracting user-related context data for providing to an AI model, as in Gobran. A person of ordinary skill would have been motivated to make this combination for the purpose of permitting a model to adapt over time to user preferences within a given context (Gobran, par. 0010).
Claims 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Cai in view of Gobran, and further in view of Saxe et al. (U.S. Pub. No. 20230315722 A1).
As per claim 23, Cai further teaches: receiving, through the exposed interface, an … data extraction input defining … data, that the generative AI model processes in response to the plurality of chained prompts, to extract from the one or more … systems for sending to the AI model API with the chained prompts (par. 0069, Table 1, b. Factual Query; Table 1, c. Info. Extraction: Extraction information from the context, Ex. Given text, extract airport codes per city text: I want to fly from Los Angeles to Miami airport codes: LAX, MIA; par. 0024 providing, for display within the user interface, the data indicative of the respective model output of the one or more of the model instantiations comprises providing the user interface in a chain view mode that depicts a structure of the model chain).
Cai and Gobran do not expressly describe: user-related augmented data.
However, Saxe teaches: data extraction input defining user-related augmented data (par. 0025 The processor is configured to receive, via an interface, natural language data associated with a user request; par. 0033 the NL system is configured to receive, via the NL interface, user provided corrections and provide the corrections to augment the training data used to train the ML model. Using the augmented training data, including the user provided corrections, the ML model can be retrained to improve its performance in predicting intent of the user, and generating the template query to match or be closer to the user's intent when provided with the natural language request).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of receiving, via an NL interface, NL data and user-provided corrections to augment training data and using it to train an ML model, of Saxe, with the system and method of Cai and Gobran, resulting in a system and method which provides for receiving or extracting user-related augmented data in order to train an AI model, as in Saxe. A person of ordinary skill would have been motivated to make this combination for the purpose of improving the model's performance in predicting the intent of the user (Saxe, par. 0033).
As per claim 24, Cai further teaches: wherein exposing an interface comprises: exposing a generative AI development environment creation interface (par. 0002 transparent and controllable human-AI interaction via chaining of machine-learned language models, including … a graphical user interface for modularly building and/or editing a model chain that includes a sequence of instantiations of one or more machine-learned language models; par. 0033 FIGS. 2A-C depict an example interactive interface for interacting with model chains according to example embodiments of the present disclosure); receiving a development environment creation input through the generative AI development environment creation interface (par. 0023 The method includes receiving an initial language input. The method includes providing a user interface that visualizes and enables a user to edit a model chain configured to process the initial language input to generate a language output); and assigning computer processing resources, including memory, to a generative AI development environment based on the development environment creation input (par. 0051 computational resources such as processor time, memory usage, network bandwidth, etc. In particular, in the case of a LLM, even a single re-training can consume a very significant amount of resources).
As per claim 25, Gobran further teaches: extracting the context data … from the one or more user-related systems (par. 0025 The user context can be obtained from one or more sub-systems; par. 0026 the user context can be provided to a machine learning model that has been trained to receive user context information as input; par. 0005 the method can further include obtaining updated user context data); and storing the extracted context data … in the memory assigned to the generative AI development environment (par. 0065 the user context data can be stored in the user device only and deleted once the data has been used to make a content suggestion or automatic provision … the user context data can be processed and/or stored). Saxe further teaches: extracting the context data and the augmented data (par. 0025 The processor is configured to receive, via an interface, natural language data associated with a user request; par. 0033 the NL system is configured to receive, via the NL interface, user provided corrections and provide the corrections to augment the training data used to train the ML model. Using the augmented training data, including the user provided corrections, the ML model can be retrained to improve its performance in predicting intent of the user, and generating the template query to match or be closer to the user's intent when provided with the natural language request).
Claims 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Cai in view of Gobran and Saxe, and further in view of Hoffman et al. (U.S. Pub. No. 20110047176 A1).
As per claim 26, Gobran further teaches: wherein receiving the context data extraction input … (par. 0025 The user context can be obtained from one or more sub-systems; par. 0026 the user context can be provided to a machine learning model that has been trained to receive user context information as input; par. 0005 the method can further include obtaining updated user context data).
Saxe further teaches: receiving the augmented data extraction input … (par. 0025 The processor is configured to receive, via an interface, natural language data associated with a user request; par. 0033 the NL system is configured to receive, via the NL interface, user provided corrections and provide the corrections to augment the training data used to train the ML model. Using the augmented training data, including the user provided corrections, the ML model can be retrained to improve its performance in predicting intent of the user, and generating the template query to match or be closer to the user's intent when provided with the natural language request).
Cai, Gobran and Saxe do not expressly describe: receiving context data extraction script … augmented data extraction script.
However, Hoffman teaches: receiving context data extraction script … (par. 0039 Initially, extraction scripts are generated, and may be generated by the central system 210 or a third party not associated with the central system 210. These extraction scripts are installed or downloaded onto the site system ... As used herein, extraction scripts are software that, once executed, automatically extracts various types of data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of using extraction scripts to extract various types of data of Hoffman with the system and method of Cai, Gobran and Saxe, resulting in a transparent and controllable human-AI interaction system and method that provides for receiving/downloading scripts, as in Hoffman, for extracting context or augmented data. A person of ordinary skill in the art would have been motivated to make this combination for the purpose of automatically extracting various types of data [e.g., context data and augmented data] from the site system (par. 0039).
As per claim 27, Hoffman further teaches: wherein extracting the context data and the augmented data comprises: executing the context data extraction script; and executing the augmented data extraction script (par. 0039 generally, the site system 212 receives and executes extraction scripts that extract summary data from site databases).
Claims 33 and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over Cai in view of Zha et al. (U.S. Pub. No. 20240202458 A1).
As per claim 33, Cai further teaches: processing the plurality of chained prompts and response (par. 0058 chaining multiple prompts together) with a prompt-valuation generative AI model.
Cai does not expressly describe: to identify evaluation metrics and metric values for the evaluation metrics, the evaluation metrics and metric values being indicative of a performance of the plurality of chained prompts.
However, Zha teaches: identify evaluation metrics and metric values for the evaluation metrics, the evaluation metrics and metric values being indicative of a performance of the plurality of … prompts (par. 0046 Prompt discovery 224 may implement prompt and NLP ML evaluation 430, in various embodiments, in order to evaluate performance of the candidate prompt(s) and candidate NLP ML model(s) … In some embodiments, test data 435 may be maintained by machine learning service 210 … When the test data 435 is obtained, the candidate prompts 432 may be used to generate inferences using the candidate NLP ML model(s) 433 on the test data 435. Results 434 for candidate prompts may be collected. In some embodiments, prompt and NLP ML evaluation 430 may perform an initial analysis by, for example, comparing candidate prompt results 434 sample output 413. Again, a similarity score with sample output 413 may be generated to determine how well candidate prompts and candidate NLP ML models performed, and used to rank or filter out candidate prompt(s) and models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of evaluating performance of prompts and ML models of Zha with the system and method of Cai, resulting in a transparent and controllable human-AI interaction system and method that provides for evaluating/identifying the performance of chained prompts and models as in Zha. A person of ordinary skill in the art would have been motivated to make this combination because discovering prompts for different NLP tasks can help to optimize the performance of an NLP task for integration with a particular application (par. 0027). Further, the combination would enhance the capabilities of systems, services, or applications to better interact with human users (par. 0020).
As per claim 35, Cai further teaches: processing the plurality of chained prompts and response (par. 0058 chaining multiple prompts together) with a model-evaluation generative AI model.
Cai does not expressly teach: to identify evaluation metrics and metric values for the evaluation metrics, the evaluation metrics and metric values being indicative of a performance of the generative AI model in generating a response to the plurality of chained prompts.
Zha further teaches: to identify evaluation metrics and metric values for the evaluation metrics, the evaluation metrics and metric values being indicative of a performance of the generative AI model in generating a response to the plurality of chained prompts (par. 0046 Prompt discovery 224 may implement prompt and NLP ML evaluation 430, in various embodiments, in order to evaluate performance of the candidate prompt(s) and candidate NLP ML model(s) … In some embodiments, test data 435 may be maintained by machine learning service 210 … When the test data 435 is obtained, the candidate prompts 432 may be used to generate inferences using the candidate NLP ML model(s) 433 on the test data 435. Results 434 for candidate prompts may be collected. In some embodiments, prompt and NLP ML evaluation 430 may perform an initial analysis by, for example, comparing candidate prompt results 434 sample output 413. Again, a similarity score with sample output 413 may be generated to determine how well candidate prompts and candidate NLP ML models performed, and used to rank or filter out candidate prompt(s) and models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of evaluating performance of prompts and ML models of Zha with the system and method of Cai, resulting in a transparent and controllable human-AI interaction system and method that provides for evaluating/identifying the performance of chained prompts and models as in Zha. A person of ordinary skill in the art would have been motivated to make this combination because discovering prompts for different NLP tasks can help to optimize the performance of an NLP task for integration with a particular application (par. 0027). Further, the combination would enhance the capabilities of systems, services, or applications to better interact with human users (par. 0020).
As per claim 36, Cai further teaches: generating an additional response to the plurality of chained prompts with an additional generative AI model and comparing the response from the generative AI model with the additional response from the additional generative AI model to determine whether the generative AI model or the additional generative AI model performed better (par. 0042 As examples, users of the system can leverage sub-tasks to calibrate model expectations; compare and contrast alternative strategies by observing parallel downstream effects; par. 0047 For example, users can: model expectations using the smaller scope of sub-tasks; explore alternative prompting strategies by comparing parallel downstream effects).
Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over Cai in view of Gobran, and further in view of Zha et al. (U.S. Pub. No. 20240202458 A1).
As per claim 38, Cai further teaches: processing the plurality of chained prompts and response (par. 0058 chaining multiple prompts together) with a prompt-valuation generative AI model.
Cai and Gobran do not expressly disclose: to identify evaluation metrics and metric values for the evaluation metrics, the evaluation metrics and metric values being indicative of a performance of the plurality of chained prompts.
However, Zha teaches: to identify evaluation metrics and metric values for the evaluation metrics, the evaluation metrics and metric values being indicative of a performance of the plurality of … prompts (par. 0046 Prompt discovery 224 may implement prompt and NLP ML evaluation 430, in various embodiments, in order to evaluate performance of the candidate prompt(s) and candidate NLP ML model(s) … In some embodiments, test data 435 may be maintained by machine learning service 210 … When the test data 435 is obtained, the candidate prompts 432 may be used to generate inferences using the candidate NLP ML model(s) 433 on the test data 435. Results 434 for candidate prompts may be collected. In some embodiments, prompt and NLP ML evaluation 430 may perform an initial analysis by, for example, comparing candidate prompt results 434 sample output 413. Again, a similarity score with sample output 413 may be generated to determine how well candidate prompts and candidate NLP ML models performed, and used to rank or filter out candidate prompt(s) and models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of evaluating performance of prompts and ML models of Zha with the system and method of Cai and Gobran, resulting in a transparent and controllable human-AI interaction system and method that provides for evaluating/identifying the performance of chained prompts and models as in Zha. A person of ordinary skill in the art would have been motivated to make this combination because discovering prompts for different NLP tasks can help to optimize the performance of an NLP task for integration with a particular application (par. 0027). Further, the combination would enhance the capabilities of systems, services, or applications to better interact with human users (par. 0020).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Willy W. Huaracha whose telephone number is (571) 270-5510. The examiner can normally be reached M-F, 8:30 am-5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached on (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WH/
Examiner, Art Unit 2195
/BRADLEY A TEETS/ Supervisory Patent Examiner, Art Unit 2197