Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawing submitted on 06/05/2024 has been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 15-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim(s) recite(s): receiving data characterizing a first prompt from a user interface; generating data characterizing a second prompt, wherein the second prompt is configured to generate a response from an artificial intelligence model that has a greater relevancy than a response from the artificial intelligence model generated by providing the first prompt to the artificial intelligence model; receiving data characterizing a response to the second prompt by providing the data characterizing the second prompt to an artificial intelligence based model; and providing the response to the second prompt in the user interface. The limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “artificial intelligence model” and “artificial intelligence based model,” nothing in the claim element precludes the steps from practically being performed in the mind. For example, but for the “artificial intelligence model” and “artificial intelligence based model” language, “receiving,” “generating,” and “providing” in the context of this claim encompass a person verbally receiving from another person an initial request to find location information related to an address, where the initial request is unclear to the person; mentally rephrasing the initial request into a request that the person understands clearly; and then providing the other person with the information responsive to the request.
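For illustration only, the recited sequence of steps may be sketched as the following Python; the model call is a stub, and all function names and the rewrite rule are hypothetical rather than drawn from the application.

```python
# Illustrative sketch of the recited steps; the AI-based model is a stub.

def generate_second_prompt(first_prompt: str) -> str:
    """Generate data characterizing a second prompt from the first prompt.

    The rewrite rule here (prepending clarifying instructions) is a
    hypothetical stand-in for whatever prompt-improvement logic is used.
    """
    return f"Answer precisely and completely: {first_prompt}"

def ai_based_model(prompt: str) -> str:
    """Stand-in for providing a prompt to an artificial intelligence based model."""
    return f"response to [{prompt}]"

def handle_first_prompt(first_prompt: str) -> str:
    # Receive data characterizing a first prompt from a user interface.
    second_prompt = generate_second_prompt(first_prompt)
    # Receive data characterizing a response to the second prompt.
    response = ai_based_model(second_prompt)
    # Provide the response to the second prompt in the user interface.
    return response
```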
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of receiving data characterizing a response to the second prompt by providing the data characterizing the second prompt to an artificial intelligence based model. The use of the “artificial intelligence based model” to receive the response to the second prompt is recited at a high level of generality (i.e., receiving, by a computing device, a response/result to a text input/natural language request) such that it amounts to no more than mere instructions to apply the exception using generic computer components.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of receiving data characterizing a response to the second prompt by providing the data characterizing the second prompt to an artificial intelligence based model amounts to no more than mere instructions to apply the exception using a generic computer component. With respect to the use of a generic computer component to receive content or information, the courts have found “the use of a generic server insufficient to add inventive concepts to an abstract idea” (See MPEP 2106.05(a) I., particular structure of a server that stores organized digital images, TLI Communications, 823 F.3d at 612, 118 USPQ2d at 1747). The courts have also indicated that “delivering broadcast content to a portable electronic device such as a cellular telephone, when claimed at a high level of generality,” may not be sufficient to show an improvement to technology (See MPEP 2106.05(a) II., Affinity Labs of Tex. v. Amazon.com, 838 F.3d 1266, 1270, 120 USPQ2d 1210, 1213 (Fed. Cir. 2016); Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016)).
Further, the limitation of receiving data characterizing a response to the second prompt by providing the data characterizing the second prompt to an artificial intelligence based model provides nothing more than mere instructions to implement an abstract idea on a generic computer component. The use of an artificial intelligence based model, as the claim recites, provides only the idea of a solution or outcome and fails to recite details of how a solution to a problem is accomplished. Such a recitation, without any description of the AI mechanism for accomplishing the result using a generic processor, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it" (See MPEP 2106.05(f), “The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743.”). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
With respect to Claim 2, the limitation “modifying the received data characterizing the first prompt based on at least one of a type of the artificial intelligence based model, a setting of the artificial intelligence based model, or a configuration for an enterprise in which the artificial intelligence based model is deployed” is similar: other than reciting the “artificial intelligence based model,” nothing in the claim element precludes the step from practically being performed in the mind. For example, as explained for claim 1, but for the “artificial intelligence based model” language, “modifying” in the context of this claim encompasses a person verbally receiving from another person an initial request to find location information related to an address, where the initial request is unclear to the person; mentally rephrasing the initial request into a request that the person understands clearly; and then providing the other person with the information responsive to the request.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of modifying the received data characterizing the first prompt based on at least one of a type of the artificial intelligence based model amounts to no more than mere instructions to apply the exception using a generic computer component.
The use of an artificial intelligence based model, as the claim recites, provides only the idea of a solution or outcome and fails to recite details of how a solution to a problem is accomplished. Such a recitation, without any description of the AI mechanism for accomplishing the result using a generic processor, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it" (See MPEP 2106.05(f), “The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015).”).
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
With respect to Claims 6-14, similar to claim 1, nothing in the claim elements precludes the steps from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-10 and 14-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Almaer et al. (US 2024/0362209 A1).
Regarding Claims 1 and 15-16, Almaer et al. teach: A method comprising: ([0023] In another aspect, the present application discloses a computing system. The computing system includes a processor and a memory coupled to the processor. The memory stores computer-executable instructions that, when executed by the processor, may cause the processor to: receive a request for retrieval of data satisfying one or more criteria, the request including at least one data request parameter; search a database storing example queries based on the request to identify at least one matching query; provide, to a large language model (LLM), an input prompt to generate a query purporting to retrieve data satisfying the one or more criteria, the input prompt including the at least one data request parameter and the at least one matching query as an example; and receive, from the LLM, a result including the generated query.): receiving data characterizing (constructed query) a first prompt (input prompt ) from a user interface ([0034] It is desired to provide a process for automatically generating queries for an endpoint that are based on user requests and that comply with the requirements of the endpoint. [0035] The system may match a user request (e.g., a data retrieval request) to a “best” prompt template, out of a set of such templates, for an LLM. A prompt template may, for example, comprise a previous query that was accepted by the endpoint or an example of a properly constructed query for the endpoint. The matched template may then be provided in an input prompt to the LLM with instructions to generate a query for submitting to the endpoint. 
[0077] In some implementations, the data request may be received via a user interface on the user device 120.); generating data characterizing (previous query is provided, along with the first data request ) a second prompt (modifying an input prompt to the LLM), wherein the second prompt is configured to generate a response from an artificial intelligence model (LLM) that has a greater relevancy (similarity or embeddings associated with all or a subset (e.g., only correctly formed queries) of previous data) than a response from the artificial intelligence model generated by providing the first prompt to the artificial intelligence model; receiving data characterizing a response (a result including the generated query) to the second prompt by providing the data characterizing the second prompt to an artificial intelligence based model (an API endpoint associated with a third-party server) ([0028] In the present application, the term “generative AI model” may be used to describe a machine learning model. A generative AI model may sometimes be referred to, or may use, a language learning model. [0030] An endpoint may implement (or expose) a software interface, such as an application programming interface (API), for offering various services to other computer programs. An API contains and is implemented by function calls, which are language statements that request software to perform particular actions and services. The specification of the API describes functions and other parameters that are supported by the API. The data requests (e.g., queries) directed at an endpoint are required to be compliant with various requirements associated with the endpoint. 
In particular, query statements for an endpoint must be structured to comply with both the syntax of a query language and any requirements which may be stipulated for queries that are supported by an API (or other software interface) for the endpoint.[0041] This process of instructing the LLM to generate a query corresponding to the first data request based on modifying an input prompt to the LLM may proceed iteratively until a successful response is received from the endpoint. [0066] Because GPT-type language models tend to have a large number of parameters, these language models may be considered LLMs. An example GPT-type LLM is GPT-3. GPT-3 is a type of GPT language model that has been trained (in an unsupervised manner) on a large corpus derived from documents available to the public online. [0078] The generated query may then be transmitted to the endpoint 140 via the network 150. For example, a communications module 118 of the code generation engine 114 may be configured to transmit the generated query to an API endpoint associated with a third-party server from which a resource is desired to be retrieved. [0094] In operation 208, the computing system receives, from the LLM, a result including the generated query.); and providing the response to the second prompt in the user interface ([0094] The generated query may be provided to the user device as a response to the first user request. That is, the computing system may output the generated query responsive to receiving the first user request via the user device. [0101] In operation 312, the computing system receives, from the LLM, a result including the generated query. The result may indicate information about the generated query, such as the query language, data fields, arguments, etc. The generated query may be provided to the user device as a response to the first user request. ).
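As a non-authoritative sketch of the iterative process Almaer describes (match the request to an example query, include the example in the LLM input prompt, and modify the prompt until the endpoint accepts the generated query, per [0035] and [0041]), the flow might look like the following Python; the template match, LLM, and endpoint are toy stand-ins, and all names are hypothetical.

```python
# Hedged sketch of Almaer's query-generation loop ([0035], [0041]); the
# matching heuristic, LLM, and endpoint below are illustrative stubs.

def match_template(request: str, templates: list[str]) -> str:
    """Pick the example query sharing the most words with the request
    (a crude stand-in for embedding-based "best" template matching)."""
    words = set(request.lower().split())
    return max(templates, key=lambda t: len(words & set(t.lower().split())))

def generate_query(request, templates, llm, endpoint, max_attempts=3):
    example = match_template(request, templates)
    prompt = f"Example query:\n{example}\nGenerate a query for: {request}"
    for _ in range(max_attempts):
        query = llm(prompt)                   # LLM returns a candidate query
        accepted, feedback = endpoint(query)  # endpoint validates the query
        if accepted:
            return query
        # Modify the input prompt and iterate until a successful response.
        prompt += f"\nThe endpoint rejected that query ({feedback}); try again."
    raise RuntimeError("no accepted query within the attempt limit")
```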
Regarding Claim 2, Almaer et al. teach: The method of claim 1, wherein generating the data characterizing the second prompt further comprises: modifying the received data characterizing the first prompt based on at least one of a type of the artificial intelligence based model, a setting of the artificial intelligence based model, or a configuration for an enterprise in which the artificial intelligence based model is deployed (See rejection of claim 1 specifically [0028] In the present application, the term “generative AI model” may be used to describe a machine learning model. A generative AI model may sometimes be referred to, or may use, a language learning model. [0066] Because GPT-type language models tend to have a large number of parameters, these language models may be considered LLMs. An example GPT-type LLM is GPT-3. GPT-3 is a type of GPT language model that has been trained (in an unsupervised manner) on a large corpus derived from documents available to the public online. [0030] In particular, query statements for an endpoint must be structured to comply with both the syntax of a query language and any requirements which may be stipulated for queries that are supported by an API (or other software interface) for the endpoint. [0041] This process of instructing the LLM to generate a query corresponding to the first data request based on modifying an input prompt to the LLM may proceed iteratively until a successful response is received from the endpoint.).
Regarding Claim 3, Almaer et al. teach: The method of claim 2, wherein the type of the artificial intelligence based model comprises at least one of a foundational model, a multimodal model, a reinforcement learning model, a transfer learning model, or a large language model (LLM) (See rejection of claim 1 and [0028] In the present application, the term “generative AI model” may be used to describe a machine learning model. A generative AI model may sometimes be referred to, or may use, a language learning model. [0066] Because GPT-type language models tend to have a large number of parameters, these language models may be considered LLMs. An example GPT-type LLM is GPT-3. GPT-3 is a type of GPT language model that has been trained (in an unsupervised manner) on a large corpus derived from documents available to the public online.).
Regarding Claim 4, Almaer et al. teach: The method of claim 2, wherein the setting of the artificial intelligence based model comprises at least one of a temperature, a frequency penalty, a top P- value, or a top K-value (See rejection of claim 2 and [0073] The API call may also include an identification of the language model or LLM to be accessed and/or parameters for adjusting outputs generated by the language model or LLM, such as, for example, one or more of a temperature parameter (which may control the amount of randomness or “creativity” of the generated output) (and/or, more generally some form of random seed as serves to introduce variability or variety into the output of the LLM), a minimum length of the output (e.g., a minimum of 10 tokens) and/or a maximum length of the output (e.g., a maximum of 1000 tokens), a frequency penalty parameter (e.g., a parameter which may lower the likelihood of subsequently outputting a word based on the number of times that word has already been output), a “best of” parameter (e.g., a parameter to control the number of times the model will use to generate output after being instructed to, e.g., produce several outputs based on slightly varied inputs). The prompt generated by the computing system is provided to the language model or LLM and the output (e.g., token sequence) generated by the language model or LLM is communicated back to the computing system.).
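For context, the settings recited in claim 4 correspond to common LLM sampling parameters of the kind Almaer describes at [0073]. A hypothetical request payload might look like the following; the field names and values follow common LLM-API conventions and are assumptions, not quotations from the reference.

```python
# Illustrative payload showing the model settings named in claim 4 and in
# Almaer [0073]; field names and values are assumptions for illustration.

def build_completion_request(prompt: str) -> dict:
    return {
        "prompt": prompt,
        "temperature": 0.2,        # controls randomness/"creativity" of output
        "frequency_penalty": 0.5,  # lowers likelihood of repeating emitted words
        "top_p": 0.9,              # nucleus sampling: smallest token set with 90% mass
        "top_k": 40,               # sample only from the 40 most likely tokens
        "max_tokens": 1000,        # maximum output length
    }
```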
Regarding Claim 5, Almaer et al. teach: The method of claim 2, wherein the configuration for the enterprise comprises at least one of language preferences (“best” prompt template) or data masking preferences (See rejection of claim 2, specifically [0028] In some cases, this may include a prompt template. A prompt template may specify that prompts have a certain structure or constrained intents, or that acceptable prompts exclude certain classes of subject matter or intent, such as the production of results or outputs that are violent, pornographic, etc. [0035] The present application discloses improved techniques of generating code for interacting with an endpoint. A system and methods for producing automatically-generated queries using an LLM are described. More particularly, the proposed system is designed to find optimal example(s) of acceptable code for an endpoint that can be used to facilitate query generation. The system may match a user request (e.g., a data retrieval request) to a “best” prompt template, out of a set of such templates, for an LLM. A prompt template may, for example, comprise a previous query that was accepted by the endpoint or an example of a properly constructed query for the endpoint. The matched template may then be provided in an input prompt to the LLM with instructions to generate a query for submitting to the endpoint.).
Regarding Claim 6, Almaer et al. teach: The method of claim 5, wherein the language preferences comprises tone, cadence, or narrative styles (template may specify that prompts have a certain structure or constrained intents) (See rejection of claim 5 and [0030] In particular, query statements for an endpoint must be structured to comply with both the syntax of a query language and any requirements which may be stipulated for queries that are supported by an API (or other software interface) for the endpoint.).
Regarding Claim 7, Almaer et al. teach: The method of claim 2, wherein the configuration for the enterprise comprises enterprise specific data (See rejection of claim 2 and [0029] Significant advances have been made in recent years in generative AI models. Different implementations may be trained to create digital art, computer code, conversation text responses, or other types of outputs. Examples of generative AI models include Stable Diffusion by Stability AI Ltd., ChatGPT by OpenAI, DALL-E 2 by OpenAI, and GitHub CoPilot by GitHub and OpenAI. The models are typically trained using a large data set of training data. For instance, in the case of AI for generating images, the training data set may include a database of millions of images tagged with information regarding the contents, style, artist, context, or other data about the image or its manner of creation. The generative AI trained on such a data set is then able to take an input prompt in text form, which may include suggested topics, features, styles or other suggestions, and provide an output image that reflects, at least to some degree, the input prompt. [0066] ChatGPT is built on top of a GPT-type LLM, and has been fine-tuned with training datasets based on text-based chats (e.g., chatbot conversations). ChatGPT is designed for processing natural language, receiving chat-like inputs and generating chat-like outputs.).
Regarding Claim 8, Almaer et al. teach: The method of claim 7, wherein the enterprise specific data comprises at least one of sales expenditure, marketing expenditure, revenue, win rate, statistics, inventory levels, logistics datasets, collections metrics, or lead conversions (See rejection of claim 7 and [0051] For example, an ML model for generating natural language that has been trained generically on publicly-available text corpuses may be, e.g., fine-tuned by further training using the complete works of Shakespeare as training data samples (e.g., where the intended use of the ML model is generating a scene of a play or other textual content in the style of Shakespeare). [0066] ChatGPT is built on top of a GPT-type LLM, and has been fine-tuned with training datasets based on text-based chats (e.g., chatbot conversations). ChatGPT is designed for processing natural language, receiving chat-like inputs and generating chat-like outputs. [0067] A computing system may access a remote language model (e.g., a cloud-based language model), such as ChatGPT or GPT-3, via a software interface (e.g., an application programming interface (API)). Additionally, or alternatively, such a remote language model may be accessed via a network such as, for example, the Internet. In some implementations such as, for example, potentially in the case of a cloud-based language model, a remote language model may be hosted by a computer system as may include a plurality of cooperating (e.g., cooperating via a network) computer systems such as may be in, for example, a distributed arrangement. Notably, a remote language model may employ a plurality of processors (e.g., hardware processors such as, for example, processors of cooperating computer systems). 
Indeed, processing of inputs by an LLM may be computationally expensive/may involve a large number of operations (e.g., many instructions may be executed/large data structures may be accessed from memory) and providing output in a required timeframe (e.g., real-time or near real-time) may require the use of a plurality of processors/cooperating computing devices as discussed above.).
Regarding Claim 9, Almaer et al. teach: The method of claim 1, wherein the first prompt is provided by the user interface in natural language form (See rejection of claim 1 specifically [0036] When a user provides a first data request (expressed using natural language) for an endpoint, the system is configured to instruct an LLM to generate a query for the endpoint, i.e., by converting the first data request to a corresponding query. [0077] The data request may be expressed in natural language and include information identifying the requested resources. In some implementations, the data request may be received via a user interface on the user device 120.).
Regarding Claim 10, Almaer et al. teach: The method of claim 1, wherein the data corresponding to the response to the second prompt is provided to the user interface in natural language form (See rejection of claim 7 and [0066] GPT-3 has been trained as a generative model, meaning that it can process input text sequences to predictively generate a meaningful output text sequence. ChatGPT is built on top of a GPT-type LLM, and has been fine-tuned with training datasets based on text-based chats (e.g., chatbot conversations). ChatGPT is designed for processing natural language, receiving chat-like inputs and generating chat-like outputs.).
Regarding Claim 14, Almaer et al. teach: The method of claim 1, further comprising: selecting the artificial intelligence based model (an API endpoint associated with a third-party server) based on the second prompt (See rejection of claim 1 specifically, [0030] An endpoint may implement (or expose) a software interface, such as an application programming interface (API), for offering various services to other computer programs. An API contains and is implemented by function calls, which are language statements that request software to perform particular actions and services. The specification of the API describes functions and other parameters that are supported by the API. The data requests (e.g., queries) directed at an endpoint are required to be compliant with various requirements associated with the endpoint. In particular, query statements for an endpoint must be structured to comply with both the syntax of a query language and any requirements which may be stipulated for queries that are supported by an API (or other software interface) for the endpoint. [0041] This process of instructing the LLM to generate a query corresponding to the first data request based on modifying an input prompt to the LLM may proceed iteratively until a successful response is received from the endpoint. [0066] Because GPT-type language models tend to have a large number of parameters, these language models may be considered LLMs. An example GPT-type LLM is GPT-3. GPT-3 is a type of GPT language model that has been trained (in an unsupervised manner) on a large corpus derived from documents available to the public online. [0078] The generated query may then be transmitted to the endpoint 140 via the network 150. For example, a communications module 118 of the code generation engine 114 may be configured to transmit the generated query to an API endpoint associated with a third-party server from which a resource is desired to be retrieved.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Almaer et al. in view of Tsun et al. (US 2024/0296293 A1).
Regarding Claim 11, Almaer et al. teach: The method of claim 1, wherein generating the data characterizing the second prompt further comprises: receiving user historical data characteristics (query matching previous query from user); and generating, based on the historical data characteristics, a blueprint for modifying (previous query is provided, along with the first data request, as input to the LLM ) the first prompt into the second prompt (See rejection of claim 1 and [0035] The system may match a user request (e.g., a data retrieval request) to a “best” prompt template, out of a set of such templates, for an LLM. A prompt template may, for example, comprise a previous query that was accepted by the endpoint or an example of a properly constructed query for the endpoint. The matched template may then be provided in an input prompt to the LLM with instructions to generate a query for submitting to the endpoint. [0036] When a user provides a first data request (expressed using natural language) for an endpoint, the system is configured to instruct an LLM to generate a query for the endpoint, i.e., by converting the first data request to a corresponding query. [0038] More generally, the system identifies an embedding that matches (e.g., nearest neighbor or otherwise closest to) the first embedding, and retrieves a previous query (in the specified query language) that is associated with the identified embedding. [0039] The retrieved previous query is provided, along with the first data request, as input to the LLM. [0041] This process of instructing the LLM to generate a query corresponding to the first data request based on modifying an input prompt to the LLM may proceed iteratively until a successful response is received from the endpoint. [0042] When a user inputs a data request, an embedding associated with the data request may be generated, and compared to embeddings for previous data requests. 
[0047] Training an ML model refers to a process of learning the values of the parameters (or weights) of the neurons in the layers such that the ML model is able to model the target behavior to a desired degree of accuracy. Training typically requires the use of a training dataset, which is a set of data that is relevant to the target behavior of the ML model. For example, to train an ML model that is intended to model human language (also referred to as a language model), the training dataset may be a collection of text documents, referred to as a text corpus (or simply referred to as a corpus). The corpus may represent a language domain (e.g., a single language), a subject domain (e.g., scientific papers), and/or may encompass another domain or domains, be they larger or smaller than a single language or subject domain.).
Almaer et al. do not specifically teach the underlined limitation: receiving historical user behavior including historical data analysis characteristics; and generating, based on the historical data analysis characteristics, a blueprint for modifying the first prompt into the second prompt.
Tsun et al. teach: receiving historical user behavior including historical data analysis characteristics; and generating, based on the historical data analysis characteristics, a blueprint (a second subset of prompt inputs) for modifying the first prompt into the second prompt ([0044] In some embodiments, prompt generation component 160 determines the messaging intent based on historical activity data of the user of user system 110. For example, prompt generation component 160 determines that the messaging intent is to seek work if the user of user system 110 has recently applied to one or more jobs. [0046] In some embodiments, prompt generation component 160 maps a set of user attributes to a set of one or more prompt inputs using the connection. For example, prompt generation component 160 maps user attributes that are relevant based on the ranking of the connection between the user initiating the electronic messaging and the recipient of the electronic messaging. In some embodiments, prompt generation component 160 maps a shared attribute (e.g., college attended) of attribute data 104 to a prompt input of prompt 106 based on the connection (e.g., the fact that the message sender and message recipient attended the same college). [0048] In some embodiments, input generation component 164 creates an initial prompt using a first subset of prompt inputs of the set of prompt inputs mapped to the user attributes and updating the initial prompt to generate prompt 106 which includes a second subset of prompt inputs of the set of prompt inputs. [0103] In some embodiments, content generation system 100 extracts attribute data from a post based on the selected message intent option 1110. For example, in response to determining that a user is seeking work (e.g., either in response to a selection by the user or an inference by content generation system 100), content generation system 100 extracts attribute data from a post associated with a job that the user is interested in. 
In some embodiments, content generation system 100 extracts the attribute data from the post based on historical activity data of the user. For example, if the user has recently applied to a job and is now messaging the profile of the person and/or company that posted the job, content generation system 100 can infer that the user intends to talk about that job posting and extracts attribute data from the job posting to use in prompt generation.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for Almaer et al. to include the teaching of Tsun et al. above in order to create an initial prompt using a first subset of prompt inputs of a set of prompt inputs mapped to user attribute data based on historical activity data of the user, and to update the initial prompt to generate a prompt which includes a second subset of prompt inputs of the set of prompt inputs.
Regarding Claim 12: The method of claim 11, wherein the blueprint is at least partially automatically generated based on metadata (a set of user attributes to a set of one or more prompt inputs) (See Tsun et al. teaching in the rejection of claim 11.).
Regarding Claim 13: The method of claim 1, wherein generating data characterizing the second prompt is based at least on user feedback to historical provided prompt responses (See rejection of claim 11, and Tsun teaching: [0044] In some embodiments, prompt generation component 160 determines the messaging intent based on historical activity data of the user of user system 110. For example, prompt generation component 160 determines that the messaging intent is to seek work if the user of user system 110 has recently applied to one or more jobs. [0048] In some embodiments, input generation component 164 creates an initial prompt using a first subset of prompt inputs of the set of prompt inputs mapped to the user attributes and updating the initial prompt to generate prompt 106 which includes a second subset of prompt inputs of the set of prompt inputs. [0059] Prompt feedback component 168 is a component that receives suggestion 114 from deep learning model 108 and feedback 116 from user system 110 and uses them to generate future prompts.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The pertinent art of record, Cheng et al. (US 2025/0036670 A1), teaches: Large Language Models In Cloud Database Platforms.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD K ISLAM whose telephone number is (571) 270-5878. The examiner can normally be reached Monday - Friday, EST (IFP).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras Shah, can be reached at 571-270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD K ISLAM/Primary Examiner, Art Unit 2653