Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are pending. Claims 1, 13, and 19 are independent.
This application was published as U.S. Patent Application Publication No. 2025/0272509.
Apparent priority: 27 February 2024 (claiming priority to four provisional applications).
35 U.S.C. 112(f) Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) is/are: “orchestration and planning service” and “metadata framework” in Claims 1-12. These limitations are generic in the context of the art; they do not refer to any specific structure and serve only as placeholders for the structure that performs the associated function(s), without providing any information about what that structure is. MPEP § 2181, subsection I.A, states:
For a term to be considered a substitute for "means," and lack sufficient structure for performing the function, it must serve as a generic placeholder and thus not limit the scope of the claim to any specific manner or structure for performing the claimed function. It is important to remember that there are no absolutes in the determination of terms used as a substitute for "means" that serve as generic placeholders. The examiner must carefully consider the term in light of the specification and the commonly accepted meaning in the technological art. Every application will turn on its own facts.
Based on the level of ordinary skill in the art and the description of the functions of these components in the Specification, they are understood to refer to processors, or a combination of processor and memory, possibly together with transducers such as microphones and displays, or to a combination of software and hardware.
PLEASE NOTE: This is NOT a rejection and should not be addressed as one. If Applicant does not agree with the INTERPRETATION, Applicant may argue the interpretation or amend the claims to replace the terms interpreted under 112(f) with structural terms such as “microphone” or “processor,” as appropriately supported by the Specification. In the alternative, Applicant may let the interpretation stand if the intent was to include a means-plus-function limitation in the Claim.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Siebel (U.S. 20240202225) in view of Channapattan (U.S. 20250110976).
Regarding Claim 1, Siebel teaches:
1. A computing services environment comprising:
a database system storing a plurality of database records for a plurality of client organizations accessing computing services via the computing services environment, [Siebel, Figure 3, “datastore 318” and “DB engine 316.” Figure 4 shows “enterprise systems 404-1 …N” which teach the “client organizations” of the Claim. “[0052] … Enterprise systems 404 can include data flow and management of different processes (e.g., of one or more organizations) and can provide access to systems and users of the enterprise while preventing access from other systems and/or users. … references to enterprise information environments can also include enterprise systems, …. In various embodiments, functionality of the enterprise systems 404 may be performed by one or more servers (e.g., a cloud-based server) and/or other computing devices.” “[0048] In some embodiments, generative artificial intelligence models (e.g., large language models of an orchestrator) of the enterprise generative artificial intelligence system 402 can interact with agents (e.g., retrieval agents, retriever agents) to retrieve and process information from various data sources. For example, data sources can store data records and/or segments of data records which may be identified by the enterprise generative artificial intelligence system 402 based on embedding values (e.g., vector values associated with data records and/or segments). Data records can include tables, text, images, audio, video, code, application outputs (e.g., predictive analysis and/or other insights generated by artificial intelligence applications), and/or the like.”] the computing services including a conversational chat assistant; [Siebel, Figures 1 and 2 show the “input 102” and the “input layer 202” “[0032] The input layer 202 represents a layer of the enterprise generative artificial intelligence system architecture that receives an input (e.g., a query, complex input, instruction set, and/or the like) from a user or system. For example, an interface module of the enterprise generative artificial intelligence system may receive the input.” Figure 3, “chat 350” and “[0159] … For example, a previous conversation 864 (e.g., as part of a chat with a chat bot) may have included a conversation about France. …”]
an application server receiving user input for the conversational chat assistant via the Internet; [Siebel, Figure 3, “application hosting and application engine 360.” “[0042] In some embodiments, a user query 362 and/or other inputs may be received by an application hosting an application engine 360 which can communicate with a low latency engine 358 to provide the input, or a transformed input, to the orchestrator 342. The orchestrator 342 may utilize the various agents, large language models, and other features to generate an accurate and reliable (e.g., without hallucination) answer to the user query 362.”]
a generative language model interface providing access to one or more generative language models; [Siebel, Figure 3, “[0041] In the example of FIG. 3, the enterprise generative artificial intelligence system includes an orchestrator 342 with a fine-tuned large language model. The orchestrator 342 and/or agents 326-339 may include and/or access task-specific large language models 348-356, as well as external or third-party large language models 340 in some embodiments….”]
an orchestration and planning service configured to identify a plurality of actions based on the user input and to execute the plurality of actions to determine a natural language response message; [Siebel. The “actions” of the Claim are mapped to the “tools” or “tasks” of Siebel that are performed by the various “agents.” “[0023] … Agents can include one or more multimodal models (e.g., large language models) to accomplish the prescribed tasks using a variety of different tools. Different agents can use various tools to execute and process unstructured data retrieval requests, structured data retrieval requests, API calls (e.g., for accessing artificial intelligence application insights), and the like. Tools can include one or more specific functions and/or machine learning models to accomplish a given task (or set of tasks).” Figure 6, “orchestrator 604” receiving the “user query 602” and “choosing tool 608” / “plurality of actions” to orchestrate the query and then finally generate the “summary 620” / “natural language response message” to be output as the response at 632. Figure 3, 342. “[0041] In the example of FIG. 3, the enterprise generative artificial intelligence system includes an orchestrator 342 with a fine-tuned large language model.…” “[0025] The orchestrator manages the agents to efficiently process disparate inputs or different portions of an input. For example, an input may require the system to access and retrieve data records from disparate data sources (e.g., unstructured datastores, structured datastores, timeseries datastores, and the like), database tables from different types of databases, and machine learning insights from different machine learning applications. The different agents can each separately, and in parallel, handle each of these requests, greatly increasing computational efficiency.”]
a metadata framework specifying information related to the conversational chat assistant, the metadata framework including a definition associated with an action of the plurality of actions, the definition including one or more inputs, one or more outputs, and one or more operations performed via the computing services environment; and [Siebel, Figure 3, 318: Metadata Store. “[0039] … The datastores 318 can include vector datastores (e.g., FAISS implementation), metadata datastores ….” “[0134] The system may leverage characteristics of a model driven architecture, which represent system objects (e.g., components, functionality, data, etc.) using rich and descriptive metadata, to dynamically generate queries for conducting searches across a wide range of data domains (e.g., documents, tabular data, insights derived from AI applications, web content, or other data sources)….”]
a communication interface configured to transmit the natural language response message to a client machine via the application server. [Siebel, Figure 4, “communication network 408” / “communication interface” operating between servers and clients and the generative AI system 402: “[0052] The enterprise systems 404 can include enterprise applications (e.g., artificial intelligence applications), enterprise datastores, client systems, and/or other systems of an enterprise information environment. As used herein, an enterprise information environment can include one or more networks (e.g., cloud, on premise, air-gapped or otherwise) of enterprise systems (e.g., enterprise applications, enterprise datastores), client systems (e.g., computing systems for access enterprise systems)….”]
Siebel is arguably an anticipatory reference under 35 U.S.C. 102; however, Siebel does not expressly recite the term “client organization.”
Channapattan teaches:
a database system storing a plurality of database records for a plurality of client organizations accessing computing services via the computing services environment, the computing services including a conversational chat assistant; [Channapattan, “Methods, systems, and devices for processing a natural language request are described. An identity management system may receive a user request for information maintained in the identity management system and related to a client organization. The request may be received in a natural language form. In response to the user request, a machine learning model may be employed to generate a query in a machine-readable language that is understandable by the identity management system…. Based on receiving a selection of a portion of the information output for display, the machine learning model may be employed to generate a natural language explanation of the selected portion. In some cases, the natural language explanation may be a summarization of information associated with the selected portion and retrieved from multiple data sources.” Abstract. “[0065] The described techniques provide a simplified and streamlined way for administrators of client organizations to conveniently access information maintained in the identity management system about their client organization, thereby improving the user experience and ensuring that the client organizations are able to access the most accurate and up-to-date information. Further, by employing a machine learning model to generate machine-readable queries to retrieve information responsive to a user's query, the identity management system may avoid receiving queries written and executed by administrators who may issue queries that are poorly written or not optimized for the identity management system's databases. Executing such queries may result in long running queries that degrade performance at the identity management system.”]
Siebel and Channapattan both pertain to the use of a natural language interface to query databases containing the data of various enterprises. It would have been obvious to combine Channapattan's express teaching of client organizations having their own databases with the system of Siebel, which includes implied and indirect teachings of client organizations, to arrive at a more solid teaching. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Regarding Claim 2, Siebel teaches:
2. The computing services environment recited in claim 1, wherein an action of the plurality of actions comprises retrieving one or more database records from the database system, the one or more database records being associated with a client organization of the plurality of client organizations. [Siebel, the “tools 108” and “tasks” of Siebel teach the “action” of the Claim, and the performance of actions/tasks involves retrieving data records in Siebel: “[0025] … For example, an input may require the system to access and retrieve data records from disparate data sources ….” “[0030] The agents 106 can select the appropriate tools 108 to accomplish a set of prescribed tasks (e.g., tasks prescribed by the orchestrator). The tools 108 can make the appropriate function calls to retrieve disparate data records among other functions….”]
Regarding Claim 3, Siebel teaches:
3. The computing services environment recited in claim 2,
wherein an action of the plurality of actions comprises generating a summary of the one or more database records via a generative language model, and [Siebel, Figure 3, “Task Specific Fine Tuned LLMs 346” include the “Summarization 352.” Figure 6, “Query DB for relevant Docs 614” and “Query DB for relevant data 622.” Both paths end in “summary 620” and “visualization summary 630.”]
wherein the natural language response message includes the summary. [Siebel, Figure 6, “User Query 602” as input and “Summary of Tool Outputs as Response 632” as output. “[0026] Agents can process the disparate data returned by the different agents and/or tools. For example, large language models typically receive inputs in natural language format. The agents may receive information in a non-natural language format (e.g., database table, image, audio) from a tool and transform it into natural language describing the tool output in a format understood by large language models. A large language model can then process that input to “answer,” or otherwise satisfy the initial input.”]
Regarding Claim 4, the “actions” of the Claim were mapped to the “tools” or “tasks” of Siebel that are performed by the various “agents”: “[0023] … Agents can include one or more multimodal models (e.g., large language models) to accomplish the prescribed tasks using a variety of different tools. Different agents can use various tools to execute and process unstructured data retrieval requests, structured data retrieval requests, API calls (e.g., for accessing artificial intelligence application insights), and the like. Tools can include one or more specific functions and/or machine learning models to accomplish a given task (or set of tasks).”
Siebel does not expressly teach storing information in databases.
Channapattan teaches:
4. The computing services environment recited in claim 1,
wherein an action of the plurality of actions comprises storing information to the database system. [Channapattan teaches the storage of several types of information including the generated output: “[0084] … The anonymization module 250 may further cache, or otherwise persist or store, the identified personally-identifiable information. Such techniques may prevent or reduce a likelihood of having personal or sensitive information injected into the machine learning model….” “[0086] … The generative AI module 260 may select the one or more prompts from a prompt store. The prompt store may be maintained in the database 290. The generative AI module 260 may select the one or more prompts based on the user query, such as based on a determined intent of the user query….” “[0126] The output module 715 may manage output signals for the device 705. For example, the output module 715 may receive signals from other components of the device 705, such as the software module 720, and may transmit these signals to other components or devices. In some examples, the output module 715 may transmit output signals for display in a user interface, for storage in a database or data store, for further processing at a server or server cluster, or for any other processes at any number of devices or systems. In some cases, the output module 715 may be a component of an I/O controller 910 as described with reference to FIG. 9.” “[0172] The database controller 915 may manage data storage and processing in a database 935. In some cases, a user may interact with the database controller 915. In other cases, the database controller 915 may operate automatically without user interaction. The database 935 may be an example of a single database, a distributed database, multiple distributed databases, a data store, a data lake, or an emergency backup database.”]
Siebel and Channapattan both pertain to the use of a natural language interface to query databases, and it would have been obvious to add the storing of the output to the databases of the system, as taught by Channapattan, to the system of Siebel as one additional use. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Regarding Claim 5, Siebel teaches, or at least suggests:
5. The computing services environment recited in claim 1,
wherein an action of the plurality of actions comprises authenticating a user account associated with the user input. [Siebel, Figure 5, the “enterprise access control module 514” evaluates authorization for access, which suggests authenticating the user. “[0120] In some implementations, the enterprise access control module 514 may evaluate (e.g., using access control lists) whether a user is authorized to access all or only a portion of a result (e.g., answer). For example, a user can provide a query associated with a first department or sub-unit of an organization. Members of that department or sub-unit may be restricted from accessing certain pieces of data, types of data, data models, or other aspects of a data domain in which a search is to be performed….”] (Authentication means verifying that a user is actually who the user claims to be. Evaluating authorization for access is not the same as authentication, because authorization can be evaluated by other means, but it does include, and therefore suggests, authentication.)
Regarding Claim 6, Siebel teaches:
6. The computing services environment recited in claim 1, further comprising a conversational chat studio configured to customize the conversational chat assistant based on graphical user input provided via a graphical user interface. [Siebel, Figure 2, “dashboard agent 226” generates a GUI “[0036] The dashboard agent 226 may be configured to generate one or more visualizations and/or graphical user interfaces, such as dashboards….” “[0139] In some embodiments, the interface module 528 can function to generate graphical user interface components (e.g., server-side graphical user interface components) that can be rendered as complete graphical user interfaces on the enterprise generative artificial intelligence system 402 and/or other systems. For example, the interface module 528 can function to present an interactive graphical user interface for displaying and receiving information.” Siebel does use a conversational chat 350 as one of its “task-specific fine-tuned LLMs 346” and also includes a “visualization agent 334” and a “visualization tool 628” which generates a “visualization summary 630” which is a graphical summary of the tables and results.]
Regarding Claim 7, Siebel teaches:
7. The computing services environment recited in claim 1,
wherein the conversational chat assistant is one of a plurality of conversational chat assistants accessible via the computing services environment, and [Siebel, Figure 3, the “task-specific fine-tuned LLMs 346” are all conversational because they take in natural language as input and provide natural language as output, and as shown in Figure 3, each has a particular skill.]
wherein the conversational chat assistant is specific to a client organization of the plurality of client organizations. [Siebel, Figure 4, “[0052] … Enterprise systems 404 can include data flow and management of different processes (e.g., of one or more organizations) and can provide access to systems and users of the enterprise while preventing access from other systems and/or users….” “[0053] The external systems 406 can include applications, datastores, and systems that are external to the enterprise information environment. In one example, the enterprise systems 404 may be a part of an enterprise information environment of an organization that cannot be accessed by users or systems outside that enterprise information environment and/or organization….”]
Regarding Claim 8, Siebel teaches:
8. The computing services environment recited in claim 1,
wherein the user input includes natural language user input, and [Siebel, in Figures 1, 2, 3, and 6, the input 102/204/360/602 from the user is in natural language. “[0027] FIG. 1 depicts a diagram 100 of an example logical flow of an enterprise generative artificial intelligence system according to some embodiments. As shown, an initial input 102 is received by the system from either a user (e.g., a natural language input) or another system (e.g., a machine-readable input).”]
wherein identifying the plurality of actions comprises: [Siebel, the actions are tasks/tools/agents or LLMs of Siebel.]
determining an intent identification input prompt that includes the natural language user input and one or more natural language instructions executable by the generative language model to identify the plurality of actions; [Siebel, Figures 8A, 8B, and 8C and Figure 9 show the input of the natural language query by the user and then the rewrites of the query by the aid of other LLMs to generate a final query/prompt that is input to the final LLM that generates the answer. Figure 8A shows the generation of the Prompts and update of the Prompts for input to the LLMs 808 and 810. The Prompts include the “user query 802” / “natural language user input” and whatever is added by the “retrieval model 804.” “[0157] In step 832, a query is received. The query 832 is executed against a vector store 832 and relevant passages 836 are retrieved. … The query 832 and the passages 836 provided to a large language model 838 which can create an extract 840 (e.g., a summary of the passage) for each passages. The extracts are combined (e.g., concatenated) in step 842 and provided to a large language model 844 along with the query 832. … The large language model 844 can generate a final response based on the query 832 and the combined extracts 842. ….” “[0047] … The enterprise generative artificial intelligence system 402 can include a human computer interface for receiving natural language queries and presenting relevant information with predictive analysis from the enterprise information environment in response to the queries. For example, the enterprise generative artificial intelligence system 402 can understand the language, intent, and/or context of a user natural language query. The enterprise generative artificial intelligence system 402 can execute the user natural language query to discern relevant information from an enterprise information environment to present to the human computer interface (e.g., in the form of an “answer”).”]
transmitting the intent identification input prompt to the generative language model for completion; [Siebel, Figure 3, the input to any of the multiple types of LLMs in Figure 3 teaches this limitation: there are the “external LLMs 340,” and then the “task specific fine tuned LLMs 346.” Figure 8A, the input of the prompt to “LLM 808.”]
receiving an intent identification prompt completion from the generative language model; and [Siebel, Figure 10, 1004: “[0174] In step 1004, the enterprise generative artificial intelligence system selects, by the orchestrator based on the processed input, a first agent of a plurality of different agents….In some embodiments, the first agent includes one or more second large language models (e.g., a large language model different from the first large language model).” Selection of an agent means that the intent has been identified, which corresponds to the “intent identification prompt completion” of the Claim.]
identifying the plurality of actions by parsing the intent identification prompt completion. [Siebel, “[0180] In some embodiments, the orchestrator parses the input into different portions (e.g., segments) and routes each portion to a respect agent….”]
Regarding Claim 9, Siebel teaches:
9. The computing services environment recited in claim 8,
wherein the intent identification input prompt identifies a plurality of predetermined actions executable by the computing services environment, [Siebel, Figure 10, 1004 and 1008, where the LLMs and agents are selected. “[0180] In some embodiments, the orchestrator parses the input into different portions (e.g., segments) and routes each portion to a respect agent….” Figure 11, 1106: selecting a first agent from a plurality of different agents.]
wherein the plurality of actions are a subset of the plurality of predetermined actions, and [Siebel, the plurality of different Agents and LLMs is predetermined and not indefinite. The list is shown in Figure 1 or 3.]
wherein the plurality of actions are identified in the intent identification prompt completion. [Siebel, Figure 11, 1106, 1110 each identifies different LLMs for performance of the tasks.]
Claim 13 is a method claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale.
Claim 14 is a method claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.
Claim 15 is a method claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale.
Claim 16 is a method claim with limitations corresponding to the limitations of Claim 4 and is rejected under similar rationale.
Claim 17 is a method claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale.
Claim 18 is a method claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale.
Claim 19 is a computer program product claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale.
Claim 20 is a computer program product claim with limitations corresponding to the limitations of Claims 2 and 3 and is rejected under similar rationale.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Siebel and Channapattan in view of Van Hoof (U.S. 20180075131).
Regarding Claim 10, Siebel teaches:
10. The computing services environment recited in claim 9,
wherein the plurality of predetermined actions are each associated with a respective unique identifier and a respective action description in the intent identification input prompt, and [Siebel teaches that the input natural language query of Figures 1-3 provides a description of the task/action that is requested by the user. “[0028] An orchestrator agent (or, simply, orchestrator) can pre-process the input in step 104. Pre-processing can include, for example, acronym handling, translation handling, punctuation handling, input identification (e.g., identifying different portions of the input 102 for processing by different agents). The orchestrator can use a multimodal model (e.g., large language model) to further process the input 102 to create a plan for determining a result (step 112) for the input. The plan may include a prescribed set of tasks, such as structured data retrieval tasks, unstructured data retrieval tasks, timeseries processing tasks, visualization tasks, and the like. In some embodiments, the plan can designate which tools 108 should be used to execute the tasks, and the orchestrator can select the agents based on the designated tools. In some embodiments, the plan can designate which agents should be used to execute the tasks, and the agents can independently designate which tools 108 should be used to execute the tasks.”]
wherein the plurality of actions are identified in the intent identification prompt completion via the respective unique identifiers.
Siebel does mention the preservation of the privacy of the data: “[0052] … For example, enterprise systems 404 can include access and privacy controls….”
Channapattan mentions privacy concerns and anonymizing PII through the use of placeholders, which is closely related: “[0111] … To accomplish this, the generative AI training system 435 may convert a data schema of an identity management system database, such as database 490, to a public data schema by enforcing one or more data privacy and security policies….” Figure 6: “[0117] At step 9, the query service 620 may anonymize the user query by removing personally-identifiable information or other sensitive information.” “[0121] At step 16, the query service 620 may de-anonymize the model-generated response by adding back any personally-identifiable or sensitive information that was removed at step 9.” “[0149] In some examples, to support pre-processing, the pre-processing component 850 may be configured to support parsing the natural language user query to identify personally-identifiable information. In some examples, to support pre-processing, the pre-processing component 850 may be configured to support replacing the personally-identifiable information with a placeholder value. In some examples, to support pre-processing, the pre-processing component 850 may be configured to support caching the personally-identifiable information.”
Neither reference expressly teaches the use of “unique identifiers,” which, according to the Specification of the instant Application, are used for masking the personally identifiable information of users ([0070]).
Van Hoof teaches:
wherein the plurality of predetermined actions are each associated with a respective unique identifier and a respective action description in the intent identification input prompt, and [Van Hoof, Figure 2 shows a Query 214 and a Conversation Identifier 216 are input to the main NLP 220. The Query 214 would include the “action description” of the Claim because the “intents 240” are extracted from it. Additionally, the “Masked User ID 250” indicates that a “unique identifier” is associated with the query that masks the identity of the users. “[0059] The masked user identifier (250) can be masked (altered to hide the underlying identifier from which the masked user identifier (250) is derived) to protect privacy of a corresponding user's information. For example, a masked user identifier (250) for a user profile (224) can be a randomly generated globally unique identifier associated with the user profile (224) (and possibly also associated with a particular extension (230), so that the globally unique identifier is specific to the extension (230) and specific to the user profile (224)) by the main natural language processor (220)….” See Figures 3 and 4 which show the receiving of the natural language query 310/410 and generation of intent of the query 320/420 and the remainder of the steps that lead to the response at the last step.]
wherein the plurality of actions are identified in the intent identification prompt completion via the respective unique identifiers. [Van Hoof, while masking the user identity via the “unique identifier” / “masked user ID 250,” still keeps the processing (the plurality of actions and intents of the Claim) associated with the now-masked user identity. “[0059] … However, the conversation query processor (234) can still track the masked user identifier (250) to provide personalized processing and responses for particular user profiles, such as by tracking preferences of particular user profiles that have previously used the conversation query processor (234).”]
Siebel/Channapattan and Van Hoof pertain to the use of a natural language interface to query databases, and it would have been obvious to combine the unique-identifier masking of Van Hoof with the system of the combination to preserve the privacy of the user’s data and information. This combination falls under combining prior art elements according to known methods to yield predictable results or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Siebel and Channapattan in view of Mondlock (U.S. 12079570) and further in view of Gardner (U.S. 20260010738).
Regarding Claim 11, Siebel teaches the natural language query input but not a topic identification input. Channapattan teaches parsing the query to identify PII or malicious content but not to identify topics. (See Figure 6 of the instant Application for support for this Claim.) Mondlock teaches:
11. The computing services environment recited in claim 8, wherein identifying the plurality of actions comprises:
determining a topic identification input prompt that includes the natural language user input and a second one or more natural language instructions executable by the generative language model to identify a topic based on the natural language user input; [Mondlock, Figure 3B, “receive user query 320.” Figure 3D, “Topic Modeling of Query 362A,B,C.” “In some aspects, the RAG pipeline 300 may include at blocks 362A, 362B, and/or 362C performing topic modeling of the user query. The topic modeling may be performed by the relevant information identification module 130 or any other suitable program. The topic modeling may generate one or more topic keywords from the user query.” 15:4-10.]
transmitting the topic identification input prompt to the generative language model for completion; [Mondlock, Figure 3D: “In some aspects, the RAG pipeline 300 may include at blocks 370A and/or 370B performing a keyword search of the retrieved documents and/or retrieved assets using the topic keywords. The keyword search may be performed by the relevant information identification module 130 or any other suitable program. The keyword search may select one or more documents or assets that include one or more topic keywords.” 14:10-17. The RAG pipeline is for input to LLM: “…One solution to this problem is retrieval-augmented generation (RAG). RAG supplements user queries with relevant supplied data to enable LLMs to provide improved responses. ….” 1:13-24. “The following relates to packaging customizable generative AI pipelines, such as RAG pipelines. A generative AI pipeline includes the necessary software components for receiving a user query, fetching relevant external data, and submitting a prompt to cause the LLM to answer the user query based on provided relevant external data.” 1:30-40.]
receiving a topic identification input prompt completion from the generative language model; and
identifying a topic of a plurality of topics by parsing the intent identification prompt completion,
wherein each of the plurality of topics corresponds with a respective topic-based subset of the plurality of actions. [Mondlock, Figures 3A-3F, and particularly Figure 3D, where the query undergoes topic modeling to extract keywords that are used at 374 to reach experts that are relevant to the topic searched by the user for conducting the task/action and providing the response. “The LLM service 170 may be owned or operated by an LLM provider. The LLM service 170 may include an LLM model. An LLM is a type of artificial intelligence (AI) algorithm that uses deep learning techniques to perform a number of natural language processing (NLP) tasks, such as understanding, summarizing, generating, and/or predicting new content.” 5:14-20. “In some aspects, the RAG pipeline 300 may include at block 374 outputting the relevant experts to the user. The relevant experts may be outputted by the input/output module 120 or any other suitable program. The relevant expert output may include names, contact information, and/or links to, copies of, or summaries of the expert biographies. In some aspects, the relevant experts may be used as input to further identify relevant documents or assets to respond to a user query, as discussed elsewhere herein.” 15:34-42.]
Siebel/Channapattan and Mondlock pertain to the use of a natural language interface to query databases, and it would have been obvious to combine the topic identification of Mondlock with the system of the combination for conducting the query and outputting the proper response. This combination falls under combining prior art elements according to known methods to yield predictable results or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
The above references do not teach that the topics are obtained by parsing a prompt completion received from an LLM.
Gardner teaches:
receiving a topic identification input prompt completion from the generative language model; and [Gardner, Figure 1 showing the “LLM and/or AI Machine 111,” which is in communication with “data source machines 113,” “client devices 110,” and “database servers 124” that have access to “databases including content items 128.” Figure 2 including the “prompt engineering module 220.” Figure 3, “automatically engineer a prompt for an LLM” 306. “[0065] The system can construct prompts tailored to each model's capabilities and combine outputs for an ensemble summarization approach. The prompts provide the text along with instructions designed to elicit the desired analysis from each model.” “[0066] For example, a single prompt could provide the text to the LLM for overall summarization, ask QA models for key details, have classifiers tag topics, retrievers augment with external data, and sentiment analysis score tone.” “[0074] … Pre-trained semantic search engines can help identify contextual text passages for a given topic.” “[0148] … Templates may determine optimal sentence positioning based on relationships between entities, topics, and other semantic factors.” “[0155] In example embodiments, the prompt engineering model utilizes a transformer-based neural network architecture. It employs an encoder-decoder structure. The encoder ingests features extracted from the input content item, like part-of-speech tags, named entities, sentiment scores, topic vectors, and the target abstraction level. The decoder model generates an optimized prompt sequence conditioned on the encoder outputs.” “[0336] Topic modeling discovers high-level themes;”]
identifying a topic of a plurality of topics by parsing the intent identification prompt completion, [Gardner, Figure 3; see the example of [0537]-[0542] and “[0561] The Query API allows programmatically asking questions to expand summary coverage of key topics: …” “[0577] A research paper recommendation engine can summarize collections of papers on topics of interest to a scientist to surface relevant new findings.” See also [0727], [0733], [0746].]
Siebel/Channapattan/Mondlock and Gardner pertain to the use of a natural language interface to query databases, and it would have been obvious to combine the prompt engineering of Gardner, including its topic identification, with the system of the combination for generating a prompt usable by the LLMs of these references and for outputting the proper response. This combination falls under combining prior art elements according to known methods to yield predictable results or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Regarding Claim 12, Siebel teaches (the first three limitations of this Claim are the same as the three limitations of Claim 9; only the last limitation differs):
12. The computing services environment recited in claim 11,
wherein the intent identification input prompt identifies a plurality of predetermined actions executable by the computing services environment, [Siebel, see mapping for Claim 9.]
wherein the plurality of actions are a subset of the plurality of predetermined actions, [Siebel, see mapping for Claim 9.]
wherein the plurality of actions are identified in the intent identification prompt completion, and [Siebel, see mapping for Claim 9.]
wherein the plurality of predetermined actions are those corresponding with the identified topic. [This is the limitation where this Claim differs from Claim 9.]
Siebel differentiates between agents, tasks, and tools, all of which are topic-dependent, but it does not include an express mention of topics. Neither does Channapattan.
Mondlock teaches:
wherein the plurality of predetermined actions are those corresponding with the identified topic. [Mondlock, Figures 3A-3F, and particularly Figure 3D, where the query undergoes topic modeling to extract keywords that are used at 374 to reach experts that are relevant to the topic searched by the user for conducting the task/action and providing the response. “The LLM service 170 may be owned or operated by an LLM provider. The LLM service 170 may include an LLM model. An LLM is a type of artificial intelligence (AI) algorithm that uses deep learning techniques to perform a number of natural language processing (NLP) tasks, such as understanding, summarizing, generating, and/or predicting new content.” 5:14-20. “In some aspects, the RAG pipeline 300 may include at block 374 outputting the relevant experts to the user. The relevant experts may be outputted by the input/output module 120 or any other suitable program. The relevant expert output may include names, contact information, and/or links to, copies of, or summaries of the expert biographies. In some aspects, the relevant experts may be used as input to further identify relevant documents or assets to respond to a user query, as discussed elsewhere herein.” 15:34-42.]
Siebel/Channapattan and Mondlock pertain to the use of a natural language interface to query databases, and it would have been obvious to combine the topic identification of Mondlock with the system of the combination for conducting the query and outputting the proper response. This combination falls under combining prior art elements according to known methods to yield predictable results or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARIBA SIRJANI whose telephone number is (571)270-1499. The examiner can normally be reached 9 to 5, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Fariba Sirjani/
Primary Examiner, Art Unit 2659