DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
All previous objections and rejections directed to the Applicant’s disclosure and claims not discussed in this Office Action have been withdrawn by the Examiner.
Response to Amendments and Arguments
The 101 rejections are maintained. The applicant’s amendments do not change the 101 analysis. In the remarks and arguments, the applicant states that the Office Action does not fully consider the interactions between all elements of the claims (e.g., claim 1). However, the examiner asserts that the sequence of limitations, in its entirety, has been considered. This is illustrated by the example provided in the 101 analysis section of the non-final Office Action (and repeated below). The example shows, step by step, how each limitation could conceivably be performed by a human, thereby arriving at the same result as the claim. Second, the applicant states that the Office Action does not give sufficient weight to the evidence of record demonstrating that the claimed invention provides a technical solution to a technical problem. However, the examiner notes that the techniques employed in the analysis of multi-turn query data are now well-known and conventional. To overcome the 101 rejections, the applicant would need to incorporate claim language that distinguishes the analysis of the multi-turn query data from such conventional techniques.
With respect to the 103 rejections, the applicant’s arguments and amendments have been carefully considered, but they are not persuasive. The amendments to independent claims 1 and 14 do not change the interpretation of the applied prior art. Claim 1 has been amended to include “…indicative of an intent of the contextually aware query”, “…comprising a plurality of similar intents”, and “…and the plurality of similar intents”, none of which add meaningfully to the claim. See claim mapping below. The same applies to claim 14. Thus, the 103 rejections are maintained with the same references as applied in the non-final Office Action.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite steps for query intent understanding and search result generation. The limitations of claims 1-13, as drafted, recite a computer program product or system that, under the broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting “computer readable storage medium”, “program instructions”, “computer”, “processor”, and “memory”, nothing in the claim elements precludes the steps from practically being performed in the mind and/or with pen-and-paper calculations. As to claims 1 and 10, under the BRI, a human could receive a query from another human, as well as prior input queries associated with the current query. The human receiving the query could then consider (i.e., process) the query and additional details. Next, this human could mentally generate a cluster of the query and other related queries and, lastly, determine relevant search results. Furthermore, aside from being generic, the ML model does not actively perform any of these steps. The feeding step and the extracting step could simply involve a human mentally considering a query by reading it. There appear to be no technical specifics about how this ML model might be structured or how it might carry out analysis of the query data. Accordingly, the steps of claims 1 and 10 are directed to certain methods of organizing human activity and/or a mental process. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
As to claim 14, under the BRI, a human could receive a query from another human, as well as prior input queries associated with the current query. The human receiving the query could then consider (i.e., process) the query and additional details to determine a query intent. This human could, with pen and paper, plot the query embeddings for the associated queries and compute a loss function by measuring the difference between the query embeddings. Lastly, the human could adjust parameters to mitigate the difference between the query embeddings. Accordingly, the steps of claim 14 are directed to a mathematical concept. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mathematical concept but for the recitation of generic computer components, then it falls within the “Mathematical Concept” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
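The generic, pen-and-paper character of the recited sequence (embed the queries, measure the difference between the embeddings with a loss, and adjust parameters to mitigate the difference) can be seen in the following minimal numerical sketch. The linear model, vectors, and step size are all hypothetical illustrations of the conventional mathematics; they do not represent the applicant’s claimed system.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))               # parameters of a generic linear embedding model (hypothetical)

def embed(x):
    return W @ x                           # generate a query embedding

query   = np.array([1.0, 0.0, 0.5])        # current query features (hypothetical)
similar = np.array([0.9, 0.1, 0.4])        # a second query with a similar intent (hypothetical)

def loss():
    diff = embed(query) - embed(similar)   # difference between the query embeddings
    return 0.5 * float(diff @ diff)        # squared-error loss over that difference

# Adjust parameters to mitigate the difference (one gradient step).
diff = embed(query) - embed(similar)
grad = np.outer(diff, query - similar)     # dL/dW for the squared-error loss
before = loss()
W -= 0.1 * grad
assert loss() < before                     # the two embeddings moved closer together
```

Each of these operations (a matrix-vector product, a squared difference, and a gradient update) is the kind of calculation that could be carried out by hand, consistent with the analysis above.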
This judicial exception is not integrated into a practical application. In particular, claims 1-20 only recite the additional elements “computer readable storage medium”, “program instructions”, “computer”, “processor”, and “memory” to perform the aforementioned steps. The processor and other hardware are recited at a high level of generality (i.e., as a generic processor performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional hardware elements to perform both the aforementioned steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible.
A similar analysis applies to the dependent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 10-11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220277741, hereinafter referred to as Chaudhary et al., in view of US 20250131247, hereinafter referred to as Mondlock et al.
Regarding claim 1 (Currently Amended), Chaudhary et al. discloses a computer-implemented method for generating a contextually aware query (“In other embodiments of the present disclosure, a method for generating a first environment based on configurations rules in a conversational interaction context is provided…Each environment is configured to determine a plurality of user intents based on a plurality of user queries associated with the corresponding domain,” Chaudhary et al., para [0006].), the method comprising:
obtaining, by a computing system comprising one or more computing devices, an input query (“For example, the workstation(s) 106 may transmit data related to user interactions (e.g., questions, queries) to conversational interaction computing device 102,” Chaudhary et al., para [0022].);
processing, by the computing system, the contextually aware query with a machine-learned embedding model to generate a query embedding (“The machine learning model may then use the clusters to predict user intents in real-time based on user interactions with the corresponding domain. Each training dataset for each corresponding domain may similarly be trained using the data processing rules, data embedding rules, and the training rules. The machine learning model(s) may then be deployed on corresponding domain(s) to accurately and efficiently predict user intents in real-time as new data (e.g., user query, user interaction, user question) is received from or at the domain(s). The output(s) of the trained machine learning model may then be used by conversational interaction computing device 102 to perform operations, such as but not limited to, provide query results (e.g., answers, reaction to the query, perform actions) in real-time or near real-time,” Chaudhary et al., para [0036]. The domain also provides context for the query.) indicative of an intent of the contextually aware query (“Conversational interaction computing device 102 may apply training rules to the vector embeddings to train a machine learning model for the corresponding domain to determine user intents based on user interactions (e.g., queries, requests, questions, interactions) in real-time,” Chaudhary et al., para [0034]. And, “At step 506, a first environment is configured using the first dataset and the set of configuration rules to determine a result user intent based on a requested query associated with the first domain. The first environment embeds the plurality of first phrase-intent pairs based on the set of configuration rules,” Chaudhary et al., para [0063].);
determining, by the computing system, a query embedding cluster associated with the query embedding, wherein the query embedding cluster is associated with a plurality of other embeddings associated with a plurality of other queries (Chaudhary et al., para [0036].) comprising a plurality of similar intents (Chaudhary et al., para [0035]-[0036].); and
determining, by the computing system, a plurality of search results based, at least, on the query embedding cluster (Chaudhary et al., para [0036].), and the plurality of similar intents (Chaudhary et al., para [0035]-[0036].).
Mondlock et al. is cited to disclose obtaining, by the computing system, multi-turn query data, wherein the multi-turn query data is descriptive of previous inputs obtained before the input query (“In some aspects, the computer-implemented method 700 may include at block 714 identifying a user chat session associated with the user query in order to determine context for the user query. The user chat session may be identified by the chat history module 124. The existence of the user chat session may be determined at block 324 of the RAG pipeline 300. A chat history associated with the user chat session may be fetched at block 328 of the RAG pipeline 300. The computer-implemented method 700 may include incorporating text from the chat history into the user query. Text from the chat history may be incorporated into the user query at block 342 of the RAG pipeline,” Mondlock et al., para [0138]. Here, the chat history serves as multi-turn query data. The applicant’s specification states that multi-turn query data may be descriptive of previous inputs obtained before the input query.), wherein the previous inputs and the input query are associated with a particular multi-turn session (“In some aspects, if there have been previous user queries in the current chat session, then the RAG pipeline 300 may include at block 342 rephrasing the user query based upon the chat history. The user query may be rephrased by the query module 128 or any other suitable program. Rephrasing the user query may include replacing pronouns in the user query with their antecedents from the chat history. Rephrasing the user query may include supplementing the user query with a summary of or relevant portions of the chat history,” Mondlock et al., para [0073]. The input query is related and rephrased using associated chat history (i.e., a particular multi-turn session).); and
processing, by the computing system, the input query and the multi-turn query data to generate the contextually aware query, wherein the contextually aware query is descriptive of the input query and additional details, wherein the additional details are descriptive of a context of the input query based on the multi-turn query data (“The computer-implemented method 800 may include at block 812 causing an intent associated with the user query to be determined. The intent may be determined by the intent classification module 126 or by the LLM service 170. In some aspects, the intent may be determined from a plurality of predetermined intents based upon the user query and a context based upon a session history associated with the user. In some aspects, the intent may comprise one or more keywords,” Mondlock et al., para [0154].). Mondlock et al. benefits Chaudhary et al. by applying a generative AI pipeline for receiving a user query, fetching relevant external data, and submitting a prompt to cause the LLM to answer the user query based on provided relevant decentralized external data, thereby enabling organizations to deploy a generative AI pipeline that accesses a plurality of data sources in a federated model (Mondlock et al., para [0004]). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Chaudhary et al. with those of Mondlock et al. to allow Chaudhary et al. to answer user queries using the context of prior turns in the session.
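For context, the rephrasing operation quoted from Mondlock et al. (para [0073]) — replacing pronouns in the user query with their antecedents from the chat history — is of the following conventional kind. The function, pronoun list, and strings below are hypothetical illustrations only, not Mondlock et al.’s implementation:

```python
def rewrite_with_history(query: str, antecedent: str) -> str:
    # Replace a pronoun in the current query with its antecedent from the
    # chat history, yielding a contextually aware query.
    pronouns = {"it", "its", "they", "their"}
    rewritten = []
    for word in query.split():
        core = word.lower().strip("?.,")
        rewritten.append(antecedent if core in pronouns else word)
    return " ".join(rewritten)

prior_turn_entity = "Acme Corp.'s"  # antecedent recovered from the prior turn
query = "What date was its initial public offering?"
print(rewrite_with_history(query, prior_turn_entity))
# -> What date was Acme Corp.'s initial public offering?
```

The example mirrors the “Acme Corp.” scenario quoted from Mondlock et al., para [0041], in which “its” in the second query is replaced using the chat history.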
As to claim 10, system claim 10 and method claim 1 are related as method and system of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 10 is similarly rejected under the same rationale as applied above with respect to method claim 1. Also, Chaudhary et al., para [0037] and fig. 2, teach a processor, memory, and instructions.
Regarding claim 2, Chaudhary et al., as modified by Mondlock et al., discloses the computer-implemented method of claim 1, wherein the machine-learned embedding model was trained to generate embeddings that map embeddings associated with similar query intents to a shared embedding cluster (“The machine learning model may then use the clusters to predict user intents in real-time based on user interactions with the corresponding domain. Each training dataset for each corresponding domain may similarly be trained using the data processing rules, data embedding rules, and the training rules. The machine learning model(s) may then be deployed on corresponding domain(s) to accurately and efficiently predict user intents in real-time as new data (e.g., user query, user interaction, user question) is received from or at the domain(s). The output(s) of the trained machine learning model may then be used by conversational interaction computing device 102 to perform operations, such as but not limited to, provide query results (e.g., answers, reaction to the query, perform actions) in real-time or near real-time,” Chaudhary et al., para [0036].).
Regarding claim 3, Chaudhary et al., as modified by Mondlock et al., discloses the computer-implemented method of claim 1, the method further comprising:
determining, by the computing system and based on the query embedding cluster, one or more attributes associated with the query embedding, wherein the one or more attributes are descriptive of a particular topic associated with at least one of the input query or with the multi-turn query data (Chaudhary et al., para [0036]. Here, the domain is synonymous with a topic, and the training data comprises attributes corresponding to a domain.).
Regarding claim 4, Chaudhary et al., as modified by Mondlock et al., discloses the computer-implemented method of claim 1, wherein the query embedding is associated with a query-intent pair comprising the contextually aware query and a query intent (“In other embodiments of the present disclosure, a method for generating a first environment based on configurations rules in a conversational interaction context is provided. In one embodiment, a method can include obtaining a first dataset associated with a first domain, the first dataset includes a plurality of first phrase-intent pairs. Each of the first phrase-intent pair includes a first phrase and a corresponding first intent,” Chaudhary et al., para [0006].), wherein the query intent is associated with an intent of the input query and the contextually aware query (Chaudhary et al., para [0036].).
Regarding claim 11, Chaudhary et al., as modified by Mondlock et al., discloses the computing system of claim 10, wherein the machine-learned language model comprises a generative language model pre-trained on a diverse variety of content and text to perform a plurality of different language processing tasks (“The LLM service 170 may be owned or operated by an LLM provider. The LLM service 170 may include an LLM model. An LLM is a type of artificial intelligence (AI) algorithm that uses deep learning techniques to perform a number of natural language processing (NLP) tasks, such as understanding, summarizing, generating, and/or predicting new content. LLMs generate output by predicting the next token or word in a sequence. LLMs are pre-trained with vast data sets,” Mondlock et al., para [0029].).
Regarding claim 13, Chaudhary et al., as modified by Mondlock et al., discloses the computing system of claim 10, wherein the query embedding cluster further comprises a plurality of different queries associated with one or more shared attributes, wherein the one or more shared attributes are associated with one or more query intents (Chaudhary et al., para [0036]. Here, the domain is synonymous with a topic.).
Claims 5, 6, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220277741, hereinafter referred to as Chaudhary et al., in view of US 20250131247, hereinafter referred to as Mondlock et al., and further in view of US 20230401238, hereinafter referred to as Khan et al.
Regarding claim 5, Chaudhary et al., as modified by Mondlock et al., discloses the computer-implemented method of claim 4, but not wherein determining a query embedding cluster associated with the query embedding comprises: mapping, by the computing system, the query-intent pair to an embedding space; and determining, by the computing system and based on at least one of the query embedding or an intent embedding, the query-intent pair is associated with a plurality of other embeddings associated with a node within a query graph.
Khan et al. is cited to disclose mapping, by the computing system, the query-intent pair to an embedding space (“In further configurations, a core intent category is selected for a category cluster, and when a search query is mapped to a category in the cluster, the core intent category is selected and used for categorizing the search query,” Khan et al., para [0050].); and
determining, by the computing system and based on at least one of the query embedding or an intent embedding, the query-intent pair is associated with a plurality of other embeddings associated with a node within a query graph (Khan et al., fig. 3.). Khan et al. benefits Chaudhary et al. by identifying the intent of a search query and returning the most relevant items as search results, which may include identifying a category for a search query and either filtering search results for items within that category or ranking search results based on the identified category (Khan et al., para [0001]-[0002]). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Chaudhary et al. with those of Khan et al. to enhance the user search experience of Chaudhary et al.
Regarding claim 6, Chaudhary et al., as modified by Mondlock et al., discloses the computer-implemented method of claim 1, but not wherein the query embedding cluster is associated with a learned intent graph, wherein the learned intent graph comprises: a plurality of nodes, wherein each node represents a cluster of queries with related query intents; and a plurality of edges, wherein the plurality of edges connects nodes with related node intents.
Khan et al. is cited to disclose wherein the query embedding cluster is associated with a learned intent graph, wherein the learned intent graph comprises: a plurality of nodes, wherein each node represents a cluster of queries with related query intents (Khan et al., figs. 2-3.); and
a plurality of edges, wherein the plurality of edges connects nodes with related node intents (Khan et al., figs. 2-3.). Khan et al. benefits Chaudhary et al. by identifying the intent of a search query and returning the most relevant items as search results, which may include identifying a category for a search query and either filtering search results for items within that category or ranking search results based on the identified category (Khan et al., para [0001]-[0002]). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Chaudhary et al. with those of Khan et al. to enhance the user search experience of Chaudhary et al.
Regarding claim 12, Chaudhary et al., as modified by Mondlock et al., discloses the computing system of claim 10, wherein the query embedding cluster is a cluster of embeddings associated with a plurality of different queries with a similar query intent to the multi-turn aware query (Chaudhary et al., para [0036].), but not wherein the query embedding cluster is associated with a node within a task graph, and wherein the task graph comprises a plurality of learned nodes associated with a plurality of different query tasks.
Khan et al. is cited to disclose wherein the query embedding cluster is associated with a node within a task graph, wherein the task graph comprises a plurality of learned nodes associated with a plurality of different query tasks (Khan et al., figs. 2 and 3. It is noted that the category nodes may represent a task.). Khan et al. benefits Chaudhary et al. by identifying the intent of a search query and returning the most relevant items as search results, which may include identifying a category for a search query and either filtering search results for items within that category or ranking search results based on the identified category (Khan et al., para [0001]-[0002]). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Chaudhary et al. with those of Khan et al. to enhance the user search experience of Chaudhary et al.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over US 20220277741, hereinafter referred to as Chaudhary et al., in view of US 20250131247, hereinafter referred to as Mondlock et al., and further in view of US 11100179, hereinafter referred to as Zhou et al.
Regarding claim 7, Chaudhary et al., as modified by Mondlock et al., discloses the computer-implemented method of claim 1, but not further comprising: determining, by the computing system, a plurality of media content items based on the query embedding cluster. Zhou et al. is cited to disclose determining, by the computing system, a plurality of media content items based on the query embedding cluster (Zhou et al., col. 21, line 55 – col. 22, line 32. It is noted that “posts” comprise media content.). Zhou et al. benefits Chaudhary et al. by extending the types of media that may be returned as search results to the user, thereby providing the user with more personalized search results. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Chaudhary et al. with those of Zhou et al. to extend the search results of Chaudhary et al. to additional media types.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220277741, hereinafter referred to as Chaudhary et al., in view of US 20250131247, hereinafter referred to as Mondlock et al., and further in view of US 20220405484, hereinafter referred to as Kanchibhotla et al.
Regarding claim 8, Chaudhary et al., as modified by Mondlock et al., discloses the computer-implemented method of claim 1, but not wherein the input query comprises multimodal data, wherein the multimodal data comprises two or more different types of data. Kanchibhotla et al. is cited to disclose wherein the input query comprises multimodal data, wherein the multimodal data comprises two or more different types of data (“Queries in a Multimodal conversation 410 are submitted through a multimodal user interface. The multimodal query input 411 can be a combination of one or more multi modes such as text, speech, image, gesture, touch, map, etc.,” Kanchibhotla et al., para [0092].). Kanchibhotla et al. benefits Chaudhary et al. by allowing a user to provide an input query comprising multimodal data. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Chaudhary et al. with those of Kanchibhotla et al. to extend the types of queries a user may provide to the intent recognition system of Chaudhary et al.
Regarding claim 9, Chaudhary et al., as modified by Mondlock et al. and Kanchibhotla et al., discloses the computer-implemented method of claim 8, wherein the multimodal data comprises image data and text data (Kanchibhotla et al., para [0092].), and wherein the contextually aware query is generated based on the image data, the text data, and the multi-turn query data (Kanchibhotla et al., para [0092]. And, “The inter-query context may be determined based on the coherence and intent among the sequence of previous queries in the conversation,” Kanchibhotla et al., para [0033]. The previous queries in the conversation represent multi-turn query data.).
Claims 14-15, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US 20240427808, hereinafter referred to as Gudla et al., in view of US 20250131247, hereinafter referred to as Mondlock et al.
Regarding claim 14 (Currently Amended), Gudla et al. discloses one or more non-transitory computer-readable media that collectively store instructions (“For example, the data store 240 may store the set of parameters for a trained machine-learning model on one or more non-transitory, computer-readable media. The data store 240 uses computer-readable media to store data, and may use databases to organize the stored data,” Gudla et al., para [0093].) that, when executed by a computing system, cause the computing system to perform operations, the operations comprising:
obtaining input data, wherein the input data comprises a query (“The query input module 250 receives queries (i.e., conversational inputs) from customers of the online concierge system 14,” Gudla et al., para [0095].);
processing the query with an embedding model to generate a query embedding (“For example, the content presentation module 210 may apply natural language processing (NLP) techniques to the text in the search query to generate a search query representation (e.g., an embedding) that represents characteristics of the search query,” Gudla et al., para [0079].);
processing the input data with a generative model to determine a query intent, wherein the query intent is descriptive of a type of information being requested (“In general, the LLM is utilized to parse a customer's intent from a customer's query and generate a search query with a constraint filter for item types specified in the query,” Gudla et al., para [0064].);
obtaining, based on the query intent, one or more second query embeddings associated with one or more second queries with one or more second query intents, wherein the one or more second queries are associated with a query embedding cluster, wherein the one or more second query intents are [[associated with]] similar to the query intent of the input data;
evaluating a loss function that evaluates a difference between the query embedding and the one or more second query embeddings associated with the query embedding cluster (“The machine-learning training module 230 may apply an iterative process to train a machine-learning model whereby the machine-learning training module 230 updates parameter values of the machine-learning model based on each of the set of training examples. The training examples may be processed together, individually, or in batches. To train a machine-learning model based on a training example, the machine-learning training module 230 applies the machine-learning model to the input data in the training example to generate an output based on a current set of parameter values. The machine-learning training module 230 scores the output from the machine-learning model using a loss function,” Gudla et al., para [0092]. “The online concierge system 140 may use machine-learning models to perform functionalities described herein. Example machine-learning models include regression models, support vector machines, naïve bayes, decision trees, k nearest neighbors, random forest, boosting algorithms, k-means, and hierarchical clustering,” Gudla et al., para [0089].); and
adjusting one or more parameters of the embedding model based at least in part on the loss function (“The machine-learning training module 230 may apply an iterative process to train a machine-learning model whereby the machine-learning training module 230 updates parameter values of the machine-learning model based on each of the set of training examples. The training examples may be processed together, individually, or in batches. To train a machine-learning model based on a training example, the machine-learning training module 230 applies the machine-learning model to the input data in the training example to generate an output based on a current set of parameter values. The machine-learning training module 230 scores the output from the machine-learning model using a loss function,” Gudla et al., para [0092].).
Gudla et al., though, does not explicitly disclose obtaining, based on the query intent, one or more second query embeddings associated with one or more second queries with one or more second query intents, wherein the one or more second query intents are associated with the query intent of the input data.
Mondlock et al. is cited to disclose obtaining, based on the query intent, one or more second query embeddings associated with one or more second queries with one or more second query intents, wherein the one or more second queries are associated with a query embedding cluster, wherein the one or more second query intents are [[associated with]] similar to the query intent of the input data (“In some aspects, the intent classification module 126 may include instructions for determining an intent of the query. The intent classification module 126 may receive the query or the query plus retrieved chat history and classify the query into one or more of a plurality of pre-defined intents. The intent classification module 126 may use semantic search to determine intent. The intent determination semantic search may include (1) generating an embedding of each pre-defined intent; (2) generating an embedding of the user query; and (3) comparing the user query embedding to the intent embeddings in embeddings 168 using clustering techniques, such as k-means clustering, to identify relevant intent,” Mondlock et al., para [0040]. This excerpt explains that multiple intents may be embedded and associated with the first query and prior chat history (i.e., additional queries). And, “The query module 128 may incorporate information from the chat history 142 obtained by the chat history module 124 into the user query. For example, a first user query may have asked “What is the current stock price of Acme Corp.?,” and a second user query may ask “What date was its initial public offering?” The query module 128 may replace “its” with “Acme Corp.” in the second user query,” Mondlock et al., para [0041]. This excerpt shows that the chat history may include a user query with an intent associated with the current query.), wherein the one or more second query intents are associated with the query intent of the input data (Mondlock et al., para [0040]-[0041].). Mondlock et al.
benefits Gudla et al. by applying chat history to improve the derivation of the user’s query intent, thereby, helping to answer the user query. Therefore, it would be obvious for one skilled in the art to combine the teachings of Gudla et al. with those of Mondlock et al. to improve the intent recognition of Gudla et al.
Regarding claim 15, Gudla et al., as modified by Mondlock et al., discloses the non-transitory computer-readable media of claim 14, wherein the query comprises a rewritten query (“In some aspects, the query module 128 may include instructions for generating an augmented user query from the rephrased user query,” Mondlock et al., para [0042].), wherein the rewritten query was generated by:
obtaining an input query and multi-turn query data, wherein the multi-turn query data is descriptive of previous inputs obtained before the input query, wherein the previous inputs and the input query are associated with a particular multi-turn session (Mondlock et al., para [0041]. This excerpt shows that the chat history may include a user query with an intent associated with the current query. It is also noted that the applicant’s specification describes multi-turn query data as including past inputs obtained prior to the query.); and
processing the input query and the multi-turn query data with a language model to generate the rewritten query (Mondlock et al., para [0042].).
Regarding claim 17, Gudla et al., as modified by Mondlock et al., discloses the non-transitory computer-readable media of claim 14, wherein the operations further comprise:
determining an intent embedding is associated with the query intent (“In some aspects, the intent classification module 126 may include instructions for determining an intent of the query. The intent classification module 126 may receive the query or the query plus retrieved chat history and classify the query into one or more of a plurality of pre-defined intents. The intent classification module 126 may use semantic search to determine intent. The intent determination semantic search may include (1) generating an embedding of each pre-defined intent; (2) generating an embedding of the user query; and (3) comparing the user query embedding to the intent embeddings in embeddings 168 using clustering techniques, such as k-means clustering, to identify relevant intent,” Mondlock et al., para [0040].); and
wherein the one or more second query embeddings are obtained based on the intent embedding (Mondlock et al., para [0040].).
Regarding claim 18, Gudla et al., as modified by Mondlock et al., discloses the non-transitory computer-readable media of claim 14, wherein the query intent and the one or more second query intents are associated with one or more particular topics (“The relevant information identification module 130 may perform topic modeling to identify the relevant documents, assets, and experts. The topic modeling may include (1) performing topic modeling on the user query to identify one or more topic keywords; and (2) searching the document collections 144, asset collections 146, expert collections 148, document collections 162, asset collections 164, and/or expert collections 166 with the topic keywords to identify relevant documents, assets, and/or experts,” Mondlock et al., para [0044].), and wherein the type of information comprises additional details associated with the one or more particular topics (Mondlock et al., para [0044]. Here, the topic keywords are a type of information providing additional details associated with a topic.).
Claim(s) 16, 19, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20240427808, hereinafter referred to as Gudla et al., in view of US 20250131247, hereinafter referred to as Mondlock et al., and further in view of US 20230401238, hereinafter referred to as Khan et al.
Regarding claim 16, Gudla et al., as modified by Mondlock et al., discloses the non-transitory computer-readable media of claim 14, but not wherein the one or more second query embeddings are obtained from a query cluster of a data graph associated with a plurality of query clusters. Khan et al. is cited to disclose wherein the one or more second query embeddings are obtained from a query cluster of a data graph associated with a plurality of query clusters (“FIG. 2 is a diagram showing an example of generating an augmented graph, which may be performed, for instance, by the category co-occurrence component 110, the initial embedding component 112, and the graph augmentation component 114 of FIG. 1. As shown in FIG. 2, a category taxonomy 202 is provided in which each node corresponds with a category (e.g., C1, C2, etc.) and each edge between nodes represents a hierarchical relationship between the categories corresponding with the nodes,” Khan et al., para [0042]. And, “As shown at block 402, category embeddings are generated for categories from a category taxonomy using hierarchical data from the category taxonomy and search information. In some configurations, co-occurring categories are determined from the search information. Additionally, initial embeddings are generated for the categories using the search information. The categories embeddings are generated based on hierarchical and co-occurring relationships between categories and the initial embeddings for the categories. In some aspects, this includes augmenting the category taxonomy with the co-occurring category relationships and initial embeddings to provide an augmented graph and generating the category embeddings using the augmented graph. Category clusters are formed using the category embeddings, as shown at block 404,” Khan et al., para [0052]-[0053].). Khan et al. benefits Gudla et al. 
by identifying the intent of a search query and returning the most relevant items as search results, which may include identifying a category for a search query and either filtering search results for items within that category or ranking search results based on the identified category (Khan et al., para [0001]-[0002]). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Gudla et al. with those of Khan et al. to improve the intent recognition of Gudla et al.
Regarding claim 19, Gudla et al., as modified by Mondlock et al., discloses the non-transitory computer-readable media of claim 14, but not wherein the operations further comprise: generating a remodeled data graph of query clusters based on the query embedding, the intent, and the one or more second query embeddings. Khan et al. is cited to disclose wherein the operations further comprise: generating a remodeled data graph of query clusters based on the query embedding, the intent, and the one or more second query embeddings (“The category taxonomy is augmented based on the co-occurring categories and category embeddings to form an augmented graph, as shown 506. In some aspects, each node in the augmented graph corresponds with a category and is associated with the category embedding for the corresponding category. Edges between nodes are based on both hierarchical data from the category taxonomy and relationships between co-occurring categories,” Khan et al., para [0058]. The augmented graph is a remodeled data graph.). Khan et al. benefits Gudla et al. by identifying the intent of a search query and returning the most relevant items as search results, which may include identifying a category for a search query and either filtering search results for items within that category or ranking search results based on the identified category (Khan et al., para [0001]-[0002]). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Gudla et al. with those of Khan et al. to improve the intent recognition of Gudla et al.
Regarding claim 20, Gudla et al., as modified by Mondlock et al. and Khan et al., discloses the non-transitory computer-readable media of claim 19, wherein the remodeled data graph of query clusters comprises one or more edges associated with tangential topics to the query intent (Khan et al., para [0058]. The co-occurring categories correspond to the tangential topics. See also Fig. 3.).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
/ANNE L THOMAS-HOMESCU/Primary Examiner, Art Unit 2659