Prosecution Insights
Last updated: April 19, 2026
Application No. 17/731,309

Query Classification with Sparse Soft Labels

Non-Final OA — §103, §112

Filed: Apr 28, 2022
Examiner: BHUYAN, MOHAMMAD SOLAIMAN
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Vui Inc.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% — above average (137 granted / 164 resolved; +28.5% vs TC avg)
Interview Lift: +22.8% — strong (allow rate among resolved cases with an interview vs. without)
Avg Prosecution: 2y 5m (17 currently pending)
Total Applications: 181 (career, across all art units)

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 164 resolved cases.
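For reference, the headline rates in the panels above are plain ratios over the examiner's resolved cases, and only the +28.5% delta versus the Tech Center is reported, so the TC average itself is implied rather than stated. A minimal sketch of how these figures relate (the variable names are ours, not the analytics provider's):

```python
def pct(part: int, whole: int) -> float:
    """Share of part in whole, as a percentage."""
    return 100.0 * part / whole

# Counts taken from this page: 137 granted of 164 resolved cases.
career_allow = pct(137, 164)    # 83.5..., displayed as 84%

# The page reports +28.5% vs the TC average, so the implied TC
# average allow rate is roughly 55%.
tc_avg = career_allow - 28.5
```

The interview-lift figure (+22.8%) is the same kind of difference, computed between the allow rates of the with-interview and without-interview subsets; the underlying subset counts are not shown on the page.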

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

2. Claims 1, 9-11, 13 and 19-20 are objected to because of the following informalities:

- In claim 1, lines 10-11, “the determined weights” should read “the determined label weights”.
- In claim 9, line 4, “the tokenized contextual representations” should read “the tokenized contextualized representations”.
- In claim 10, line 1, “the tokenized contextual representations” should read “the tokenized contextualized representations”.
- In claim 11, line 3, “the catalog of items” should read “the catalogue of items”.
- In claim 13, line 15, “the determined weights” should read “the determined label weights”.
- In claim 19, line 3, “the catalog of items” should read “the catalogue of items”.
- In claim 20, lines 12-13, “the determined weights” should read “the determined label weights”.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

3. Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claim 17 recites the limitation "the sparsity constraint" in line 1. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ahmadvand et al. (US 2021/0110208 A1), hereinafter Ahmadvand, in view of Lavergne (US 10,515,125 B1), hereinafter Lavergne.

As to claim 1, Ahmadvand discloses a method comprising:

receiving data characterizing a plurality of search queries including user provided natural language representations of the plurality of search queries of an item catalogue and first labels associated with the plurality of search queries (Fig. 2-3, Para. 25: the computing system 101 receives a search query dataset. The search query dataset may include a plurality of search queries, i.e., receiving data characterizing a plurality of search queries, each search query including a respective string of characters and a series of one or more words. The search query dataset may include, for each search query, a set of associated labels, i.e., first labels associated with the plurality of search queries, including respective associated category and user intent labels. The labels may be associated with the search queries in the search query dataset. Para. 19: “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.”);

determining, using the received data, label weights characterizing a frequency of occurrence of the first labels within the received data (Para. 22: “the search engine 112 may be configured to assign multiple labels to an input search query. A label vector made up of multiple labels for a given search query may be processed and then used to configure separate classification networks. In this respect, the search engine 112 is configured to classify a user intent, one or more product categories, and/or other information desired by the user in the search query.” Para. 27: “the computing system 101 may define a first set of candidate labels and a second set of candidate labels. The first set of candidate labels may be labels for a product category.” Para. 52: “The training application 115 quantifies the degree of interest for a product or product category in a search session by calculating the click rate and/or time spent actively viewing a webpage(s). If this exceeds a threshold amount, the training application 115 labels the search query with the product categories associated with the session.” Para. 32: “The compatibility matrix may include relationships between word representations in the search query dataset with their associated labels in the candidate label space. The compatibility matrix may represent the relative spatial information among consecutive words with their associated labels. For example, the compatibility matrix captures the co-occurrence of words such that it indicates instances where a particular order or proximity of words appear at a relatively high frequency.” Thus, determining, using the received data, label weights characterizing a frequency of occurrence of the first labels within the received data.); and

training a classifier using the plurality of search queries, the second labels, and the determined weights, the classifier trained to predict, from an input search query, a prediction weight and at least one prediction label associated with the prediction weight (Para. 10: “The multiple potential labels may be concatenated (e.g., by concatenating two or more matrices), processed, and input into a bifurcated classification layer to train a plurality of classifiers. After configuration, the search engine may classify an intent of the user search query and one or more product categories targeted by the search query. The present disclosure also includes methods and systems for generating training data to train the classifiers.” Para. 17: “The training application 115 may be used to generate training data. For example, the training application 115 may ingest unlabeled data, apply labels, and generated labeled data for training one or more classifiers in a search engine 112.” Para. 19: “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.” Thus, training a classifier using the plurality of search queries, the second labels, and the determined weights, the classifier trained to predict, from an input search query, a prediction weight and at least one prediction label associated with the prediction weight.).

Ahmadvand does not explicitly disclose determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query.

However, in the same field of endeavor, Lavergne discloses determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query (Col. 2 lines 56-62: “The system uses the set of classification labels as search indexes to retrieve text segments that have been assigned to the classification labels. By using the classification labels as search indexes to obtain relevant data, the system reduces, for example, the number of database access queries needed to be run to obtain information responsive to the received search query.” Col. 3 lines 9-18: the system assigns a set of classification labels to each text segment, which then imparts a certain classification to each text segment the system can use to make inferences relating to the associations and relationships of text segments based on the assignment of classification labels.
Such inferences can be made without requiring any additional information, so that the system can reduce the overall amount of data required (i.e., reduce a total number of allowed labels) to, for example, use machine learning to identify text segments that a user is likely to find interesting based on a set of text segments he/she has previously interacted with. Thus, determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query.).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ahmadvand by using the assigned classification labels, such as the second set of candidate labels of Ahmadvand, as search indexes to obtain relevant data in order to reduce the number of database access queries needed to be run to obtain information responsive to the received search query, as suggested by Lavergne (Col. 2 lines 56-62). The system initially determines, in response to receiving the user query, a set of classification labels that are relevant and/or responsive to the received query. One of ordinary skill in the art would have been motivated to make this modification in order to reduce and/or eliminate the necessity to use otherwise computationally-intensive processing techniques to identify and retrieve textual information that is responsive or relevant to a received voice query, as suggested by Lavergne (Col. 2 lines 62-67 and Col. 3 lines 1-3).

As to claim 13, Ahmadvand discloses a system comprising: at least one data processor; and memory coupled to the at least one data processor and storing instructions which, when executed by the at least one data processor (Fig. 4, Para. 57), cause the at least one data processor to perform operations comprising:

receiving data characterizing a plurality of search queries including user provided natural language representations of the plurality of search queries of an item catalogue and first labels associated with the plurality of search queries (Fig. 2-3, Para. 25: the computing system 101 receives a search query dataset. The search query dataset may include a plurality of search queries, i.e., receiving data characterizing a plurality of search queries, each search query including a respective string of characters and a series of one or more words. The search query dataset may include, for each search query, a set of associated labels, i.e., first labels associated with the plurality of search queries, including respective associated category and user intent labels. The labels may be associated with the search queries in the search query dataset. Para. 19: “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.”);

determining, using the received data, label weights characterizing a frequency of occurrence of the first labels within the received data (Para. 22: “the search engine 112 may be configured to assign multiple labels to an input search query. A label vector made up of multiple labels for a given search query may be processed and then used to configure separate classification networks. In this respect, the search engine 112 is configured to classify a user intent, one or more product categories, and/or other information desired by the user in the search query.” Para. 27: “the computing system 101 may define a first set of candidate labels and a second set of candidate labels. The first set of candidate labels may be labels for a product category.” Para. 52: “The training application 115 quantifies the degree of interest for a product or product category in a search session by calculating the click rate and/or time spent actively viewing a webpage(s). If this exceeds a threshold amount, the training application 115 labels the search query with the product categories associated with the session.” Para. 32: “The compatibility matrix may include relationships between word representations in the search query dataset with their associated labels in the candidate label space. The compatibility matrix may represent the relative spatial information among consecutive words with their associated labels. For example, the compatibility matrix captures the co-occurrence of words such that it indicates instances where a particular order or proximity of words appear at a relatively high frequency.” Thus, determining, using the received data, label weights characterizing a frequency of occurrence of the first labels within the received data.); and

training a classifier using the plurality of search queries, the second labels, and the determined weights, the classifier trained to predict, from an input search query, a prediction weight and at least one prediction label associated with the prediction weight (Para. 10: “The multiple potential labels may be concatenated (e.g., by concatenating two or more matrices), processed, and input into a bifurcated classification layer to train a plurality of classifiers. After configuration, the search engine may classify an intent of the user search query and one or more product categories targeted by the search query. The present disclosure also includes methods and systems for generating training data to train the classifiers.” Para. 17: “The training application 115 may be used to generate training data. For example, the training application 115 may ingest unlabeled data, apply labels, and generated labeled data for training one or more classifiers in a search engine 112.” Para. 19: “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.” Thus, training a classifier using the plurality of search queries, the second labels, and the determined weights, the classifier trained to predict, from an input search query, a prediction weight and at least one prediction label associated with the prediction weight.).

Ahmadvand does not explicitly disclose determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query.

However, in the same field of endeavor, Lavergne discloses determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query (Col. 2 lines 56-62: “The system uses the set of classification labels as search indexes to retrieve text segments that have been assigned to the classification labels. By using the classification labels as search indexes to obtain relevant data, the system reduces, for example, the number of database access queries needed to be run to obtain information responsive to the received search query.” Col. 3 lines 9-18: the system assigns a set of classification labels to each text segment, which then imparts a certain classification to each text segment the system can use to make inferences relating to the associations and relationships of text segments based on the assignment of classification labels.
Such inferences can be made without requiring any additional information, so that the system can reduce the overall amount of data required (i.e., reduce a total number of allowed labels) to, for example, use machine learning to identify text segments that a user is likely to find interesting based on a set of text segments he/she has previously interacted with. Thus, determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query.).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ahmadvand by using the assigned classification labels, such as the second set of candidate labels of Ahmadvand, as search indexes to obtain relevant data in order to reduce the number of database access queries needed to be run to obtain information responsive to the received search query, as suggested by Lavergne (Col. 2 lines 56-62). The system initially determines, in response to receiving the user query, a set of classification labels that are relevant and/or responsive to the received query. One of ordinary skill in the art would have been motivated to make this modification in order to reduce and/or eliminate the necessity to use otherwise computationally-intensive processing techniques to identify and retrieve textual information that is responsive or relevant to a received voice query, as suggested by Lavergne (Col. 2 lines 62-67 and Col. 3 lines 1-3).

As to claim 20, Ahmadvand discloses a non-transitory computer readable medium storing instructions which, when executed by at least one data processor forming part of at least one computing system (Fig. 4, Para. 57), cause the at least one data processor to perform operations comprising:

receiving data characterizing a plurality of search queries including user provided natural language representations of the plurality of search queries of an item catalogue and first labels associated with the plurality of search queries (Fig. 2-3, Para. 25: the computing system 101 receives a search query dataset. The search query dataset may include a plurality of search queries, i.e., receiving data characterizing a plurality of search queries, each search query including a respective string of characters and a series of one or more words. The search query dataset may include, for each search query, a set of associated labels, i.e., first labels associated with the plurality of search queries, including respective associated category and user intent labels. The labels may be associated with the search queries in the search query dataset. Para. 19: “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.”);

determining, using the received data, label weights characterizing a frequency of occurrence of the first labels within the received data (Para. 22: “the search engine 112 may be configured to assign multiple labels to an input search query. A label vector made up of multiple labels for a given search query may be processed and then used to configure separate classification networks. In this respect, the search engine 112 is configured to classify a user intent, one or more product categories, and/or other information desired by the user in the search query.” Para. 27: “the computing system 101 may define a first set of candidate labels and a second set of candidate labels. The first set of candidate labels may be labels for a product category.” Para. 52: “The training application 115 quantifies the degree of interest for a product or product category in a search session by calculating the click rate and/or time spent actively viewing a webpage(s). If this exceeds a threshold amount, the training application 115 labels the search query with the product categories associated with the session.” Para. 32: “The compatibility matrix may include relationships between word representations in the search query dataset with their associated labels in the candidate label space. The compatibility matrix may represent the relative spatial information among consecutive words with their associated labels. For example, the compatibility matrix captures the co-occurrence of words such that it indicates instances where a particular order or proximity of words appear at a relatively high frequency.” Thus, determining, using the received data, label weights characterizing a frequency of occurrence of the first labels within the received data.); and

training a classifier using the plurality of search queries, the second labels, and the determined weights, the classifier trained to predict, from an input search query, a prediction weight and at least one prediction label associated with the prediction weight (Para. 10: “The multiple potential labels may be concatenated (e.g., by concatenating two or more matrices), processed, and input into a bifurcated classification layer to train a plurality of classifiers. After configuration, the search engine may classify an intent of the user search query and one or more product categories targeted by the search query. The present disclosure also includes methods and systems for generating training data to train the classifiers.” Para. 17: “The training application 115 may be used to generate training data. For example, the training application 115 may ingest unlabeled data, apply labels, and generated labeled data for training one or more classifiers in a search engine 112.” Para. 19: “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.” Thus, training a classifier using the plurality of search queries, the second labels, and the determined weights, the classifier trained to predict, from an input search query, a prediction weight and at least one prediction label associated with the prediction weight.).

Ahmadvand does not explicitly disclose determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query.

However, in the same field of endeavor, Lavergne discloses determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query (Col. 2 lines 56-62: “The system uses the set of classification labels as search indexes to retrieve text segments that have been assigned to the classification labels. By using the classification labels as search indexes to obtain relevant data, the system reduces, for example, the number of database access queries needed to be run to obtain information responsive to the received search query.” Col. 3 lines 9-18: the system assigns a set of classification labels to each text segment, which then imparts a certain classification to each text segment the system can use to make inferences relating to the associations and relationships of text segments based on the assignment of classification labels.
Such inferences can be made without requiring any additional information, so that the system can reduce the overall amount of data required (i.e., reduce a total number of allowed labels) to, for example, use machine learning to identify text segments that a user is likely to find interesting based on a set of text segments he/she has previously interacted with. Thus, determining second labels, the determining including removing or changing the first labels from the received data to reduce a total number of allowed labels for at least one search query.).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ahmadvand by using the assigned classification labels, such as the second set of candidate labels of Ahmadvand, as search indexes to obtain relevant data in order to reduce the number of database access queries needed to be run to obtain information responsive to the received search query, as suggested by Lavergne (Col. 2 lines 56-62). The system initially determines, in response to receiving the user query, a set of classification labels that are relevant and/or responsive to the received query. One of ordinary skill in the art would have been motivated to make this modification in order to reduce and/or eliminate the necessity to use otherwise computationally-intensive processing techniques to identify and retrieve textual information that is responsive or relevant to a received voice query, as suggested by Lavergne (Col. 2 lines 62-67 and Col. 3 lines 1-3).

As to claims 2 and 14, the claims are rejected for the same reasons as claims 1 and 13 above. In addition, Lavergne discloses wherein the determining the second labels includes determining a probability distribution of the second labels, and wherein training the classifier includes using the probability distribution (Col. 11 lines 35-44: “Semantic scores 206B include scores representing different statistical metrics that are computed for the text segment 201. The statistical metrics can include summary statistics such as the number of characters that are included in the text segment 201, the number of words that are included in the text segment 201, and the number of sentences in the text segment 201. Additionally, the statistical metrics can also include analytical statistics such as, for example, a linguistic complexity score representing a determined linguistic complexity for the text segment 201.” Col. 12 lines 12-18: “The relevancy determiner 230 uses the metadata 202B extracted by the document processor 210 and the text analysis data 206 generated by the text processor 210 to assign classification labels 208 to the text segment 201. The relevancy determiner 230 includes a classifier 230 that is trained to classify the text segment 201 based on a set of attributes.” Col. 13 lines 3-6: “a classifier 230 can determine that the text segment 201 should be assigned to a certain classification label if the number common attributes between the certain classification label and the text segment 201 exceeds a threshold number.” Thus, the determining the second labels includes determining a probability distribution of the second labels, and wherein training the classifier includes using the probability distribution.).

As to claims 3 and 15, the claims are rejected for the same reasons as claims 1 and 13 above. In addition, Lavergne discloses wherein the item catalogue categorizes items by a hierarchical taxonomy, wherein the first labels are categories included in the item catalogue and wherein the first labels are determined based on user behavior associated with the plurality of search queries (Fig. 3, Col. 21 lines 16-23: “the classification labels that are assigned to the text segment are specified within a hierarchal classification structure. For example, as depicted in FIG. 3, the hierarchal classification structure 300 includes classification labels 304, 306 and 316, which have values assigned to one hierarchal level, and classification hierarchies 308, 312, and 314 that have values assigned to multiple hierarchal levels.” Col. 13 lines 15-22: “the training data can include user-submitted classification data that identifies text segments that have been manually classified with classification labels. Additionally, the training data user to perform classification can be periodically updated such that the classification techniques applied by the classifier 232 reflect changing patterns of, for example, online user behavior, topics that are presently of interest to users.” Thus, the item catalogue categorizes items by a hierarchical taxonomy, wherein the first labels are categories included in the item catalogue and wherein the first labels are determined based on user behavior associated with the plurality of search queries.).

As to claims 4 and 16, the claims are rejected for the same reasons as claims 3 and 15 above. In addition, Lavergne discloses further comprising pruning the categories in the item catalogue to limit the number of allowed labels, the pruning based on a count of the labels occurring within the received data (Col. 2 lines 56-62: “The system uses the set of classification labels as search indexes to retrieve text segments that have been assigned to the classification labels. By using the classification labels as search indexes to obtain relevant data, the system reduces, for example, the number of database access queries needed to be run to obtain information responsive to the received search query.” Col. 3 lines 9-18: the system assigns a set of classification labels to each text segment, which then imparts a certain classification to each text segment the system can use to make inferences relating to the associations and relationships of text segments based on the assignment of classification labels. Such inferences can be made without requiring any additional information, so that the system can reduce the overall amount of data required (i.e., pruning the categories) to, for example, use machine learning to identify text segments that a user is likely to find interesting based on a set of text segments he/she has previously interacted with. Col. 10 lines 54-63: “The author data 204B identifies individual attributes that can similarly be used by the processor 220 and/or the relevancy determiner 230 to make predictive inferences on the attributes associated with the text segment 201. For example, the "POPULARITY DATA" includes a set of metrics that represents a social media presence of the author "JOHN DOE." The set of metrics includes a number of social media posts that the author has recently made, the number of followers that the author has, and a total number of interactions that involve the author.” Thus, pruning the categories in the item catalogue to limit the number of allowed labels, the pruning based on a count of the labels occurring within the received data.).

As to claim 5, the claim is rejected for the same reasons as claim 1 above. In addition, Lavergne discloses wherein determining the second labels includes applying a sparsity constraint to the first labels (Col. 11 lines 35-44: “Semantic scores 206B include scores representing different statistical metrics that are computed for the text segment 201. The statistical metrics can include summary statistics such as the number of characters that are included in the text segment 201, the number of words that are included in the text segment 201, and the number of sentences in the text segment 201. Additionally, the statistical metrics can also include analytical statistics such as, for example, a linguistic complexity score representing a determined linguistic complexity for the text segment 201.” Thus, determining the second labels includes applying a sparsity constraint to the first labels.).
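Taken together, the limitations mapped above describe a concrete data-preparation pipeline: weight the first labels by their frequency of occurrence (claim 1), prune the label set under a sparsity constraint based on a count metric (claims 4-6 and 16-17), and renormalize the surviving labels into a probability distribution (claims 2 and 14). A minimal sketch of such a pipeline; the function names, example data, and the count threshold are our own illustrative assumptions, not taken from the application or the cited references:

```python
from collections import Counter

def second_labels(first_labels, min_count=2):
    """Weight labels by frequency, prune rare ones (a simple sparsity
    constraint), then renormalize each query's survivors into a
    probability distribution."""
    # Label weights: frequency of occurrence across the received data.
    counts = Counter(l for labels in first_labels for l in labels)
    total = sum(counts.values())
    weights = {l: n / total for l, n in counts.items()}

    # Sparsity constraint: drop labels whose count falls below the metric.
    allowed = {l for l, n in counts.items() if n >= min_count}

    # Second labels: per-query soft-label distributions over the survivors.
    result = []
    for labels in first_labels:
        kept = [l for l in labels if l in allowed]
        result.append({l: 1 / len(kept) for l in kept} if kept else {})
    return weights, result

first = [["shoes", "sale"], ["shoes", "apparel"], ["apparel"]]
weights, second = second_labels(first)
# "sale" occurs only once, so it is pruned; each remaining
# per-query distribution sums to 1.
```

The uniform 1/len(kept) renormalization is one simple choice; a frequency-proportional renormalization would fit the claimed "label weights" equally well.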
As to claims 6 and 17, the claims are rejected for the same reasons as claims 5 and 16 above. In addition, Lavergne discloses wherein applying the sparsity constraint to the first labels includes computing a metric and removing or changing labels within the first labels that satisfy the metric (Col. 11 lines 35-44, “Semantic scores 206B include scores representing different statistical metrics that are computed for the text segment 201. The statistical metrics can include summary statistics such as the number of characters that are included in the text segment 201, the number of words that are included in the text segment 201, and the number of sentences in the text segment 201. Additionally, the statistical metrics can also include analytical statistics such as, for example, a linguistic complexity score representing a determined linguistic complexity for the text segment 201.”. Col. 3 lines 9-18, the system assigns a set of classification labels to each text segment, which then imparts a certain classification to each text segment the system can use to make inferences relating to the associations and relationships of text segments based on the assignment of classification labels. Such inferences can be made without requiring any additional information so that the system can reduce the overall amount of data required to, i.e., removing or changing labels, for example, use machine learning to identify text segments that a user is likely to find interesting based on a set of text segments he/she has previously interacted with.).

As to claims 7 and 18, the claims are rejected for the same reasons as claims 5 and 16 above. In addition, Lavergne discloses wherein the second labels are represented as a sparse array (Col. 2 lines 56-62, “The system uses the set of classification labels as search indexes to retrieve text segments that have been assigned to the classification labels.
By using the classification labels as search indexes to obtain relevant data, the system reduces, for example, the number of database access queries needed to be run to obtain information responsive to the received search query.”. Col. 17 lines 39-49, “the user may filter the collection of quotes according to communicator using the interface 610. The interface 610 may organize the list of communicators in alphabetical order and after receiving a selection of a particular communicator on the interface 610, filter the collection of quotes by using the particular communicator as a search index. In the second example, the user may filter the collection of quotes according to topics using the interface 620. The interface 620 may organize the topics using the classification labels submitted by users associated with the system.”. Thus, the second labels are represented as a sparse array.).

As to claim 8, the claim is rejected for the same reasons as claim 1 above. In addition, Ahmadvand discloses further comprising splitting the received data into at least a training set, a development set, and a test set (Para. 10, “The present disclosure improves a search engine by using training data, query labeling, joint learning, multitask learning, and classifiers to provide search results that enable a user to better navigate an e-commerce website or other electronic interface with a search engine.”. Para. 19, “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.”.).

As to claim 9, the claim is rejected for the same reasons as claim 1 above.
In addition, Ahmadvand discloses wherein training the classifier includes determining, using a natural language model, contextualized representations for words in the natural language representation, tokenizing the contextualized representations, and wherein the training the classifier is performed using the tokenized contextual representations (Para. 19, “Training data 127 includes data that has been labeled for purposes of training a classifier. The training data 127 may include, for example, paired user queries and a defined user intent associated with each query, and/or paired user queries and one or more product categories in which the user intended to obtain search results.”. Para. 45, “the classification layer of the search engine 112 may comprise separate neural networks to perform separate classifications. A first neural network may be a product category network while a second neural network may be an intent modelling network. The classification networks may be trained over a plurality of generations, using one or more of the search queries in the search query dataset and the associated product classification and user intent labels as positive and negative examples for the networks.”. Thus, training the classifier includes determining, using a natural language model, contextualized representations for words in the natural language representation, tokenizing the contextualized representations, and wherein the training the classifier is performed using the tokenized contextual representations.).

As to claim 10, the claim is rejected for the same reasons as claim 9 above. In addition, Ahmadvand discloses wherein the tokenized contextual representations are input to a multilayer feed forward neural network with a nonlinear function in between at least two layers of the multilayer feed forward neural network (Para. 17, “The search engine 112 may comprise a classification layer that implements a neural network to generate search results.
The training application 115 may be used to generate training data. For example, the training application 115 may ingest unlabeled data, apply labels, and generated labeled data for training one or more classifiers in a search engine 112.”. Para. 45, “the classification layer of the search engine 112 may comprise separate neural networks to perform separate classifications. A first neural network may be a product category network while a second neural network may be an intent modelling network. The classification networks may be trained over a plurality of generations, using one or more of the search queries in the search query dataset and the associated product classification and user intent labels as positive and negative examples for the networks.”. Thus, the tokenized contextual representations are input to a multilayer feed forward neural network with a nonlinear function in between at least two layers of the multilayer feed forward neural network.).

As to claims 11 and 19, the claims are rejected for the same reasons as claims 1 and 13 above. In addition, Ahmadvand discloses further comprising: receiving an input query characterizing a user provided natural language representation of an input search query of the catalog of items; determining, using the trained classifier, a second prediction weight, and a second prediction label; executing the input query on the item catalogue and using the second prediction weight and the second prediction label; and providing results of the input query execution (Fig. 2, Para. 17, “The search engine 112 may be a module that receives search queries and generates search results. The search engine 112 works in conjunction with the e-commerce platform 109 to serve one or more links to webpages to allow the user to navigate a website managed by the e-commerce platform 109. The search engine 112 may comprise a classification layer that implements a neural network to generate search results.
The training application 115 may be used to generate training data. For example, the training application 115 may ingest unlabeled data, apply labels, and generated labeled data for training one or more classifiers in a search engine 112.”. Para. 22, “The present disclosure is directed to classifying search queries to generate multiple labels for improved search results. To briefly summarize, the search engine 112 may be configured to assign multiple labels to an input search query. A label vector made up of multiple labels for a given search query may be processed and then used to configure separate classification networks. In this respect, the search engine 112 is configured to classify a user intent, one or more product categories, and/or other information desired by the user in the search query.”.).

As to claim 12, the claim is rejected for the same reasons as claim 1 above. In addition, Lavergne discloses wherein the training further includes determining a cost of error measured based on a distance between labels within a hierarchical taxonomy (Col. 2 lines 62-67-Col. 3 lines 1-3, “classification labels can be arranged within a hierarchy and/or associated with certain predetermined attributes such that the retrieval of text segments using assigned classification labels as search indexes reduces and/or eliminates the necessity to use otherwise computationally-intensive processing techniques, e.g., semantic analysis, NLP, etc. to identify and retrieve textual information that is responsive or relevant to a received voice query.”. Col. 21 lines 16-25, “the classification labels that are assigned to the text segment are specified within a hierarchal classification structure. For example, as depicted in FIG. 3, the hierarchal classification structure 300 includes classification labels 304, 306 and 316, which have values assigned to one hierarchal level, and classification hierarchies 308, 312, and 314 that have values assigned to multiple hierarchal levels.”.
Thus, the training further includes determining a cost of error measured based on a distance between labels within a hierarchical taxonomy.).

Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Engebretsen (US 8,560,539 B1) teaches query classification. Pedro et al. (US 2009/0024615 A1) teaches creating and searching medical ontologies. KUSHNIR (US 2014/0317034 A1) teaches data classification.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD SOLAIMAN BHUYAN whose telephone number is (571)272-7843. The examiner can normally be reached on Monday - Friday 9:00am-5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Beausoliel can be reached on 571-272-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD S BHUYAN/
Examiner, Art Unit 2167

/ROBERT W BEAUSOLIEL JR/
Supervisory Patent Examiner, Art Unit 2167

Prosecution Timeline

Apr 28, 2022
Application Filed
Dec 09, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530370
INCREASING FAULT TOLERANCE IN A MULTI-NODE REPLICATION SYSTEM
2y 5m to grant Granted Jan 20, 2026
Patent 12517883
DATABASE INDEXING IN PERFORMANCE MEASUREMENT SYSTEMS
2y 5m to grant Granted Jan 06, 2026
Patent 12499136
METHOD FOR UPDATING A DATABASE OF A GEOLOCATION SERVER
2y 5m to grant Granted Dec 16, 2025
Patent 12493613
METHOD AND APPARATUS FOR PROVIDING A SHARED DATABASE CONNECTION IN A BATCH ENVIRONMENT
2y 5m to grant Granted Dec 09, 2025
Patent 12493589
Efficient Construction and Querying Progress of a Concurrent Migration
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+22.8%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 164 resolved cases by this examiner. Grant probability derived from career allow rate.
