DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 8/4/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-17, 19, 21-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With respect to claim 1, the limitations directed towards determining that a second query is a subquery or a sibling of the first query, training a machine-learning classifier by adjusting parameter weights of the machine-learning classifier based on the first query and the second query being known queries and based on the first specificity score for the first query, and generating, using the machine-learning classifier, a third specificity score for a third query, recite a process that, under its broadest reasonable interpretation, covers performance of these limitations in the mind and certain methods of organizing human activity but for the recitation of generic computer components. That is, other than reciting a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query, nothing in the claim precludes these steps from practically being performed in the mind and/or by a human with pen and paper.
For example, but for the limitations stating a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query, the mention of determining that a second query is a subquery or a sibling of the first query, training a machine-learning classifier by adjusting parameter weights of the machine-learning classifier based on the first query and the second query being known queries and based on the first specificity score for the first query, and generating, using the machine-learning classifier, a third specificity score for a third query, in the context of this claim, encompasses mentally determining scores for a series of queries and refining queries based on training data, wherein the claimed transforming and determining steps appear to include mental determination steps. Because these limitations, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of generic computer components, they fall within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
The judicial exception is not integrated into a practical application by additional elements. In particular, a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query is recited at a high level of generality (i.e., as a generic computer performing a generic computer function of search) such that it amounts to no more than mere instructions to apply the exception.
The additional elements of a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query are considered by the examiner to be mere data gathering such that they amount to no more than insignificant extra-solution activity. These elements do not integrate the abstract idea into a practical application because they do not impose a meaningful limit on the judicial exception and merely confine the claim to a particular technological environment or field of use for data gathering in conjunction with the abstract idea.
This claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query are recited at a high level of generality to apply the exception using generic components.
The additional elements of a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query are interpreted to be well-understood, routine, and conventional activity (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec; see MPEP 2106.05(d)). Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
To further elaborate, the additional limitations of a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query do not impose a meaningful limit on the judicial exception and merely confine the claim to a particular technological environment or field of use. Claim 1 is not patent eligible.
Claims 11 and 21 are similarly rejected because they are similar in scope.
With respect to claims 2 and 12, the limitations are directed towards the first specificity score being based on a specificity count of unique items purchased based on the first query and a co-purchase probability for the first query. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can determine that the first specificity score is based on a specificity count of unique items purchased based on the first query and a co-purchase probability for the first query. Therefore, claims 2 and 12 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.
With respect to claims 4 and 14, the limitations are directed towards setting the second specificity score for the second query comprising: setting, based on determining that the second query is the subquery of the first query, the second specificity score for the second query to represent a lower specificity than the first specificity score for the first query. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can set, based on determining that the second query is the subquery of the first query, the second specificity score for the second query to represent a lower specificity than the first specificity score for the first query. Therefore, claims 4 and 14 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.
With respect to claims 5 and 15, the limitations are directed towards setting the second specificity score for the second query comprising: setting, based on determining that the second query is the sibling of the first query, the second specificity score for the second query and the first specificity score for the first query to be equivalent to a maximum specificity of queries that are siblings to the first query and the second query. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can set, based on determining that the second query is the sibling of the first query, the second specificity score for the second query and the first specificity score for the first query to be equivalent to a maximum specificity of queries that are siblings to the first query and the second query. Therefore, claims 5 and 15 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.
With respect to claims 6 and 16, the limitations are directed towards determining that the second query is the subquery or the sibling of the first query comprising: excluding attributes of product type, product type descriptor, brand, product line, or miscellaneous in determining that the second query is the sibling of the first query. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can determine that the second query is the subquery or the sibling of the first query by excluding attributes of product type, product type descriptor, brand, product line, or miscellaneous in determining that the second query is the sibling of the first query. Therefore, claims 6 and 16 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.
With respect to claims 7 and 17, the limitations are directed towards setting the second specificity score for the second query comprising: applying token-level comparison attributes across identical attributes of the first query and the second query. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can set the second specificity score for the second query by applying token-level comparison attributes across identical attributes of the first query and the second query. Therefore, claims 7 and 17 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.
With respect to claim 8, the limitations are directed towards the machine-learning classifier being a binary classifier. These elements are interpreted to be just data (e.g., contents) and do not meet any of the categories. Therefore, claim 8 does not recite additional limitations which tie the abstract idea into a practical application and does not amount to significantly more than the identified judicial exception.
With respect to claims 9 and 19, the limitations are directed towards the operations further comprising: determining whether the third specificity score for the third query meets a predetermined threshold. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can determine whether the third specificity score for the third query meets a predetermined threshold. Therefore, claims 9 and 19 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.
With respect to claim 10, the limitations are directed towards providing the results comprising: when the third specificity score for the third query meets the predetermined threshold, displaying out-of-stock items in response to a search using the third query. These additional elements are interpreted to merely confine the claim to a particular technological environment. Therefore, claim 10 does not recite additional limitations which tie the abstract idea into a practical application and does not amount to significantly more than the identified judicial exception.
With respect to claim 22, the limitations are directed towards the operations further comprising: determining whether the third specificity score for the third query meets a predetermined threshold; and, when the third specificity score for the third query meets the predetermined threshold, displaying out-of-stock items in response to a search using the third query. These additional elements are interpreted to merely confine the claim to a particular technological environment. Therefore, claim 22 does not recite additional limitations which tie the abstract idea into a practical application and does not amount to significantly more than the identified judicial exception.
With respect to claim 23, the limitations are directed towards determining that the second query is equivalent to the first query, the subquery of the first query, or the sibling of the first query comprising: determining that the second query is the subquery of the first query. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can determine that the second query is equivalent to the first query, the subquery of the first query, or the sibling of the first query by determining that the second query is the subquery of the first query. Therefore, claim 23 does not recite additional limitations which tie the abstract idea into a practical application and does not amount to significantly more than the identified judicial exception.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4, 5, 11, 14, 15, 21, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, and further in view of Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai.
As to claim 1:
Fusco discloses:
A system comprising: one or more processors; and one or more non-transitory computer-readable media storing computing instructions that, when executed on the one or more processors, cause the one or more processors to perform operations comprising:
receiving a first query [Column 6 Lines 43-46 teaches that document-centric knowledge graphs may often use word clouds comprised of noun phrases (or multi-word expressions) to aid in searches (e.g., for literature) and/or complex queries.]
transforming, using a sequence transformer model, the first query into a first vector of query embeddings [Column 1 Lines 65-67 and Column 2 Lines 1-4 teach determining, for each of the candidate multi-word expressions, a distance between an embedding vector corresponding to the identified text snippet and an embedding vector corresponding to the candidate multi-word expression, and selecting remaining expressions from the candidate multi-word expressions using a function of a specificity value. Column 6 Lines 43-46 teaches that document-centric knowledge graphs may often use word clouds comprised of noun phrases (or multi-word expressions) to aid in searches (e.g., for literature) and/or complex queries. Column 11 Lines 34-38 teach a multi-word vector 416 is created from the recognized, 406, multi-word “machine learning” and multiplied (using the vector dot product) by the topic vector 404 of the text snippet 402, giving a distance of, e.g., 0.7. Note: Using a machine learning model to generate vector embeddings for a first of a plurality of queries reads on the claims.]
transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query [Column 2 Lines 57-64 teach the specificity score value may be determined by using a pre-trained static embedding matrix or by using context-dependent embeddings originating from a transformer-based system. For this, a pre-trained bidirectional encoder representations from transformers (BERT) model or any other transformer-based language model may be used. Column 6 Lines 43-46 teaches that document-centric knowledge graphs may often use word clouds comprised of noun phrases (or multi-word expressions) to aid in searches (e.g., for literature) and/or complex queries. Column 10 Lines 1-3 teaches the method 100 comprises determining (106) a specificity score value for each of the candidate multi-word expressions. Note: The examiner interprets the candidate multi-word expressions to include the claimed first query and second query, wherein a specificity score is determined from the vector embedding for each query.]
determining that a second query is a subquery or a sibling of the first query [Column 6 Lines 43-46 teaches that document-centric knowledge graphs may often use word clouds comprised of noun phrases (or multi-word expressions) to aid in searches (e.g., for literature) and/or complex queries. Column 7 Lines 31-36 teach the term ‘text snippet’ may denote a sequence of words or expressions of the text document. A typical text snippet may be a sentence. However, in particular in the case of very long sentences, sub-sentences may also be denoted and used as text snippets, similar to headlines or other shortened statements.]
setting a second specificity score for the second query based on the first specificity score and based on determining that the second query is the subquery or the sibling of the first query [Column 2 Lines 57-64 teach the specificity score value may be determined by using a pre-trained static embedding matrix or by using context-dependent embeddings originating from a transformer-based system. For this, a pre-trained bidirectional encoder representations from transformers (BERT) model or any other transformer-based language model may be used. Column 6 Lines 43-46 teaches that document-centric knowledge graphs may often use word clouds comprised of noun phrases (or multi-word expressions) to aid in searches (e.g., for literature) and/or complex queries. Column 10 Lines 1-3 teaches the method 100 comprises determining (106) a specificity score value for each of the candidate multi-word expressions. Note: The examiner interprets the candidate multi-word expressions to include the claimed first query and second query, wherein a specificity score is determined from the vector embedding for each query, including the claimed second query.]
generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface [Column 2 Lines 57-64 teach the specificity score value may be determined by using a pre-trained static embedding matrix or by using context-dependent embeddings originating from a transformer-based system. For this, a pre-trained bidirectional encoder representations from transformers (BERT) model or any other transformer-based language model may be used. Column 6 Lines 43-46 teaches that document-centric knowledge graphs may often use word clouds comprised of noun phrases (or multi-word expressions) to aid in searches (e.g., for literature) and/or complex queries. Column 10 Lines 1-3 teaches the method 100 comprises determining (106) a specificity score value for each of the candidate multi-word expressions. Note: The examiner interprets the candidate multi-word expressions to include the claimed first query, second query, and third query, wherein a specificity score is determined from the vector embedding for each query, including the claimed third query.]
Fusco discloses some of the limitations as set forth in claim 1 but does not appear to expressly disclose training a machine-learning classifier by adjusting parameter weights of the machine-learning classifier based on the first query and the second query being known queries, based on the first specificity score for the first query, based on the second query, and based on the second specificity score for the second query; and providing, for display in the user interface and based on the third specificity score, results using the third query.
Biadsy discloses:
training a machine-learning classifier by adjusting parameter weights of the machine-learning classifier based on the first query and the second query being known queries, based on the first specificity score for the first query, based on the second query, and based on the second specificity score for the second query [Paragraph 0029 teaches training each of the domain-specific model components includes adjusting the weights of the at least one of the domain-specific model components that corresponds to the first non-linguistic context based on the generated score, while not adjusting the weights of the baseline language model. Paragraph 0084 teaches using a search query history, or a browsing history for the user 102, to determine other scores. For example, in some implementations, user scores may indicate whether the user 102 has previously submitted a search query. Paragraph 0140 teaches weights may be determined for each of the other applications or classes of applications that the model is trained to use in predicting likelihoods. Weights may be determined for each of the other features selected for the model. Paragraph 0146 teaches the corpus 710a may include a set of queries submitted during a certain time range. Note: Adjusting weights associated with training a machine-learning model, wherein the model is trained using a corpus set of queries, reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Fusco by incorporating the adjusting of weights associated with training a machine-learning model, wherein the model is trained using a corpus set of queries, as taught by Biadsy (see Paragraphs 0029, 0084, 0140, and 0146), because the two publications are directed to query processing, and incorporating the adjusting of weights associated with training a machine-learning model, wherein the model is trained using a corpus set of queries, improves processing accuracy (see Biadsy Paragraph 0051).
Fusco and Biadsy disclose some of the limitations as set forth in claim 1 but do not appear to expressly disclose providing, for display in the user interface and based on the third specificity score, results using the third query.
Gabbai discloses:
and providing, for display in the user interface and based on the third specificity score, results using the third query [Paragraph 0048 teaches the visually guided search refinement may result in the presentation of first, second, third, fourth, and fifth search refinement options 210a, 210b, 210c, 210d, and 210e, referred to herein collectively as the search refinement options 210. Paragraph 0049 teaches that, when displaying search results, the on-line marketplace may direct the web browser or application to present the search refinement options 210. FIG. 2B illustrates an example of a display illustrating search results. Paragraph 0069 teaches continually providing search refinement options and obtaining indications of approvals and/or disapprovals of a user of the provided search refinement options until a specificity of a second search query, which is based on the approvals and/or disapprovals of the provided search refinement options, is above the search threshold.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Fusco and Biadsy by incorporating the display of search results associated with a level of specificity for a third query, as taught by Gabbai (see Paragraphs 0048, 0049, and 0069), because the three publications are directed to query processing, and incorporating the display of search results associated with a level of specificity for a third query provides adaptive search refinement to assist a user in identifying material of interest (see Gabbai Paragraph 0011).
Claims 11 and 21 are similarly rejected because they are similar in scope.
Claim(s) 4 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, and further in view of Vanderberg (U.S. Publication No.: US 20180032574 A1) hereinafter Vanderberg.
As to claim 4:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1.
Gabbai also discloses:
The system of claim 1, wherein setting the second specificity score for the second query further comprises: setting the second specificity score for the second query to represent a lower specificity than the first specificity score for the first query [Paragraph 0012 teaches obtaining a first search query and comparing a level of specificity of the first search query to a search threshold. When the level of specificity is below the search threshold, the method may include providing visually guided search refinement to construct a second search query. Paragraph 0048 teaches first, second, third, fourth, and fifth search refinement options 210a, 210b, 210c, 210d, and 210e, referred to herein collectively as the search refinement options 210. Paragraph 0067 teaches the level of specificity of the first search query may be determined based on a number of words in the first search query. More words in the first search query may indicate that the first search query includes a higher level of specificity. Note: A second query that has fewer words than a first query, resulting in the second query having a lower specificity than the first query, which has a higher specificity due to having a higher number of words, reads on the claims. For example, a first query could be "Curved LCD television of greater than 60 inches by Sony or Samsung or LG or Sharp but not Phillips"; in the context of the cited portion of Gabbai, the specificity score could be 19 since it has 19 words. A second query that is a smaller subquery (see Vanderberg) could be "Curved LCD television of greater than 60 inches by Sony or Samsung or LG or Sharp"; in the context of the cited portion of Gabbai, the specificity score would be set to (setting) a lower value since it has fewer words.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco and Biadsy, by incorporating a second query that has fewer words than a first query, resulting in the second query having a lower specificity than a first query that has a higher specificity due to having a higher number of words, as taught by Gabbai (see Paragraphs 0012, 0048, and 0067), because the three publications are directed to query processing; incorporating a second query that has fewer words than a first query, resulting in the second query having a lower specificity, provides adaptive search refinement to assist a user in identifying material of interest (see Gabbai Paragraph 0011).
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1 and some of claim 4 but do not appear to expressly disclose determining that the second query is a subquery of the first query.
Vanderberg discloses:
determining that the second query is a subquery of the first query [Paragraph 0036 teaches that when queries are received through a user interface (UI), rather than processing the queries as-is, the queries are dynamically processed to break down each query into a sequence of smaller “chunked” queries.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, and Gabbai, by incorporating dynamically processing queries to break down each query into a sequence of smaller “chunked” queries, as taught by Vanderberg (see Paragraph 0036), because the four publications are directed to query processing; incorporating dynamically processing queries to break down each query into a sequence of smaller “chunked” queries produces improved results (see Vanderberg Paragraph 0040).
The examiner further notes that Gabbai teaches that the specificity of a query is determined based on its word count, and Vanderberg teaches dividing a query into subqueries. By combining Gabbai and Vanderberg, the second query (e.g., a subquery) of the first query will have a lower word count than the first query; therefore, the specificity of the second query (the subquery) is inherently lower than that of the first query.
Claim 14 is rejected under the same rationale because it is similar in scope to claim 4.
Claim(s) 5 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, in view of Lee et al. (U.S. Publication No.: US 20200117760 A1) hereinafter Lee, and further in view of Xu et al. (U.S. Publication No.: US 20130159318 A1) hereinafter Xu.
As to claim 5:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1.
Gabbai also discloses:
The system of claim 1, wherein setting the second specificity score for the second query comprises: setting the second specificity score for the second query and the first specificity score for the first query to be equivalent to a specificity of queries that are siblings to the first query and the second query [Paragraph 0012 teaches for each iteration of providing the multiple search refinement options, at least some of the multiple search refinement options are different. Paragraph 0048 teaches first, second, third, fourth, and fifth search refinement options 210a, 210b, 210c, 210d, and 210e, referred to herein collectively as the search refinement options 210. Paragraph 0067 teaches the level of specificity of the first search query may be determined based on a number of words in the first search query. More words in the first search query may indicate that the first search query includes a higher level of specificity. Note: Determining queries (a third or fourth query) that have the same word count as a first query and a second query (siblings to the first query and the second query), and determining (setting) the specificity for the first and second queries to the same maximum specificity as the third and fourth queries, reads on the claims.
For example, a first query could be "Curved LCD television of greater than 60 inches by Sony or Samsung or LG or Sharp but not Phillips" – in the context of the cited portion of Gabbai, the specificity score could be 19 since it has 19 words; a second query that is a sibling based on a determined same number of words (see Lee and Xu) could be "Curved LCD television of greater than 60 inches by Sony or Samsung or LG or Sharp but not Panasonic" – with a specificity score of 19 since it has 19 words; a third query that is a sibling based on a determined same number of words (see Lee and Xu) could be "Curved LCD television of greater than 60 inches by Sony or Samsung or LG or Sharp but not Asus" – with a specificity score of 19 since it has 19 words; and a fourth query that is a sibling based on a determined same number of words (see Lee and Xu) could be "Curved LCD television of greater than 60 inches by Sony or Samsung or LG or Sharp but not Lenovo" – with a specificity score of 19 since it has 19 words. These example queries represent determining (setting) that the specificity of a first and second query is equivalent to that of other sibling queries that have the same number of words and therefore the same level of specificity.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco and Biadsy, by incorporating a second query that has fewer words than a first query, resulting in the second query having a lower specificity than a first query that has a higher specificity due to having a higher number of words, as taught by Gabbai (see Paragraphs 0012, 0048, and 0067), because the three publications are directed to query processing; incorporating a second query that has fewer words than a first query, resulting in the second query having a lower specificity, provides adaptive search refinement to assist a user in identifying material of interest (see Gabbai Paragraph 0011).
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1 and some of claim 5 but do not appear to expressly disclose a maximum and determining that the second query is a sibling of the first query.
Lee discloses:
a maximum [Paragraph 0077 teaches responsive to a determination that the number of words of the query is higher than a maximum number of words (e.g., 10 words, 15 words, etc.), the query and/or the representation of the query may be discarded and/or may not be stored in the search history profile.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, and Gabbai, by incorporating storing queries that do not surpass a maximum number of words, as taught by Lee (see Paragraph 0077), because the four publications are directed to query processing; incorporating storing queries that do not surpass a maximum number of words provides improved functionality of a computer-implemented search engine (see Lee Paragraph 0132).
Fusco, Biadsy, Gabbai, and Lee disclose all of the limitations as set forth in claim 1 and some of claim 5 but do not appear to expressly disclose determining that the second query is a sibling of the first query.
Xu discloses:
determining that the second query is a sibling of the first query [Paragraphs 0061-0062 teach that heuristics that may be used to identify query pairs may include one or more of the following: two queries have the same number of words.];
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Fusco, Biadsy, Gabbai, and Lee, by incorporating identifying query pairs based on queries having the same number of words, as taught by Xu (see Paragraphs 0061-0062), because the five publications are directed to query processing; incorporating identifying query pairs based on queries having the same number of words provides improved efficiency (see Xu Paragraph 0035).
The examiner further notes that Gabbai teaches that the specificity of a query is determined based on its word count, Xu teaches identifying queries that have the same number of words, and Lee teaches a maximum number of words in stored queries. By combining Gabbai, Lee, and Xu, stored third or fourth queries (siblings to the first query and the second query) could have the same maximum number of words as the first and second search queries; therefore, the specificity of the first and second queries (siblings) is inherently the same as that of any other query with the same maximum number of words.
Claim 15 is rejected under the same rationale because it is similar in scope to claim 5.
Claim(s) 2 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, in view of Winters et al. (U.S. Publication No.: US 20150058108 A1) hereinafter Winters, and in view of Nipko et al. (U.S. Patent No.: US 8775231 B1) hereinafter Nipko.
As to claim 2:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein generating the first specificity score for the first query further comprises: determining a specificity count of unique items purchased based on the first query, determining a co-purchase probability for the first query, and generating the first specificity score for the first query based on the specificity count for the first query and the co-purchase probability for the first query.
Winters discloses:
The system of claim 1, wherein generating the first specificity score is based on a specificity count of unique items purchased based on the first query [Paragraph 0322 teaches the transaction records (301) are aggregated to generate aggregated measurements (e.g., variable values (321)) that are not specific to a particular transaction, such as frequencies of purchases made with different merchants or different groups of merchants, the amounts spent with different merchants or different groups of merchants, and the number of unique purchases across different merchants or different groups of merchants, etc. Note: Aggregating (a first query) data that includes a count of unique purchases reads on the claims.];
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Fusco, Biadsy and Gabbai, by incorporating aggregating (a first query) data that includes a count of unique purchases, as taught by Winters (see Paragraph 0322), because the four publications are directed to query processing; incorporating aggregating (a first query) data that includes a count of unique purchases improves the capabilities of the aggregated measurements in indicating certain aspects of the spending behavior of the customers (see Winters Paragraph 0364).
Fusco, Biadsy, Gabbai, and Winters disclose all of the limitations as set forth in claim 1 and some of claim 2 but do not appear to expressly disclose determining a co-purchase probability for the first query, and generating the first specificity score for the first query based on the specificity count for the first query and the co-purchase probability for the first query.
Nipko discloses:
a co-purchase probability for the first query [Column 8 Lines 34-38 teach that the affinity relationship among products, which provides the probability of products being purchased together, for the product groups 930 is calculated using a technique such as affinity analysis.];
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, Gabbai, and Winters, by incorporating the probability of products being purchased together, as taught by Nipko (see Column 8 Lines 34-38), because the five publications are directed to query processing; incorporating the probability of products being purchased together provides efficient identification of reliable purchase pattern profiles through scientific analysis of customer data (see Nipko Abstract).
Claim 12 is rejected under the same rationale because it is similar in scope to claim 2.
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, and further in view of Fujiki et al. (U.S. Patent No.: US 10691702 B1) hereinafter Fujiki.
As to claim 13:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein propagating the first specificity score for the first query to generate the second specificity score for the second query further comprises: determining that the second query is equivalent to the first query and setting the second specificity score for the second query to be equivalent to the first specificity score for the first query.
Fujiki discloses:
The system of claim 11, wherein setting the second specificity score for the second query comprises setting the second specificity score for the second query to be equivalent to the first specificity score for the first query [Column 17 Lines 40-48 teach server 320 may assign the same score to a particular category for different queries that include different quantities of terms associated with the particular category. For example, assume that a first query includes the terms “film” and “movie,” while a second query includes the term “movie.” In such an implementation, server 320 may assign a score, such as 1, 100, etc., to the particular category for the first query, while assigning the same score to the particular category for the second query. Note: Based on determining the queries are equivalent based on a particular category, assigning the same score to the category of the query.]
based on determining that the second query is equivalent to the first query [Column 17 Lines 40-48 teach server 320 may assign the same score to a particular category for different queries that include different quantities of terms associated with the particular category. For example, assume that a first query includes the terms “film” and “movie,” while a second query includes the term “movie.” In such an implementation, server 320 may assign a score, such as 1, 100, etc., to the particular category for the first query, while assigning the same score to the particular category for the second query. Note: Determining that a first query and second query are equivalent based on a particular category of the query reads on the claims.];
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Fusco, Biadsy and Gabbai, by incorporating determining a first query and second query are equivalent based on a particular category and assigning the same score to the category that is included in the query, as taught by Fujiki (see Column 17 Lines 40-48), because the four publications are directed to query processing; incorporating determining a first query and second query are equivalent based on a particular category and assigning the same score to the category that is included in the query efficiently provides relevant information to a user (see Fujiki Column 6 Line 56).
Claim(s) 6 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, in view of Lee et al. (U.S. Publication No.: US 20200117760 A1) hereinafter Lee, in view of Xu et al. (U.S. Publication No.: US 20130159318 A1) hereinafter Xu, and further in view of Eberlein et al. (European Patent Application No.: EP 2682877 A1) hereinafter Eberlein.
As to claim 6:
Fusco, Biadsy, Gabbai, Lee, and Xu disclose all of the limitations as set forth in claims 1 and 5.
Xu also discloses:
The system of claim 5, wherein determining that the second query is the sibling of the first query further comprises [Paragraphs 0061-0062 teach that heuristics that may be used to identify query pairs may include one or more of the following: two queries have the same number of words.];
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, Gabbai, and Lee, by incorporating identifying query pairs based on queries having the same number of words, as taught by Xu (see Paragraphs 0061-0062), because the five publications are directed to query processing; incorporating identifying query pairs based on queries having the same number of words provides improved efficiency (see Xu Paragraph 0035).
Fusco, Biadsy, Gabbai, Lee, and Xu disclose all of the limitations as set forth in claims 1 and 5, and some of claim 6, but do not appear to expressly disclose wherein determining that the second query is the sibling of the first query further comprises: excluding product line, attributes of product type, product type descriptor, brand, or miscellaneous in determining that the second query is the sibling of the first query.
Eberlein discloses:
excluding attributes of product type, product type descriptor, brand, product line, or miscellaneous in determining that the second query is the sibling of the first query [Paragraph 0037 teaches with respect to the product sales data example of FIG 2, mobile analytics engine 110 may dynamically modify query 230 by excluding the "Product" attribute, which has an aggregation grade.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, Gabbai, Lee, and Xu, by incorporating excluding the "Product" attribute, as taught by Eberlein (see Paragraph 0037), because the six publications are directed to query processing; incorporating excluding the "Product" attribute provides prioritizing the transfer of desirable attributes (see Eberlein Paragraph 0024).
Claim 16 is rejected under the same rationale because it is similar in scope to claim 6.
Claim(s) 7 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, and further in view of Maheshwari et al. (U.S. Publication No.: US 20220197900 A1) hereinafter Maheshwari.
As to claim 7:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein propagating the first specificity score for the first query to generate the second specificity score for the second query further comprises: applying token-level comparison attributes across identical attributes of the first query and the second query.
Maheshwari discloses:
The system of claim 1, wherein setting the second specificity score for the second query further comprises:
applying token-level comparison attributes across identical attributes of the first query and the second query [Paragraph 0030 teaches one or more of the similar queries may have identical tokens. Paragraph 0094 teaches each query token may be determined based on structural and/or relationship attributes between different query expressions, as previously described. Note: Determining that a first and second query have identical tokens, wherein tokens are associated with attributes based on a comparison, reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, and Gabbai, by incorporating determining that a first and second query have identical tokens, wherein tokens are associated with attributes based on a comparison, as taught by Maheshwari (see Paragraphs 0030 and 0094), because the four publications are directed to query processing; incorporating determining that a first and second query have identical tokens, wherein tokens are associated with attributes based on a comparison, provides optimized query performance (see Maheshwari Paragraph 0002).
Claim 17 is rejected under the same rationale because it is similar in scope to claim 7.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, and further in view of Botros (U.S. Patent No.: US 8498986 B1) hereinafter Botros.
As to claim 8:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein the machine-learning classifier is a binary classifier.
Botros discloses:
The system of claim 1, wherein the machine-learning classifier is a binary classifier [Column 4 Lines 3-4 teach that the adaptive learning machine 102, as an SVM, may be a classifier that provides a binary output.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, and Gabbai, by incorporating an adaptive learning machine that is a classifier with a binary output, as taught by Botros (see Column 4 Lines 3-4), because the four publications are directed to query processing; incorporating an adaptive learning machine that is a classifier with a binary output provides an advantage in classifying data (see Botros Column 2 Lines 47-65).
Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, and further in view of Laban et al. (U.S. Publication No.: US 20230419048 A1) hereinafter Laban.
As to claim 9:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1.
Gabbai also discloses:
The system of claim 1, wherein the operations further comprise: a third specificity score for the third query [Paragraph 0048 teaches the visually guided search refinement may result in the presentation of first, second, third, fourth, and fifth search refinement options 210a, 210b, 210c, 210d, and 210e, referred to herein collectively as the search refinement options 210. Paragraph 0049 teaches displaying search results, the on-line marketplace may direct the web browser or application to present the search refinement options 210. FIG. 2B illustrates an example of a display illustrating search results. Paragraph 0069 teaches continually provide search refinement options and obtain indications of approvals and/or disapprovals of a user of the provided search refinement options until a specificity of a second search query, which is based on the approval and/or disapprovals of the provided search refinement options, is above the search threshold.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco and Biadsy, by incorporating displaying search results associated with a level of specificity for a third query, as taught by Gabbai (see Paragraphs 0048, 0049, and 0069), because the three publications are directed to query processing; incorporating displaying search results associated with a level of specificity for a third query provides adaptive search refinement to assist a user in identifying material of interest (see Gabbai Paragraph 0011).
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein the operations further comprise: determining whether the third specificity score for the third query meets a predetermined threshold.
Laban discloses:
determining whether the specificity score for the query meets a predetermined threshold [Paragraph 0058 teaches answer consolidation submodule 133 may compute the specificity score of the candidate question and compare the specificity score with a threshold value. If the specificity score is less than or equal to the threshold value, the answer consolidation model may determine the candidate question to be a vague question.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, and Gabbai, by incorporating determining whether a specificity score is less than or equal to a threshold value, as taught by Laban (see Paragraph 0058), because the four publications are directed to query processing; incorporating determining whether a specificity score is less than or equal to a threshold value improves the user experience (see Laban Paragraph 0025).
Claim 19 is rejected under the same rationale because it is similar in scope to claim 9.
Claim(s) 10 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fusco et al. (U.S. Patent No.: 11361571 B1) hereinafter Fusco, in view of Biadsy et al. (U.S. Publication No.: US 20180053502 A1) hereinafter Biadsy, Gabbai et al. (U.S. Publication No.: US 20170011136 A1) hereinafter Gabbai, in view of Laban et al. (U.S. Publication No.: US 20230419048 A1) hereinafter Laban, and further in view of Zhang (U.S. Publication No.: US 20210027485 A1) hereinafter Zhang.
As to claim 10:
Fusco, Biadsy, Gabbai, and Laban disclose all of the limitations as set forth in claims 1 and 9.
Gabbai also discloses:
The system of claim 9, wherein the operations further comprise a third specificity score and a third query [Paragraph 0048 teaches the visually guided search refinement may result in the presentation of first, second, third, fourth, and fifth search refinement options 210a, 210b, 210c, 210d, and 210e, referred to herein collectively as the search refinement options 210. Paragraph 0049 teaches displaying search results, the on-line marketplace may direct the web browser or application to present the search refinement options 210. FIG. 2B illustrates an example of a display illustrating search results. Paragraph 0069 teaches continually provide search refinement options and obtain indications of approvals and/or disapprovals of a user of the provided search refinement options until a specificity of a second search query, which is based on the approval and/or disapprovals of the provided search refinement options, is above the search threshold.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco and Biadsy, by incorporating displaying search results associated with a level of specificity for a third query, as taught by Gabbai (see Paragraphs 0048, 0049, and 0069), because the three publications are directed to query processing; incorporating displaying search results associated with a level of specificity for a third query provides adaptive search refinement to assist a user in identifying material of interest (see Gabbai Paragraph 0011).
Fusco, Biadsy, Gabbai, and Laban disclose all of the limitations as set forth in claims 1 and 9, and some of claim 10, but do not appear to expressly disclose, when the score for the query meets the predetermined threshold, displaying out-of-stock items in response to a search using the query.
Zhang discloses:
when the score for the query meets the predetermined threshold, displaying out-of-stock items in response to a search using the query [Paragraph 0047 teaches applying the post-processing rules removes, from the list of the identified objects, (i) objects associated with confidence scores that are below a threshold. Paragraph 0124 teaches the neural network used to detect objects and predict their status is trained to detect different breads, pastries, and other bakery items and the display areas that contain them. The annotations for the image 400 include a bounding box for each distinct display region identified by the model, along with a classification of the display region as in-stock, low-stock, or out-of-stock, accompanied by a confidence score for the prediction. Note: Displaying out-of-stock data associated with an item in response to a score that meets a threshold requirement of being below a threshold reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco, Biadsy, Gabbai, and Laban, by incorporating displaying out-of-stock data associated with an item in response to a score that meets a threshold requirement of being below a threshold, as taught by Zhang (see Paragraphs 0047 and 0124), because the five publications are directed to query processing; incorporating displaying out-of-stock data associated with an item in response to a score that meets a threshold requirement of being below a threshold improves the efficiency of operations (see Zhang Paragraph 0068).
Claim 22 is rejected under the same rationale because it is similar in scope to claim 10.
As to claim 23:
Fusco, Biadsy, and Gabbai disclose all of the limitations as set forth in claim 21.
Gabbai also discloses:
The non-transitory computer-readable medium of claim 21, wherein determining that the second query is equivalent to the first query, the subquery of the first query, or the sibling of the first query comprises: determining that the second query is the subquery of the first query [Paragraph 0048 teaches the visually guided search refinement may result in the presentation of first, second, third, fourth, and fifth search refinement options 210a, 210b, 210c, 210d, and 210e, referred to herein collectively as the search refinement options 210.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Fusco and Biadsy, by incorporating refining a query based on a first query, wherein refined queries are siblings or subqueries of the first query, as taught by Gabbai (see Paragraph 0048), because the three publications are directed to query processing; incorporating refining a query based on a first query, wherein refined queries are siblings or subqueries of the first query, provides adaptive search refinement to assist a user in identifying material of interest (see Gabbai Paragraph 0011).
Response to Arguments
Applicant presents the following arguments in August 4, 2025 remarks pages 11-12:
“…For at least the reasons indicated by the Examiner during the interview and without acquiescing in the rejection, the amended independent claims, and the claims that depend thereon, are patent-eligible under 35 U.S.C. § 101...”
Examiner respectfully presents the following response to Applicant’s remarks:
Applicant’s arguments have been fully considered but they are not persuasive. Regarding independent claim 1, but for the limitations stating a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query, the mention of determining that a second query is a subquery or a sibling of the first query, determining that the second query is the subquery or the sibling of the first query, training a machine-learning classifier by adjusting parameter weights of the machine-learning classifier based on the first query and the second query being known queries based on the first specificity score for the first query, generating, using the machine-learning classifier, a third specificity score for a third query, in the context of this claim, encompasses mentally determining scores for a series of queries and refining queries based on training data, wherein the claimed transforming and deriving steps appear to include mental determination steps. Because the claim, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components, it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the examiner maintains claim 1 recites an abstract idea.
The judicial exception is not integrated into a practical application by additional elements. In particular, a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query is recited at a high level of generality (i.e., as a generic computer performing a generic computer function of search) such that it amounts to no more than mere instructions to apply the exception.
A system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query is considered by the examiner to be mere data gathering, such that it amounts to no more than insignificant extra-solution activity. The examiner maintains these elements do not integrate the abstract idea into a practical application because they do not impose a meaningful limit on the judicial exception and merely confine the claim to a particular technological environment or field of use for data gathering in conjunction with the abstract idea.
This claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements, a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query, are recited at a high level of generality to apply the exception using generic components.
The additional elements, a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query, are interpreted to be well-understood, routine, and conventional activity (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec (see MPEP 2106.05(d))). Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
To further elaborate, the additional limitations of a system comprising: a processor, and a non-transitory computer-readable medium storing computing instructions that, when executed on the processor, cause the processor to perform operations comprising, receiving a first query, transforming, using a sequence transformer model, the first query into a first vector of query embeddings, transforming, using a machine-learning classifier, the first vector of query embeddings into a first specificity score for the first query, setting a second specificity score for the second query based on the first specificity score, generating, using the machine-learning classifier, a third specificity score for a third query received via a user interface, and providing, for display in the user interface and based on the third specificity score, results using the third query do not impose a meaningful limit on the judicial exception and merely confine the claim to a particular technological environment or field of use. The examiner maintains claim 1 is not patent eligible.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EARL ELIAS whose telephone number is (571)272-9762. The examiner can normally be reached Monday - Friday (IFP).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EARL LEVI ELIAS/Examiner, Art Unit 2169
/SHERIEF BADAWI/Supervisory Patent Examiner, Art Unit 2169