DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Claims
Claims 2-21 remain pending and stand rejected.
Claim 1 has been cancelled.
Response to Arguments
Applicant’s arguments filed on 11/21/2025 with respect to the rejection under 35 U.S.C. 101 of claims directed to a judicial exception have been fully considered, but are not persuasive for at least the following reasons:
Notably, on pages 10-11 of the Applicant’s Remarks, arguments are made that the claims do not recite a judicial exception because the claims involve a chain of distinct steps that each operate on specific data inputs to produce specific data outputs. Comparisons are drawn to Example 39 of the Subject Matter Eligibility Examples, in which the training of the neural network was found not to be a judicial exception, and similarly, the present claims recite converting embeddings, using a machine learning model, and applying a series of encoder blocks. On pages 12-13, the Applicant argues that the claims integrate any judicial exception into a practical application, and that the claims as a whole recite a specific, technical solution to a technical problem, such as the inability to use existing complex machine learning models in “time constrained systems that require an output from the machine learning system more quickly than feasible by traditional machine learning techniques”, and recite a technical architecture that solves this latency problem. It is argued that the claimed solution bypasses the time-consuming step of generating an embedding for the current user input in real time by searching for a stored record matching the current input instead of generating an embedding for the current input.
Examiner respectfully disagrees. The claims merely recite applying various machine learning techniques to the abstract idea of using historical queries to predict a current session. The claims merely perform calculations on input information to output information within the abstract idea, and do not improve any computer functionality. The comparisons to Example 39 are also inapposite. The claims in Example 39 were not found eligible merely because they did not recite a specific mathematical formula or because they recited training a neural network; rather, Example 39 was found eligible because the claims were not directed to any abstract idea. The analysis merely noted that the limitations may be based on some mathematical concepts, but the claims were not directed to those concepts. The claims recited applying transformations to digital images, not to mere information, by altering the images through mirroring, rotating, smoothing, or contrasting to create the training set. As such, the claims were not directed to any mathematical concepts or certain methods of organizing human activity. Unlike Example 39, the present claims are directed to processing past query information to find matching information for a present query in order to score and output predicted next search terms, which is an abstract idea in the category of mental processes. The machine learning elements are also recited very generally in the claims, merely in passing to perform calculations on the information, without any specificity as to how these elements function. Even further, the specification does not provide any meaningful disclosure of how these elements function. For example, the only disclosure of the encoder blocks occurs in paragraph [0048], which does not provide any description or explanation of the encoder blocks, but only recites that the model can be trained using encoder blocks, without any further description.
While the specification describes a problem of “time constrained systems that require an output from the machine learning system more quickly than feasible by traditional machine learning techniques”, the claims do not recite any improvement to any technical field. The claims merely preprocess information for quicker retrieval, and do not change or improve how machine learning models themselves function. The processing of information merely happens at a different time such that it is not performed at the time of the current query. The claims are directed to matching received search terms with past data in order to generate a candidate set of next actions, scoring them, and selecting the predicted search term. The computer’s ability to process data and the functioning of the machine learning models are unchanged; any improvement in processing time lies within the abstract idea itself, is merely consequential to an improved abstract idea, and would not carry over to any other application.
In view of the above, the rejection under 35 U.S.C. 101 is maintained below.
Applicant’s arguments filed on 11/21/2025 with respect to the rejection under 35 U.S.C. 103 have been fully considered, but are moot in light of new grounds of rejection. Applicant’s amendments necessitated new grounds of rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 2-21 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception without significantly more.
Step 2A (Prong 1):
Taking claim 9 as representative, the claim sets forth the following limitations reciting the abstract idea of predicting search terms based on past search terms:
obtaining sequences of search terms received from clients over a specified period of time, wherein each search term among the sequences of search terms comprises alphanumeric text;
converting the alphanumeric text of each given sequence of search terms, among the sequences of search terms, into a corresponding embedding of the given sequence of search terms that differs from the alphanumeric text of the given sequence of search terms;
storing each given sequence of search terms in association with the corresponding embedding of the given sequence of search terms;
after the obtaining, converting and storing:
receiving, from a client, a search term input to a search, wherein the search term input includes a most recently input search term;
matching the search term input to a matching sequence of search terms among the sequences of search terms;
determining that the corresponding embedding previously (i) created by the converting of the matching sequence of terms and (ii) stored in association with the matching sequence of search terms is an inferred embedding of the search term input received from the client based on the matching of the search term input to the matching sequence of search terms, and independent of converting the search term input into a corresponding search term input embedding;
selecting a set of candidate next actions based on the matching of the search term input to the matching sequence of search terms;
generating a list of scores for the set of candidate next actions;
selecting at least one predicted next search term predicted to follow the most recently input search term based on the inferred embedding and the list of scores;
outputting, to the client, the at least one predicted next search term to the client as a recommended completion to the search term input.
The recited limitations above set forth the abstract idea of predicting search terms based on past search terms. These limitations amount to mental processes, including observation, evaluation, and judgment. The claims recite receiving past search terms and matching a current search with the past search terms to predict search terms that will be entered (see specification [0011], disclosing search suggestion systems using a historical-context-aware approach with the ability to use current user input, and that systems requiring lower latency and/or fewer computing resources can benefit; that is, the claims are not directed to the capability of the computer in the lower-latency and/or lower-resource system), which is a mental process of observing and evaluating.
Such concepts have been identified by the courts as abstract ideas (see: MPEP 2106.04(a)(2)).
Step 2A (Prong 2):
Examiner acknowledges that representative claim 9 recites additional limitations in the claims, such as:
one or more computers and one or more storage devices storing instructions that, upon execution by the one or more computers, cause the one or more computers to perform operations;
client devices;
by a machine learning model;
a search interface;
based on application of a machine learning model to the inferred embedding;
by applying a series of encoder blocks to the set of candidate next actions;
Taken individually and as a whole, representative claim 9 does not integrate the recited judicial exception into a practical application of the exception. The additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Secondly, this is also because the claim fails to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field, (ii) implement the judicial exception with, or use the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
While the claims recite one or more computers, one or more storage devices, and client devices, these elements are recited at a very high level of generality, the computers and storage devices storing instructions merely being recited to perform the abstract idea without any particularity. Specification paragraph [0069] discloses that computers suitable for the execution of a computer program can be based on general or special purpose microprocessors, or both, including one or more mass storage devices, such as magnetic disks, magneto-optical disks, or optical disks. As such, these elements are generic computing components that merely serve to provide a general link to a computing environment. Paragraphs [0019]-[0020] disclose the client device as including a processor in communication with input/output devices, and as being a mobile device, including a smartphone, laptop, tablet, etc. This generic description shows that the client devices are also generic devices that merely represent the client to generally link the client to the computing environment. The machine learning model is likewise recited at a very high level of generality, merely being recited as being used to perform the step of converting the alphanumeric text into an embedding. The machine learning model is not recited beyond simply performing the converting; there is no recitation or disclosure of how the machine learning model operates or functions. Furthermore, specification paragraph [0073] merely discloses that the machine learning models can be implemented using a machine learning framework, including a TensorFlow framework, a Microsoft Cognitive Toolkit framework, etc. Similarly, the encoder blocks are only recited in specification paragraph [0048], which does not disclose any detail regarding the encoder blocks except that they are used to train the context-aware ranker model.
It is evident that the encoder blocks are merely applied to the abstract idea to provide an output of the scores, and do not affect or change how computers or machine learning models function. As such, any generic machine learning algorithm is merely applied to the abstract idea to provide an output, but the actual functionality or operation of the machine learning models is not changed or claimed.
In view of the above, under Step 2A (prong 2), claim 9 does not integrate the recited exception into a practical application (see again: MPEP 2106.04(d)).
Step 2B:
Returning to representative claim 9, taken individually or as a whole, the additional elements of claim 9 do not provide an inventive concept (i.e. whether the additional elements amount to significantly more than the exception itself). As noted above, the additional elements recited in representative claim 9 are recited in a generic manner with a high level of generality and only serve to implement the abstract idea on a generic computing device. The claims result only in an improved abstract idea itself and do not reflect improvements to the functioning of a computer or another technology or technical field. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the claimed process ultimately amount to no more than the mere instructions to apply the exception using a generic computer and/or no more than a general link to a technological environment.
Even when considered as an ordered combination, the additional elements of claim 9 do not add anything further than when they are considered individually.
In view of the above, representative claim 9 does not provide an inventive concept under step 2B, and is ineligible for patenting.
Regarding Claim 2 (method): Claim 2 recites at least substantially similar concepts and elements as recited in claim 9 such that similar analysis of the claims would be readily apparent to one of ordinary skill in the art. As such, claim 2 is rejected under at least similar rationale as provided above regarding claim 9.
Regarding Claim 16 (non-transitory computer-readable medium): Claim 16 recites at least substantially similar concepts and elements as recited in claim 9 such that similar analysis of the claims would be readily apparent to one of ordinary skill in the art. As such, claim 16 is rejected under at least similar rationale as provided above regarding claim 9.
Dependent claims 3-8, 10-15, and 17-21 add further complexity to the judicial exception (abstract idea) of claim 9, such as by further defining the algorithm for predicting search terms based on past search terms. Thus, each of claims 3-8, 10-15, and 17-21 is held to recite a judicial exception under Step 2A (Prong 1) for at least similar reasons as discussed above.
Under prong 2 of step 2A, the additional elements of dependent claims 3-8, 10-15, and 17-21 also do not integrate the abstract idea into a practical application, considered both individually and as a whole. More specifically, dependent claims 3-8, 10-15, and 17-21 rely on at least similar elements as recited in claim 9. Further additional elements are also acknowledged; however, the additional elements of claims 3-8, 10-15, and 17-21 are recited only at a high level of generality (i.e., as generic computing hardware) such that they amount to nothing more than mere instructions to implement or apply the abstract idea on generic computing hardware (or merely use a computer as a tool to perform an abstract idea). Further, the additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use (such as the Internet or computing networks).
Secondly, this is also because the claims fail to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field, (ii) implement the judicial exception with, or use the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Taken individually and as a whole, dependent claims 3-8, 10-15, and 17-21 do not integrate the recited judicial exception into a practical application of the exception under step 2A (prong 2).
Lastly, under step 2B, claims 3-8, 10-15, and 17-21 also fail to result in “significantly more” than the abstract idea. The dependent claims recite additional functions that describe the abstract idea and use the computing device to implement the abstract idea, while failing to provide an improvement to the functioning of a computer, another technology, or technical field. The dependent claims fail to confer eligibility under step 2B because the claims merely apply the exception on generic computing hardware and generally link the exception to a technological environment.
Even when viewed as an ordered combination (as a whole), the additional elements of the dependent claims do not add anything further than when they are considered individually.
Taken individually or as an ordered combination, the dependent claims simply convey the abstract idea itself applied on a generic computer and are held to be ineligible under Step 2B for at least similar rationale as discussed above regarding claim 9. Thus, dependent claims 3-8, 10-15, and 17-21 do not add “significantly more” to the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar (US 11,538,060 B2) in view of Su (US 20160188619 A1), and further in view of Lin (US 20200401629 A1).
Regarding Claim 2: Kumar discloses a method comprising:
obtaining, by one or more processors, sequences of search terms received from client devices over a specified period of time, wherein each search term among the sequences of search terms comprises alphanumeric text; (Kumar: col. 17, ln. 54-63 – “activity 625 can further comprise evaluating a past history of one or more past actions of the user and/or other users in a previous browse session and updating the one or more clusters based at least in part on the past history of the one or more past actions of the user and/or other users. In some embodiments, the one or more past actions can comprise one or more other search queries by the user and/or other users”; Kumar: col. 12, ln. 10-25 – “an activity 505 of receiving a search query from a search by a user during a browse session. In some embodiments, the browse session can comprise a time period spent on a website and/or other third party websites. In some embodiments, the time period can be approximately 1 second to approximately 1 hour. In some embodiments, the time period can be the time that the user is logged into a session. In some embodiments, the time period can be from when the user logs into a session to when the user closes a browser. In some embodiments, receiving the search query from the search by the user can comprise receiving the search query during a time window. In some embodiments, the time window can comprise the browse session time period. In some embodiments, the time window can comprise a number of item activity associated with the browse session”).
converting, by a machine learning model invoked by the one or more processors, the alphanumeric text of each given sequence of search terms, among the sequences of search terms, into a corresponding embedding of the given sequence of search terms that differs from the alphanumeric text of the given sequence of search terms; (Kumar: col. 16, ln. 39-59 – “creating a text corpus similar to the text corpus described above in method 500, the text corpus comprising the search query of the user of the plurality of users and/or an item activity associated with the browse session (e.g., the text clicks received in activity 605). In various embodiments, activity 615 can further comprise an activity of determining an item vector representation representing the item set and/or determining a keyword vector representation representing the search set, similar to method 500 described above. In many embodiments, a natural language model can be used to determine the item vector representation representing the item set and/or the keyword vector representation representing the search set. In some embodiments, the natural language model can use high dimensional embedding for feature representation within the item vector representation representing the item set and/or the keyword vector representation representing the search set. In some embodiments, the high dimensional representation can be tuned to a model causality (e.g., an abstract model that describes causal mechanism of a system)”).
receiving, from a client device, a search term input to a search interface, wherein the search term input includes a most recently input search term; (Kumar: col. 19, ln. 19-28 – “evaluating a user profile associated with the user, evaluating the search query, evaluating one or more user actions during a current browse session of the user, and/or selecting the question from a set of questions. In many embodiments, the current browse session can be referred to as a browse session similar to as described above in method 500. In many embodiments, the one or more user actions can be similar to an item activity as described above. In some embodiments, the one or more user actions can be one or more other search queries by the user and/or other users”).
selecting, by the one or more processors, at least one predicted next search term predicted to follow the most recently input search term based on the inferred embedding; (Kumar: col. 18, ln. 8-11 – “the intent of the user can be determined to comprise browsing and searching more, and therefore the recommendation can comprise one or more new search term (e.g., search query and/or search topic)”; Kumar: col. 24, ln. 20-24 – “predict a user intent of the user in the current browse session based at least in part on high dimensional embedding for search queries and item browse by the user and/or other users”).
outputting, by the one or more processors to the client device, the at least one predicted next search term to the client device as a recommended completion to the search term input. (Kumar: col. 17, ln. 36-39 – “presenting to the user a recommendation. In many embodiments, the recommendation can comprise one or more search terms related to at least one cluster of the one or more clusters”).
Kumar does not explicitly teach a method comprising:
storing, by the one or more processors, each given sequence of search terms in association with the corresponding embedding of the given sequence of search terms;
matching, by the one or more processors, the search term input to a matching sequence of search terms among the sequences of search terms;
determining, by the one or more processors, that the corresponding embedding previously (i) created by the converting of the matching sequence of terms and (ii) stored in association with the matching sequence of search terms is an inferred embedding of the search term input received from the client device based on the matching of the search term input to the matching sequence of search terms, and independent of converting the search term input into a corresponding search term input embedding;
generating a set of candidate next actions based on application of a machine learning model to the inferred embedding obtained based on the matching of the search term input to the matching sequence of search terms;
generating, by the one or more processors, a list of scores for the set of candidate next actions by applying a series of encoder blocks to the set of candidate next actions;
generating, by the one or more processors, at least one predicted next search term predicted to be input through the search interface following the most recently input search term based on the list of scores;
Notably, however, Kumar does disclose using previous actions (search queries) by the user or similar users to recommend search terms to the user (Kumar: col. 17, ln. 36-39), and storing the past user queries (Kumar: col. 21, ln. 18-30).
To that end, Su does teach a method comprising:
storing, by the one or more processors, each given sequence of search terms in association with the corresponding embedding of the given sequence of search terms; (Su: [0034] – “The storage of query terms may contain individual entries corresponding to particular query terms that were entered by the user and/or other users historically. An individual entry may be associated with a number of attributes including frequencies (e.g., a total number of times during a time period) a historical query term was entered by the user and/or other users, a feature list associated with that corresponding query term. The feature list may specify whether a number of unique entities (e.g., phrases, terms, topics, categories or any other entities) appear in a search context of that historical query term. In some implementations, to improve processing efficiency, a signature of the feature list may be stored in association with the historical query term”).
matching, by the one or more processors, the search term input to a matching sequence of search terms among the sequences of search terms; (Su: [0046] – “an incomplete query term entered by a user in a search session may be received. The incomplete query term may indicate a partially entered sequence of a query term intended by the user for inquiring about related information”; Su: [0050] – “a set of one or more candidate query terms may be obtained for suggestion to aid the user to complete the query. The operation at 220 may include obtaining a number of query terms that were entered by the user and/or other users similar to the user historically. In implementations this may involve examining query terms that were entered by the user and/or other users similar to the user, narrowing in on those query terms that contain the prefix in the incomplete query term received at 210, and select a set of candidate query terms for suggestion based on their frequencies appearing in the historical searches”; Su: [0052] – “a similarity between each of the candidate query terms obtained at 220 and the query terms obtained at 230 may be determined. The operation(s) at 240 may include, determining a similarity between a candidate query term and each of the query terms obtained at 230”).
determining, by the one or more processors, that the corresponding embedding previously (i) created by the converting of the matching sequence of terms and (ii) stored in association with the matching sequence of search terms is an inferred embedding of the search term input received from the client device based on the matching of the search term input to the matching sequence of search terms, and independent of converting the search term input into a corresponding search term input embedding; (Su: [0094] – “for each candidate query term, an overall degree of similarity between the candidate query term and the query terms in the same search session as the incomplete query term received at 1302 may be determined. This may involve 1) determining a similarity between a given candidate query term and each of the query terms in the same search session as the incomplete query term; and 2) generating an aggregated similarity for the given candidate query term by aggregating the similarities determined in 1). In some implementations, the aggregated similarity may be generated based on a function of the individual similarities determined in 1). For example, the aggregate similarity may be generated using one of a summation function, a weighted sum function, an average function, and any other function(s). For instance, in the case where the similarity between two query terms is determined using a distance e.g., a hamming distance between LSH signatures of the two query terms”; Su: [0092] – “LSH signatures of the candidate query terms may be received. It is noted in some situations, a LSH signature cannot be computed for a candidate query term obtained at 1304. For instance, a query candidate term obtained at 1304 may be newly entered by the user in a search session such that a feature vector described above has not been generated for the query candidate term offline yet”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Kumar disclosing the system for creating embeddings of past search queries to predict search queries of a current session with the storing of the search terms in association with the embeddings, matching the search term input to a past sequence of search terms, and determining the embedding is an inferred embedding based on the matching as taught by Su. One of ordinary skill in the art would have been motivated to do so in order to account for query context to improve relevance of suggestions (Su: [0009]).
Kumar in view of Su does not explicitly teach a method comprising:
generating a set of candidate next actions based on application of a machine learning model to the inferred embedding obtained based on the matching of the search term input to the matching sequence of search terms;
generating, by the one or more processors, a list of scores for a set of candidate next actions by applying a series of encoder blocks to the set of candidate next actions;
generating, by the one or more processors, at least one predicted next search term predicted to be input through the search interface following the most recently input search term based on the list of scores;
Notably, however, Kumar does disclose predicting user intent and suggesting search terms based on the embedding (Kumar: col. 24, ln. 20-24; col. 23, ln. 1-3), and Su does teach determining candidate query terms, and not determining LSH signatures for newly entered search terms (Su: [0050]; [0092]).
To that end, Lin does disclose a method comprising:
generating a set of candidate next actions based on application of a machine learning model to the inferred embedding obtained based on the matching of the search term input to the matching sequence of search terms; (Lin: [0045] – “a first plurality of queries associated with the one or more first keywords may be determined based upon the one or more first keywords and/or a historical query database. In some examples, the historical query database may comprise a plurality of historical queries comprising the first plurality of queries”; Lin: [0047] – “an exemplary query of the plurality of historical queries may be selected for inclusion in the first plurality of queries associated with the one or more first keywords based upon a determination that one or more characters and/or one or more keywords of the exemplary query match and/or are related to the one or more first keywords”; Lin: [0067] – “A first plurality of representations (e.g., one or more of vector representations, embeddings, word embeddings, etc.) may be generated based upon the plurality of sequences of queries (and/or a second plurality of sequences of queries of the historical query database associated with search sessions different than the plurality of search sessions)”).
generating, by the one or more processors, a list of scores for a set of candidate next actions by applying a series of encoder blocks to the set of candidate next actions; (Lin: [0063] – “positions of queries in a query sequence pair may be analyzed to generate the exemplary relationship score associated with the exemplary query. For example, a query sequence pair that comprises an initial query associated with and/or matching the one or more first keywords and comprises a next query associated with and/or matching the exemplary query may be indicative of a search associated with the exemplary query being performed after and/or directly after a search associated with the one or more first keywords is performed”; Lin: [0067] – “the plurality of relationship scores may be generated using a machine learning model. The machine learning model may have an encoder decoder architecture (e.g., a sequence-to-sequence (Seq2Seq) architecture) and/or a different machine learning architecture. In some examples, the machine learning model may be trained using the plurality of sequences of queries associated with the plurality of search sessions”).
selecting, by the one or more processors, at least one predicted next search term predicted to be input through the search interface following the most recently input search term based on the list of scores; (Lin: [0091] – “a first list of suggested queries associated with the one or more first keywords may be generated based upon the first plurality of queries, the plurality of relationship scores”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Kumar in view of Su, disclosing the system for creating embeddings of past search queries to predict search queries of a current session, with the generating of candidate next actions based on application of a machine learning model, the generating of a list of scores for the candidate next actions, and the selecting of a predicted next search term based on the list of scores, as taught by Lin. One of ordinary skill in the art would have been motivated to do so in order to provide more informative query suggestions and search results based upon query sequence pairs (Lin: [0122]).
Regarding Claim 3: Kumar in view of Su and Lin discloses the limitations of claim 2 above.
Kumar does not explicitly teach wherein determining that the corresponding embedding stored in association with the matching sequence of search terms is performed independent of invoking the machine learning model to convert the search term input to a bit vector. Notably, however, Kumar does disclose using previous actions (search queries) by the user or similar users to recommend search terms to the user (Kumar: col. 17, ln. 36-39), and storing the past user queries (Kumar: col. 21, ln. 18-30).
To that end, Su does teach wherein determining that the corresponding embedding stored in association with the matching sequence of search terms is performed independent of invoking the machine learning model to convert the search term input to a bit vector. (Su: [0092] – “LSH signatures of the candidate query terms may be received. It is noted in some situations, a LSH signature cannot be computed for a candidate query term obtained at 1304. For instance, a query candidate term obtained at 1304 may be newly entered by the user in a search session such that a feature vector described above has not been generated for the query candidate term offline yet”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Kumar, disclosing the system for using past search queries to predict search queries of a current session, with the determining of the corresponding embedding stored in association with the matching sequence being performed independent of invoking the machine learning model to convert the search term input, as taught by Su. One of ordinary skill in the art would have been motivated to do so in order to account for query context with a prefix (Su: [0009]).
Regarding Claim 4: Kumar in view of Su and Lin discloses the limitations of claim 2 above.
Kumar does not explicitly teach wherein determining that the corresponding embedding stored in association with the matching sequence of search terms is performed without generating an embedding using the search term input. Notably, however, Kumar does disclose using previous actions (search queries) by the user or similar users to recommend search terms to the user (Kumar: col. 17, ln. 36-39), and storing the past user queries (Kumar: col. 21, ln. 18-30).
To that end, Su does teach wherein determining that the corresponding embedding stored in association with the matching sequence of search terms is performed without generating an embedding using the search term input. (Su: [0092] – “LSH signatures of the candidate query terms may be received. It is noted in some situations, a LSH signature cannot be computed for a candidate query term obtained at 1304. For instance, a query candidate term obtained at 1304 may be newly entered by the user in a search session such that a feature vector described above has not been generated for the query candidate term offline yet”). In summary, the LSH signatures of the terms are not available when the terms are newly entered, and Su likewise does not generate a vector for the incomplete search term.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Kumar, disclosing the system for using past search queries to predict search queries of a current session, with the determining of the corresponding embedding stored in association with the matching sequence being performed without generating an embedding using the search term input, as taught by Su. One of ordinary skill in the art would have been motivated to do so in order to account for query context with a prefix (Su: [0009]).
Regarding Claim 5: Kumar in view of Su and Lin discloses the limitations of claim 2 above.
Kumar does not explicitly teach wherein converting the alphanumeric text comprises converting the alphanumeric text into a corresponding bit-vector of a specified length. Notably, however, Kumar does disclose generating a vector of the user actions, including search history (Kumar: col. 21, ln. 36-51).
To that end, Su does teach wherein converting the alphanumeric text comprises converting the alphanumeric text into a corresponding bit-vector of a specified length. (Su: [0066] – “compressing an M dimensional feature vector into a LSH signature of length 6. As shown, 6 planes may be selected in an M-dimensional space 700. The two dots, 702a and 702b, shown in FIG. 7 represent two M dimensional vectors placed into the M-dimensional space. As can be seen, the dots 702a and 702b may be compressed into LSH signatures 704a and 704b respectively”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Kumar, disclosing the system for using past search queries to predict search queries of a current session, with the converting of the alphanumeric text into a corresponding bit-vector of a specified length, as taught by Su. One of ordinary skill in the art would have been motivated to do so in order to compress the information into a multidimensional space (Su: [0066]).
Regarding Claim 6: Kumar in view of Su and Lin discloses the limitations of claim 2 above.
Kumar does not explicitly teach a method comprising:
determining a measure of similarity between the search term input and each of the sequences of search terms based on a distance in a multi-dimensional space between the search term input and each of the sequences of search terms;
identifying the matching sequence of terms based on the measure of similarity.
Notably, however, Kumar does disclose predicting user intent and suggesting search terms based on the embedding (Kumar: col. 24, ln. 20-24).
To that end, Su does teach a method comprising:
determining a measure of similarity between the search term input and each of the sequences of search terms based on a distance in a multi-dimensional space between the search term input and each of the sequences of search terms; (Su: [0083] – “search session of interest may be the search session with respect to an incomplete query term having a prefix based on which query suggestion(s) will be made. At 1106, an aggregated distance between the candidate query term and the query terms in the search session of interest may be determined for each candidate query term. The distance determined at 1106 may include a cosine distance, a hamming distance, a Euclidean distance, and/or any other type of distances between vectors”; Su: [0062] – “a “feature” of a search term may be referred to as a word, a phrase, a sequence of letters”).
identifying the matching sequence of terms based on the measure of similarity. (Su: [0083] – “By using LSH signatures, the re-rankings of the candidate query terms may be carried out, at least partially, based on the hamming distance as replacement of the cosine similarity calculation used in the case where feature list vector is used”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Kumar, disclosing the system for using past search queries to predict search queries of a current session, with the determining of the measure of similarity based on a distance in a multi-dimensional space, as taught by Su. One of ordinary skill in the art would have been motivated to do so in order to indicate the overall similarity of all the candidate search terms (Su: [0083]).
Regarding Claim 7: Kumar in view of Su and Lin discloses the limitations of claim 2 above.
Kumar further discloses a method comprising:
generating, by a generative machine learning model, candidate search terms; (Kumar: col. 19, ln. 37-44 – “the user profile can comprise a past history of the user. In some embodiments, method 700 can further comprise an activity of evaluating a past history of purchases by the user and/or other users. In some embodiments, the past history can comprise a browse history, a search history”).
identifying, from among the candidate search terms, a set of candidate search terms that are included in a specified number of highest frequency search terms; (Kumar: col. 17, ln. 12-35 – “the minimum length of time can comprise 1 time unit (e.g., the time unit can be correlated to a number of seconds or minutes the user spent in the browse session). In some embodiments, the probability of a user performing the given action can be based at least in part on a density estimate. In some embodiments, the density can be estimated at point “x” according to the following formula:
p(x)=(k*a)/(v*n),
wherein, “v” is a volume of hypercube surrounding “x”, “n” is a total number of points, “k” is a number of query points inside “v”, “a” is a number of items out of “m” number of items the user has interacted with (e.g., item activity) during the browse session that are present inside “v.” In many embodiments, a total density p{x} can be calculated for all “m” items. In some embodiments, a highest density within p{x} can be selected and the candidate queries can be returned”).
generating, for each candidate search term in the set of candidate search terms, an embedding representing the candidate search term; (Kumar: col. 21, ln. 36-51 – “extracting one or more correlated signals related to the one or more user actions of the user of the one or more users based at least in part on the one or more user action types to determine one or more independent signals related to the one or more user actions of the user of the one or more users. In one embodiment, activity 815 can comprise using a Mahalanobis transformation Σ.sup.−1/2 a.sub.i to transform a vector a.sub.i, wherein a.sub.i is the vector of the one or more user actions”).
obtaining, from a user device, a current search term; (Kumar: col. 19, ln. 20-24 – “evaluating one or more user actions during a current browse session of the user, and/or selecting the question from a set of questions. In many embodiments, the current browse session can be referred to as a browse session”).
Kumar does not explicitly teach a method comprising:
storing the generated embeddings for the set of the candidate search terms in a database with the set of the candidate search terms;
matching the current search term to a matching candidate search term among the set of the candidate next search terms;
selecting the stored embedding of the matching candidate search term as an inferred embedding of the current search term based on the matching and within real-time constraint following receipt of the current search term from the user device.
Notably, however, Kumar does disclose retrieving user profiles that include a past history of the user, including search history (Kumar: col. 19, ln. 37-44).
To that end, Su does teach a method comprising:
storing the generated embeddings for the set of the candidate search terms in a database with the set of the candidate search terms; (Su: [0034] – “The storage of query terms may contain individual entries corresponding to particular query terms that were entered by the user and/or other users historically. An individual entry may be associated with a number of attributes including frequencies (e.g., a total number of times during a time period) a historical query term was entered by the user and/or other users, a feature list associated with that corresponding query term. The feature list may specify whether a number of unique entities (e.g., phrases, terms, topics, categories or any other entities) appear in a search context of that historical query term. In some implementations, to improve processing efficiency, a signature of the feature list may be stored in association with the historical query term”).
matching the current search term to a matching candidate search term among the set of the candidate next search terms; (Su: [0046] – “an incomplete query term entered by a user in a search session may be received. The incomplete query term may indicate a partially entered sequence of a query term intended by the user for inquiring about related information”; Su: [0050] – “a set of one or more candidate query terms may be obtained for suggestion to aid the user to complete the query. The operation at 220 may include obtaining a number of query terms that were entered by the user and/or other users similar to the user historically. In implementations this may involve examining query terms that were entered by the user and/or other users similar to the user, narrowing in on those query terms that contain the prefix in the incomplete query term received at 210, and select a set of candidate query terms for suggestion based on their frequencies appearing in the historical searches”; Su: [0052] – “a similarity between each of the candidate query terms obtained at 220 and the query terms obtained at 230 may be determined. The operation(s) at 240 may include, determining a similarity between a candidate query term and each of the query terms obtained at 230”).
selecting the stored embedding of the matching candidate search term as an inferred embedding of the current search term based on the matching and within real-time constraint following receipt of the current search term from the user device. (Su: [0094] – “an overall degree of similarity between the candidate query term and the query terms in the same search session as the incomplete query term received at 1302 may be determined. This may involve 1) determining a similarity between a given candidate query term and each of the query terms in the same search session as the incomplete query term; and 2) generating an aggregated similarity for the given candidate query term by aggregating the similarities”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Kumar, disclosing the system for using past search queries to predict search queries of a current session, with the storing of the embeddings, the matching of the current search term to the candidate search terms, and the selecting of the stored embedding of the matching candidate search term as an inferred embedding within a real-time constraint, as taught by Su. One of ordinary skill in the art would have been motivated to do so in order to account for query context to improve relevance of suggestions (Su: [0009]).
Regarding Claim 8: Kumar in view of Su and Lin discloses the limitations of claim 7 above.
Kumar further discloses a method comprising:
generating a ranker model that predicts a likelihood of each candidate search term leading to one or more actions at the user device; (Kumar: col. 17, ln. 4-9 – “a recurrent neural network model (e.g., a long short-term memory recurrent neural network architecture) to predict a next action (e.g., item activity or search query) the user can perform, given one or more actions (e.g., item activity, search query, past history, and/or past actions) during the browse session. In many embodiments, method 600 can predict a probability of a user performing a given action in view of the user's previous action”; Kumar: col. 18, ln. 12-18 – “a recurrent neural network model (e.g., a long short-term memory recurrent neural network architecture) to predict a next action (e.g., item activity or search query) the user can perform, given one or more actions (e.g., item activity, search query, past history, and/or past actions) during the browse session. In many embodiments, method 600 can predict a probability of a user performing a given action in view of the user's previous actions”).
obtaining a score, for each candidate search term, based on the ranker model and the inferred embedding of the current search term; (Kumar: col. 17, ln. 4-35 – “to predict a next action (e.g., item activity or search query) the user can perform, given one or more actions (e.g., item activity, search query, past history, and/or past actions) during the browse session. In many embodiments, method 600 can predict a probability of a user performing a given action in view of the user's previous actions. In some embodiments, the browse session can be divided into one or more chunks “N,” with each chunk “N” having a minimum length of time. In some embodiments, the minimum length of time can comprise 1 time unit (e.g., the time unit can be correlated to a number of seconds or minutes the user spent in the browse session). In some embodiments, the probability of a user performing the given action can be based at least in part on a density estimate. In some embodiments, the density can be estimated at point “x” according to the following formula:
p(x)=(k*a)/(v*n),
wherein, “v” is a volume of hypercube surrounding “x”, “n” is a total number of points, “k” is a number of query points inside “v”, “a” is a number of items out of “m” number of items the user has interacted with (e.g., item activity) during the browse session that are present inside “v.” In many embodiments, a total density p{x} can be calculated for all “m” items. In some embodiments, a highest density within p{x} can be selected and the candidate queries can be returned (e.g., recommended in activity 625, described below). Similarly, in some embodiments, a probability score of the user performing one or more actions (e.g., item activity) can be determined for searches at a time “t,” given the one or more actions (e.g., item activity) the user has performed at time “t−1.””).
providing, for output on a user interface, the candidate search term that exceeds a predefined threshold as a predicted next action, wherein the predicted next action is identified and output within a real-time constraint after entry of the current search term. (Kumar: col. 18, ln. 12-23 – “the probability score for one or more potential queries or recommendations (e.g., recommended in activity 625, described below) can be used to re-rank the one or more potential queries or recommendations (e.g., recommended in activity 625, described below). In some embodiments, a recommended query with a highest probability score can be ranked first, and therefore recommended first. In some embodiments, only recommendations with a probability score that reaches or exceeds a predetermined threshold can be presented to the user”).
Regarding Claims 9 and 16: Claims 9 and 16 recite substantially similar limitations as claim 2. Therefore, claims 9 and 16 are rejected under the same rationale as claim 2 above.
Regarding Claims 10 and 17: Claims 10 and 17 recite substantially similar limitations as claim 3. Therefore, claims 10 and 17 are rejected under the same rationale as claim 3 above.
Regarding Claims 11 and 18: Claims 11 and 18 recite substantially similar limitations as claim 4. Therefore, claims 11 and 18 are rejected under the same rationale as claim 4 above.
Regarding Claims 12 and 19: Claims 12 and 19 recite substantially similar limitations as claim 5. Therefore, claims 12 and 19 are rejected under the same rationale as claim 5 above.
Regarding Claims 13 and 20: Claims 13 and 20 recite substantially similar limitations as claim 6. Therefore, claims 13 and 20 are rejected under the same rationale as claim 6 above.
Regarding Claims 14 and 21: Claims 14 and 21 recite substantially similar limitations as claim 7. Therefore, claims 14 and 21 are rejected under the same rationale as claim 7 above.
Regarding Claim 15: Claim 15 recites substantially similar limitations as claim 8. Therefore, claim 15 is rejected under the same rationale as claim 8 above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY J KANG whose telephone number is (571)272-8069. The examiner can normally be reached Monday - Friday: 7:30 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Maria-Teresa Thein can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.J.K./ Examiner, Art Unit 3689
/VICTORIA E. FRUNZI/ Primary Examiner, Art Unit 3689 2/11/2026