Prosecution Insights
Last updated: April 19, 2026
Application No. 18/932,484

PRESENTATION OF RELATED AND CORRECTED QUERIES FOR A SEARCH ENGINE

Status: Non-Final OA (§103)
Filed: Oct 30, 2024
Examiner: MARI VALCARCEL, FERNANDO MARIANO
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: Home Depot Product Authority LLC
OA Round: 2 (Non-Final)
Grant Probability: 49% (Moderate)
Expected OA Rounds: 2-3
Time to Grant: 3y 10m
Grant Probability With Interview: 71%

Examiner Intelligence

Career Allow Rate: 49% (71 granted / 145 resolved; -6.0% vs TC avg)
Interview Lift: strong, +22.0% on resolved cases with interview
Typical Timeline: 3y 10m average prosecution
Career History: 185 total applications across all art units (40 currently pending)

Statute-Specific Performance

§101: 13.5% (-26.5% vs TC avg)
§103: 66.1% (+26.1% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)
Deltas are vs. the Tech Center average estimate. Based on career data from 145 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The present application is recognized as a continuation of parent Application No. 16/032,311, filed on 7/11/2018.

Response to Amendment

This action is in response to applicant's arguments and amendments filed 1/26/2026, which are in response to the USPTO Office Action mailed 10/23/2025. Applicant's arguments have been considered with the results that follow: THIS ACTION IS MADE NON-FINAL.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-24, 29-34, and 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Chechik (US Patent No. 9,594,851; Date of Patent: Mar. 14, 2017) in view of FERNANDEZ et al. (US PGPUB No. 2017/0085509; Pub. Date: Mar. 23, 2017).

Regarding independent claim 21, Chechik discloses a method comprising training a machine learning model on a set of pairs, each pair comprising (i) a respective prior search query and (ii) a composite vector that describes a respective document and is responsive to the respective prior search query. See FIG. 3 & Col. 4, lines 27-28 (disclosing a system for identifying a pair comprising a document visited and a subsequent query, wherein the subsequent query is submitted after visiting the document. The system comprises environment 100, which includes a query suggestion rule trainer used to train a query suggestion rule via the method of FIG. 3, comprising step 305 of receiving data pairs). See Col. 7, lines 35-39 (data pairs 305 refers to documents visited and subsequent queries, which may originate in at least one log 135 of past document views and past queries submitted to one or more search engines; note Col. 8, lines 3-17, wherein the system generates representations of document content and query tokens as vectors).

Chechik also discloses receiving a current search query. See Col. 3, line 63 - Col. 4, line 10 (users may interact with search engine 150 using client computing devices 110, 112 to execute applications such as web browsers 120, 122 that allow users to formulate complete queries and submit them to search engine 150).

Chechik further discloses generating a response to the current search query, the response comprising one or more prior search queries to which the one or more of the composite vectors were responsive. See FIG. 6 & Col. 13, lines 30-39 (FIG. 6 illustrates a method comprising step 630, wherein the system may send a selected set of suggested queries to a user based on scoring of the content of the document visited against previously logged subsequent queries related to the same or similar content, using a trained query suggestion rule).

Chechik does not disclose converting, with the trained machine learning model, an embedded representation of the current search query into a document vector in a vector space, or selecting one or more of the composite vectors that are within a predetermined distance of the document vector in the vector space.

FERNANDEZ discloses converting, with the trained machine learning model, an embedded representation of the current search query into a document vector in a vector space. See FIG. 3 & Paragraph [0049] (disclosing a method of stripping/filtering and distribution of news social media content. FIG. 3 illustrates Word Vector Computational model 321, which implements a vector space model of semantics based on a Skip-gram Word2Vec method that determines document similarities by comparing cosine distances between a plurality of word vectors and an original query vector, wherein the query is represented as the same kind of vector as the documents). FERNANDEZ also discloses selecting one or more of the composite vectors that are within a predetermined distance of the document vector in the vector space. See FIG. 3 & Paragraph [0049] (the method determines a cosine distance between word vectors to determine relevance rankings of words in a keyword search; e.g., the vectors having the smallest cosine distances are determined to be relevant).

Chechik and FERNANDEZ are analogous art because they are in the same field of endeavor, vector-based search systems.
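The selection step attributed to FERNANDEZ above (keeping only the stored vectors within a predetermined cosine distance of the query's vector) can be sketched as follows. This is an illustrative toy, not code from the application or the cited references; the vectors, query strings, and threshold are invented for demonstration.

```python
# Hedged sketch: select "composite vectors" within a predetermined cosine
# distance of the current query's embedded vector. All data is made up.
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def select_within_distance(query_vec, composite_vecs, max_distance):
    """Return (prior_query, vector) pairs whose vector falls within
    max_distance of the query vector in the embedding space."""
    return [
        (prior_query, vec)
        for prior_query, vec in composite_vecs
        if cosine_distance(query_vec, vec) <= max_distance
    ]

# Hypothetical embedded representation of the current search query:
query_vec = [0.9, 0.1, 0.0]
pairs = [
    ("hammer drill", [0.8, 0.2, 0.1]),     # close to the query
    ("patio furniture", [0.0, 0.1, 0.9]),  # far from the query
]
suggestions = select_within_distance(query_vec, pairs, max_distance=0.2)
```

In this sketch the response to the current query would be the prior search queries attached to the surviving vectors; the threshold plays the role of the claimed "predetermined distance."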
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Chechik to include the method of using a vector space model to calculate vector representations of documents and to determine a distance indicating relevance between a query vector and a vector representing a document, as disclosed by FERNANDEZ. Paragraph [0046] of FERNANDEZ discloses that the system is configured to apply a plurality of computational models in the process of transforming and eliminating repeated and/or irrelevant social media content, to deliver content to a user that may be useful or relevant to their interests. In the case of the Word Vector Model 321, the system may determine the most relevant documents associated with an input query based on vector similarity.

Regarding dependent claim 22: As discussed above with claim 21, Chechik-FERNANDEZ discloses all of the limitations. Chechik further discloses wherein the current search query is received before the current search query is provided to a search engine. See Col. 4, lines 3-10 (computing devices 110, 112 execute applications such as web browsers 120, 122 that allow users to formulate queries and submit them to search engine 150; e.g., the search query is formulated by the user in the web browser prior to being received at search engine 150).

Regarding dependent claim 23: As discussed above with claim 21, Chechik-FERNANDEZ discloses all of the limitations. Chechik further discloses wherein the set of pairs is based on historical user data. See Col. 4, lines 21-24 (log files 135 include timestamp and session identification data that facilitate grouping of documents viewed and subsequent queries within time windows or by user session). Chechik also discloses wherein each pair comprises (i) a respective prior search query input by a user and (ii) a respective composite vector describing a respective document selected by the user responsive to the respective prior search query. See Col. 6, lines 53-67 (the system may determine pairs of documents visited and subsequent queries, wherein the pairs are determined from at least one log file 135 of past document visits and queries subsequently submitted to search engines 150. Data pairs may be determined based on a log file including entries for when a document is first visited, when a search is submitted, when a search result page is visited, when a result is selected, when a result document is visited, etc.; log file 135 entries include subsequent queries and information about a visited document, which may be represented as vectors used to train a query suggestion rule).

Regarding dependent claim 24: As discussed above with claim 21, Chechik-FERNANDEZ discloses all of the limitations. Chechik further discloses, responsive to the current search query, causing a user device to generate an interface comprising the one or more prior search queries. See Col. 14, lines 19-23 (the system may send suggested queries to computing devices 110, 112, which may then forward the suggested queries to browsers 120, 122 for display, such as the "Suggested queries" display element of FIG. 2; e.g., the document displayed as in FIG. 2 is displayed in response to a user input query, and the system determines relevant suggested queries associated with the currently displayed document).

Regarding dependent claim 29: As discussed above with claim 21, Chechik-FERNANDEZ discloses all of the limitations. Chechik further discloses training the machine learning model on the set of pairs. See Col. 6, lines 53-59 (the system may determine pairs of documents visited and subsequent queries from at least one log file 135; data pairs are used as positive or negative training examples to train the query suggestion rule).

Regarding dependent claim 30: As discussed above with claim 21, Chechik-FERNANDEZ discloses all of the limitations. Chechik further discloses receiving a selection of one of the prior search queries. See FIG. 2 & Col. 5, line 65 - Col. 6, line 3 (FIG. 2 illustrates a graphical user interface displaying suggested queries in a second window 240); see Col. 6, lines 31-32 (a user of application 120, 122 can select one of the suggested queries presented in the second window 240). Chechik also discloses executing a search with a search engine on the selected prior search query. See Col. 6, lines 34-38 (the selected suggested query is submitted to search engine 150 via network 140). Chechik also discloses returning a set of documents that are responsive to the selected prior search query. See Col. 4, lines 6-14 (search engine 150 receives queries from computing devices 110, 112, e.g., via a user selection of a suggested query as in the interface of FIG. 2, wherein search engine 150 executes the queries against a content database 160 and returns search results to computing devices 110, 112 in a form that can be presented to users).

Regarding independent claim 31 and dependent claims 32-34 and 39-40: These claims are analogous to the subject matter of claims 21-24, 29, and 30, respectively, directed to a computer system, and are rejected under similar rationale.

Claims 25 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Chechik in view of FERNANDEZ as applied to claim 21 above, and further in view of Mehrotra et al. (US PGPUB No. 2014/0229473; Pub. Date: Aug. 14, 2014).

Regarding dependent claim 25: As discussed above with claim 21, Chechik-FERNANDEZ discloses all of the limitations. Chechik further discloses wherein each composite vector comprises a feature vector model portion based on one or more features of an entity that are included in the respective document. See Col. 8, line 63 - Col. 9, line 5 (a document can be represented as a term vector using a bag-of-words approach that may include words or n-gram sequences of words, terms, characters, or other selected tokens). Chechik-FERNANDEZ does not disclose a description vector model portion calculated based on a narrative description of the entity that is included in the respective document, or an image vector model portion based on an image of the entity that is included in the respective document.

Mehrotra discloses a description vector model portion calculated based on a narrative description of the entity that is included in the respective document. See Paragraph [0061] (disclosing a method for determining documents nearest to a query, wherein each document within a database of documents includes a set of vectors, each vector describing information contained in the document, which may include text used to describe the document from which features can be extracted). Mehrotra also discloses an image vector model portion based on an image of the entity that is included in the respective document. See Paragraph [0061] (a vector of the set of vectors may include an image within the document from which features can be extracted).

Chechik, FERNANDEZ, and Mehrotra are analogous art because they are in the same field of endeavor, methods and systems for processing queries. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Chechik-FERNANDEZ to include the set of vectors used to describe documents as disclosed by Mehrotra. Doing so would allow the system to determine a query result by analyzing document vectors representing a variety of features including text, images, etc.
Paragraph [0060] of Mehrotra further discloses that document features are used to determine which documents fall within a specified search radius representing accurate or acceptable results to a user query.

Regarding dependent claim 35: The claim is analogous to the subject matter of dependent claim 25, directed to a computer system, and is rejected under similar rationale.

Claims 26-27 and 36-37 are rejected under 35 U.S.C. 103 as being unpatentable over Chechik in view of FERNANDEZ as applied to claim 21 above, and further in view of TOMMY et al. (US PGPUB No. 2018/0225274; Pub. Date: Aug. 9, 2018).

Regarding dependent claim 26: As discussed above with claim 21, Chechik-FERNANDEZ discloses all of the limitations. Chechik-FERNANDEZ does not disclose determining that the current search query includes a spelling error, or determining a corrected current search query by correcting the spelling error. TOMMY discloses determining that the current search query includes a spelling error. See Paragraph [0023] (disclosing a spell checker for performing spell checks of an input query text, wherein word recommendations are provided for a given word by inserting, deleting, substituting, or rotating letters of the word). TOMMY also discloses determining a corrected current search query by correcting the spelling error. See Paragraph [0023] (the word recommendations indicate the corrected search query). Chechik, FERNANDEZ, and TOMMY are analogous art because they are in the same field of endeavor, methods and systems for processing queries using machine learning techniques.
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Chechik-FERNANDEZ to include the method of performing a spell check as described by TOMMY. Doing so would allow the system to identify correct, incorrect, and complex sentences that have grammatical errors. Paragraph [0017] of TOMMY describes that the disclosed method is able to assess the lack of connection between words by using a recurrent neural network that learns to construct sentences by selecting words in sequence.

Regarding dependent claim 27: As discussed above with claim 26, Chechik-FERNANDEZ-TOMMY discloses all of the limitations. TOMMY further discloses wherein determining that the current search query includes a spelling error comprises comparing the current search query to a library of n-tuple word mappings. See Paragraph [0023] (individual words in the input text are compared with a dictionary stored in a database to determine a closest recommended word for each word in the input text); see Paragraph [0055] (the recurrent-neural-network-based system 100 performs a comparison of each word from each sentence of an input text with words from a dictionary stored in a database to determine a closest recommended word for each word in the input text; e.g., the input text is compared to the contents of a dictionary via a neural network that determines matches between the dictionary contents and text elements).
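The spell-check approach attributed to TOMMY above (generate candidates by inserting, deleting, substituting, or transposing letters, then keep candidates found in a dictionary) can be sketched as follows. This is an illustrative approximation, not the reference's actual recurrent-neural-network method; the dictionary and query are invented.

```python
# Hedged sketch: one-edit candidate generation plus dictionary lookup,
# standing in for the edit-operation spell check described in the text.
import string

def edit_candidates(word):
    """All strings exactly one edit (delete, transpose, substitute,
    insert) away from `word`."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    substitutes = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + substitutes + inserts)

def correct_query(query, dictionary):
    """Replace each misspelled word with a one-edit dictionary word, if any."""
    corrected = []
    for word in query.split():
        if word in dictionary:
            corrected.append(word)
        else:
            matches = edit_candidates(word) & dictionary
            corrected.append(sorted(matches)[0] if matches else word)
    return " ".join(corrected)

dictionary = {"power", "drill", "saw"}
corrected = correct_query("powr drill", dictionary)  # "powr" -> "power"
```

The detection side of the claim ("determining that the current search query includes a spelling error") corresponds here to the dictionary-membership test; the correction side corresponds to the candidate intersection.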
Regarding dependent claims 36-37: These claims are analogous to the subject matter of dependent claims 26-27, respectively, directed to a computer system, and are rejected under similar rationale.

Claims 28 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Chechik in view of FERNANDEZ and TOMMY as applied to claim 27 above, and further in view of Ortega et al. (US Patent No. 6,144,958; Date of Patent: Nov. 7, 2000).

Regarding dependent claim 28: As discussed above with claim 27, Chechik-FERNANDEZ-TOMMY discloses all of the limitations. Chechik further discloses wherein the prior search queries included in the set of pairs comprise a first set of prior search queries. See FIG. 2 & Col. 5, line 65 - Col. 6, line 3 (FIG. 2 illustrates a graphical user interface displaying suggested queries in a second window 240; note FIG. 6 & Col. 13, lines 30-39, wherein step 630 comprises sending a selected set of suggested queries to a user based on scoring of the content of the document visited against previously logged subsequent queries related to the same or similar content, using a trained query suggestion rule). Chechik-FERNANDEZ-TOMMY does not disclose wherein the library of n-tuple word mappings comprises a second set of prior search queries comprising a plurality of properly-spelled search queries.

Ortega discloses wherein the library of n-tuple word mappings comprises a second set of prior search queries comprising a plurality of properly-spelled search queries. See Col. 5, lines 17-26 (disclosing a search engine that uses correlations between search terms to correct misspelled terms within search queries. The system utilizes a correlation table 50, which contains or reflects historical information about the frequencies with which specific search terms have appeared together within the same search query; historical information is included in the spell-correction process. Note Col. 3, lines 14-17, wherein search term correlation data is based on historical query submissions).

Chechik, FERNANDEZ, TOMMY, and Ortega are analogous art because they are in the same field of endeavor, query processing. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Chechik-FERNANDEZ-TOMMY to include the method of performing a spell check according to previously submitted queries as disclosed by Ortega. Col. 5, lines 22-26 of Ortega discloses that incorporating historical information into the spell-checking process allows the system to more accurately replace terms with the terms intended by the user.
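The Ortega-style correlation table described above (historical frequencies with which search terms co-occur within the same query, used to bias spell correction toward terms the user likely intended) can be sketched as follows. The query log contents are invented for illustration; this is not code from Ortega.

```python
# Hedged sketch: build a term co-occurrence table from a historical
# query log, the data structure behind history-aware spell correction.
from collections import Counter
from itertools import combinations

def build_correlation_table(query_log):
    """Map each unordered pair of terms to the number of historical
    queries in which both terms appeared together."""
    table = Counter()
    for query in query_log:
        terms = sorted(set(query.split()))
        for a, b in combinations(terms, 2):
            table[(a, b)] += 1
    return table

# Hypothetical historical query submissions:
query_log = [
    "table saw blade",
    "table saw stand",
    "miter saw blade",
]
table = build_correlation_table(query_log)
```

Given a query containing a misspelled term, a corrector could prefer replacement candidates that co-occur most frequently with the query's other, correctly spelled terms, which is the role the correlation table plays in the rejection's characterization of Ortega.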
Regarding dependent claim 38: The claim is analogous to the subject matter of dependent claim 28, directed to a computer system, and is rejected under similar rationale.

Response to Arguments

Applicant's arguments with respect to the rejections of claims 21 and 31 under 35 U.S.C. 102(a)(2) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of FERNANDEZ et al. (US PGPUB No. 2017/0085509; Pub. Date: Mar. 23, 2017).

Regarding independent claim 21, Applicant argues that Chechik does not disclose "training a machine learning model on a set of pairs, each pair comprising (i) a respective prior search query and (ii) a composite vector that describes a respective document and is responsive to the respective prior search query." The examiner respectfully disagrees. Col. 8, lines 3-9 of Chechik discloses that the system generates feature representations of both document content and query tokens. While the term "subsequent queries" is utilized, Col. 13, lines 35-39 of Chechik makes clear that the machine learning model is trained to suggest queries based at least in part on a "scoring of content of the document visited against previously logged subsequent queries related to the same or similar content as the content of the document visited." Therefore, the trained query suggestion rule is necessarily trained on document vectors associated with previously logged subsequent query vectors. Once a query suggestion rule is trained, the "subsequent queries" are referred to as "previously logged subsequent queries," which correspond to prior search queries: a subsequent query is submitted after visiting the document, at a point in time prior to the current user request, and a previously logged subsequent query is associated with a visited document as part of the process of training a query suggestion rule.

Applicant also argues that Chechik does not disclose "generating a response to the current search query, the response comprising one or more prior search queries to which the one or more of the composite vectors were responsive." The examiner respectfully disagrees. Col. 13, lines 35-39 of Chechik discloses that the system utilizes similarity scores generated by a query suggestion rule to determine relevant query suggestions. The query suggestions are selected based in part on scoring of content in the document visited against previously logged subsequent queries related to the same or similar content, using a trained query suggestion rule. While Chechik discloses an element of suggesting queries based on a document visited, that is not the only metric relied upon to generate the suggested queries: the system utilizes a scoring rule employing feature representations of query suggestions, which are selected based at least in part on scoring of content of the document visited against previously logged subsequent queries (note Col. 5, lines 62-64, wherein users may submit queries that are used to visit or view documents). Therefore, Chechik discloses the above limitations.

However, the examiner agrees that Chechik does not disclose converting, with the trained machine learning model, an embedded representation of the current search query into a document vector in a vector space, or selecting one or more of the composite vectors that are within a predetermined distance of the document vector in the vector space, which necessitated the current rejection of claim 21 under 35 U.S.C. 103 presented above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fernando M Mari, whose telephone number is (571) 272-2498. The examiner can normally be reached Monday-Friday, 7am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J. Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/FMMV/ Examiner, Art Unit 2159
/ALBERT M PHILLIPS, III/ Primary Examiner, Art Unit 2159

Prosecution Timeline

Oct 30, 2024
Application Filed
Oct 15, 2025
Non-Final Rejection — §103
Jan 26, 2026
Response Filed
Mar 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591588: CATEGORICAL SEARCH USING VISUAL CUES AND HEURISTICS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12547593: METHOD AND APPARATUS FOR SHARING FAVORITE
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12505129: Distributed Database System
Granted Dec 23, 2025 (2y 5m to grant)

Patent 12499123: ACTOR-BASED INFORMATION SYSTEM
Granted Dec 16, 2025 (2y 5m to grant)

Patent 12499121: REAL-TIME MONITORING AND REPORTING SYSTEMS AND METHODS FOR INFORMATION ACCESS PLATFORM
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 49%
With Interview: 71% (+22.0%)
Median Time to Grant: 3y 10m
PTA Risk: Moderate
Based on 145 resolved cases by this examiner. Grant probability derived from career allow rate.
