Prosecution Insights
Last updated: April 19, 2026
Application No. 18/084,324

MACHINE LEARNING-BASED USER SELECTION PREDICTION BASED ON SEQUENCE OF PRIOR USER SELECTIONS

Status: Final Rejection (§102)
Filed: Dec 19, 2022
Examiner: MENGISTU, TEWODROS E
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Home Depot Product Authority LLC
OA Round: 2 (Final)
Grant Probability: 49% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 4y 5m
Grant Probability With Interview: 77%

Examiner Intelligence

Career Allow Rate: 49% (62 granted / 127 resolved; -6.2% vs TC avg)
Interview Lift: +28.2% higher grant rate for resolved cases with an examiner interview
Avg Prosecution: 4y 5m typical timeline; 34 applications currently pending
Total Applications: 161 across all art units (career history)
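As a sanity check, the headline figures are internally consistent: 62/127 rounds to the 49% career allow rate, and adding the +28.2-point lift yields the 77% "with interview" figure. A minimal sketch of that arithmetic, assuming (per the footnote later on this page) that grant probability equals the career allow rate and that the interview-adjusted figure is simply the base rate plus the lift in percentage points:

```python
# Reproduce the headline examiner stats from the underlying counts.
# Assumption: "with interview" = base allow rate + interview lift (points).
granted, resolved = 62, 127
allow_rate = 100 * granted / resolved          # ~48.8%, displayed as 49%
interview_lift = 28.2                          # percentage points
with_interview = allow_rate + interview_lift   # ~77.0%, displayed as 77%
print(f"allow rate: {allow_rate:.1f}%, with interview: {with_interview:.1f}%")
```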

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 127 resolved cases.
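The Tech Center average each delta is measured against can be backed out from the listed figures (examiner rate minus its signed delta). A quick sketch with the percentages hard-coded from the list above:

```python
# Back out the Tech Center average estimate implied by each statute's
# listed examiner rate and its signed delta vs the TC average.
stats = {
    "§101": (27.9, -12.1),
    "§102": (9.6, -30.4),
    "§103": (44.5, +4.5),
    "§112": (14.7, -25.3),
}

for statute, (rate, delta_vs_tc) in stats.items():
    tc_avg = rate - delta_vs_tc  # delta = rate - TC avg
    print(f"{statute}: TC avg ~ {tc_avg:.1f}%")
# Every statute backs out to the same TC average estimate of 40.0%.
```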

Office Action

§102
Detailed Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1-20 are pending for examination. Claims 1, 8, and 15 are independent.

Response to Arguments

3. Applicant's arguments filed 12/17/2025 have been fully considered, but they are not fully persuasive.

Applicant's arguments regarding 35 U.S.C. § 102: Gao does not disclose "training a machine learning model according to a training data set, the training data set comprising a plurality of token sets, each token set representative of a respective document accessible through the interface, each token set comprising a plurality of words, each word describing a characteristic of the document, to create a trained model," as recited in Claim 1, because Gao's "browser logs 110" do not disclose the "plurality of token sets," as recited in Claim 1. That is, in Gao, the "browser logs 110" are the documents themselves, including the "set of source and target document pairs 120." Gao does not disclose that the "browser logs 110" include the "plurality of token sets," and does not disclose that "each token set [is] representative of a respective document." Even though each document in Gao's "browser logs 110" may include words, this does not disclose that, for each document in the "browser logs 110," the "browser logs 110" include "each token set comprising a plurality of words" for each respective document, as recited in Claim 1. Moreover, Gao does not disclose training on the "plurality of token sets," and also does not disclose "each token set representative of a respective document accessible through the interface, each token set comprising a plurality of words, each word describing a characteristic of the document," as recited in amended Claim 1, because the "DSM" of Gao is trained on the "source and target document pairs 120." In Gao, each of these "(s,t) pairs" is data that is representative of the source and target documents. Accordingly, these "(s,t) pairs" of Gao do not disclose being the "each token set representative of a respective document," as recited in Claim 1.

Examiner response: Examiner respectfully disagrees. Claim 1 broadly recites training a machine learning model according to a training data set, with no further details as to how exactly the machine learning model is trained; instead, Claim 1 further details the training data set. Under broadest reasonable interpretation, training a machine learning model is interpreted as providing the machine learning model with data. Claim 1 further recites "the training data set comprising a plurality of token sets," and Gao's browser logs describe documents which are represented with words/terms/n-grams (i.e., comprising tokens) (see para 0028-0029 and 0033-0037 of Gao). Examiner interprets words as tokens. Para 0061 of Gao states: "A document d, which is a sequence of words, is converted into a vector representation x for the input layer of the network." The words from the document (i.e., word representations/vectors) are considered token sets that are provided as input to the machine learning model (see Fig 4 of Gao) and represent the respective document from which they are derived. Para 0069 further describes the word vectors as focuses of the document. Gao also describes documents as webpages (see para 0027), which discloses the claim limitation "a respective document accessible through the interface" (see also Fig 3 (320)).

Applicant further argues: Similarly, in Gao, the "context and optional focus" extracted from the "(s,t) pairs" by the "Context and Focus Extraction Module 130" of Gao also does not disclose the "each token set representative of a respective document accessible through the interface," because the extracted "context and optional focus" of Gao is based on the source and target documents. Accordingly, the "context and optional focus" of Gao does not disclose the "each token set representative of a respective document," as recited in Claim 1. Furthermore, Gao discloses that "the learned DSM 150 of interestingness is then passed to a Feature Extraction Module 200 that generates feature vectors 210 from the output layer of the DSM for source and target documents." (Gao at [0043].) Gao also discloses that the "Entity Extraction Module 300 uses any of a variety of named entity recognizer-based techniques to extract entities (e.g., links, people, places, things, etc.) from an arbitrary source document 310 being consumed by the user to identify context and/or focus 330 in that arbitrary source document." (Gao at [0044].) Gao does not disclose "the training data set comprising a plurality of token sets, each token set representative of a respective document accessible through the interface, each token set comprising a plurality of words, each word describing a characteristic of the document," as recited in Claim 1, because the extraction of the "feature vectors 210" by the "Feature Extraction Module 200" and the extraction of the "entities" by the "Entity Extraction Module 300" described in the corresponding paragraphs of Gao are not disclosed as operations performed to generate the "plurality of token sets" that are then used to train Gao's "DSM." Instead, Gao discloses that the "learned DSM 150" is passed to the "Feature Extraction Module 200" to extract the "feature vectors 210" from source and target documents, and that the "Entity Extraction Module 300" extracts the "entities" from the "arbitrary source document 310" that is "being consumed by the user." These extracted "feature vectors 210" and "entities" of Gao are described in the context of applying the "learned DSM 150" (i.e., trained model) to find interesting documents and relevant entities in documents from the user's search, and not in the context of training the "DSM" of Gao. (See also Gao at [0042], describing the "DSM Training Module 140" training the "learned DSM 150.") Therefore, for the above-stated reasons, Gao does not disclose "training a machine learning model according to a training data set, the training data set comprising a plurality of token sets, each token set representative of a respective document accessible through the interface, each token set comprising a plurality of words, each word describing a characteristic of the document, to create a trained model," as recited in Claim 1. Thus, Gao fails to disclose each and every feature of Claim 1.

Examiner response: Examiner respectfully disagrees. Gao describes a context as a predefined-size window of words (i.e., a token set) for a respective document (see claims 14-15 of Gao). In para 0091, Gao states: "the context of a source document s consists of the highlighted text span and its surrounding text defined by a 200-word window (or other size window) covering text before and after the highlighted text." Examiner interprets this as a word representation/vector (i.e., token set) that represents the respective document and comprises a plurality of words (e.g., 200 words), each word describing a characteristic of the document. Gao also describes a focus as a subset of words/tokens for a respective document; for example, in para 0091, Gao states: "A target document t consists of the plaintext of a webpage. The focus in t is defined as the first 10 tokens or words in t." Fig 4 (410) of Gao also shows how a focus is a word representation/vector (i.e., token set) for a respective document. Examiner interprets this as a word representation/vector (i.e., token set) that represents the respective document and comprises a plurality of words (e.g., 10 words), each word describing a characteristic of the document. With regard to the argument that the extracted "feature vectors 210" and "entities" of Gao "are described in the context of applying the 'learned DSM 150' (i.e., trained model) to find interesting documents and relevant entities in documents from the user's search, and not in the context of training the 'DSM' of Gao," Examiner respectfully disagrees, because Claim 1 does not recite a specific training method or process. Claim 1 broadly recites training a machine learning model according to a training data set. Under broadest reasonable interpretation, training could be read as providing the machine learning model with data or training data. In para 0031, Gao states that the DSM "maps word representations of documents to feature vectors in a latent semantic space (also known as the semantic representation)." Overall, under broadest reasonable interpretation, Gao discloses the claim limitations.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gao et al. (US 2015/0363688 A1, hereinafter "Gao").

Regarding Claim 1, Gao discloses: A method for predicting a next user selection in an electronic user interface ([Abstract, para 0044, 0109] describe predicting target documents that would interest users (i.e., next selection/interest)), the method comprising: training a machine learning model according to a training data set ([Para 0041-0042 and Fig 1] describe training a deep semantic model (DSM) (i.e., machine learning model) with browser logs (i.e., training data)), the training data set comprising a plurality of token sets, each token set representative of a respective document accessible through the interface, each token set comprising a plurality of words, each word describing a characteristic of the document, to create a trained model ([Para 0028-0029, 0044, 0061-0071, 0091-0093, Fig 1 and Fig 3-5] describe extracting words/tokens from a document when training a deep neural network (DSM); para 0044 and Fig 3 disclose selecting tokens/words from document 310 accessible through user interface module 320; Examiner interprets words from the document as describing context (i.e., characteristics) of the document); receiving, from a user, a sequence of selections of documents ([Para 0028-0029, 0035-0038, 0044, 0068, 0111, Fig 1 and Fig 3-5] describe extracting/receiving click transitions or context and focus (i.e., selections of documents) from documents); inputting the sequence of selections to the trained model ([Para 0005, 0020-0021, 0028-0029, 0035-0038, 0044, 0068, 0111, Fig 1 and Fig 3-5] describe click transitions and context-and-focus extraction that evaluates selected words from documents and inputs them to the DSM model); and outputting to the user, in response to the sequence of selections, a predicted next document selection according to an output of the trained model ([Para 0025, 0056, 0117] describe the trained DSM predicting target documents (i.e., next document selection) in response to context and focus (i.e., user selection)).

Regarding Claim 2, Gao discloses: The method of claim 1, wherein outputting the predicted next document selection comprises one or more of: displaying a link to the predicted next document in response to a user search; displaying a link to the predicted next document in response to a user navigation; or displaying a link to the predicted next document in response to a user click ([Para 0044-0045, 0099, 0103, 0109] describe displaying links to predicted documents based on highlighted text (i.e., user click/navigation)).

Regarding Claim 3, Gao discloses: The method of claim 1, wherein each word describes a characteristic of a subject of the document ([Para 0060-0068, 0071-0072, Fig 1, and Fig 4] describe words from a document and extracting features from the words (i.e., characteristics of a subject); Examiner also interprets words from the document as describing context (i.e., characteristics) of a subject of the document).

Regarding Claim 4, Gao discloses: The method of claim 1, wherein a plurality of the words describing a characteristic of the document are contained in the document ([Para 0028-0029, 0035-0038, 0044, 0060-0072, 0111, Fig 1 and Fig 3-5] describe a number of words representing context/focus (i.e., characteristics) found within the document).

Regarding Claim 5, Gao discloses: The method of claim 1, wherein training the machine learning model according to the training data set comprises conducting a training round in which token sets are used independent of user selections of documents ([Para 0027, 0061-0067, 0071, 0090-0093, Fig 1 and Fig 5] describe extracting words/tokens to train the DSM model).

Regarding Claim 6, Gao discloses: The method of claim 5, wherein the training round is a first training round, wherein training the machine learning model according to the training data set further comprises conducting a second training round in which token sets are used in conjunction with sequences of user selections of documents ([Para 0005, 0020-0021, 0028-0029, 0035-0038, 0044, 0068, 0111, Fig 1 and Fig 3] describe click transitions and context-and-focus extraction that evaluates selected words from documents and trains the DSM model).

Regarding Claim 7, Gao discloses: The method of claim 1, wherein training the machine learning model according to the training data set comprises conducting a training round in which token sets are used in conjunction with sequences of user selections of documents ([Para 0041-0042, 0092-0093, 0106, Fig 2, and Fig 3-5] describe training the DSM with both word/token sets and context/focus (i.e., user selections)).

Regarding Claim 8, Gao discloses: A system comprising: a non-transitory, computer-readable medium storing instructions; and a processor configured to execute the instructions ([Para 0126-0131 and Fig 6]). Claim 8 is a system claim that corresponds to claim 1, and its remaining limitations are rejected on the same grounds.

Regarding Claims 9-14: Claims 9, 10, 11, 12, 13, and 14 recite limitations analogous to claims 2, 3, 4, 5, 6, and 7, respectively, and are therefore rejected on the same grounds as those claims.

Regarding Claim 15, Gao discloses: A method for predicting a next user selection in an electronic user interface ([Abstract, para 0044, 0109] describe predicting target documents that would interest users (i.e., next selection/interest)), the method comprising: training a machine learning model according to a training data set ([Para 0041-0042 and Fig 1] describe training a deep semantic model (DSM) (i.e., machine learning model) with browser logs (i.e., training data)), the training data set comprising a plurality of token sets, each token set representative of a respective document accessible through the interface, each token set comprising a plurality of words, each word describing a characteristic of the document, to create a trained model ([Para 0028-0029, 0044, 0061-0071, 0091-0093, Fig 1 and Fig 3-5] describe extracting words/tokens from a document when training a deep neural network (DSM); para 0044 and Fig 3 disclose selecting tokens/words from document 310 accessible through user interface module 320; Examiner interprets words from the document as describing context (i.e., characteristics) of the document), wherein training the machine learning model according to the training data set comprises: conducting a first training round in which token sets are used independent of user selections of documents ([Para 0027, 0061-0067, 0071, 0090-0093, Fig 1 and Fig 5] describe extracting words/tokens to train the DSM model); and conducting a second training round in which sequences of user selections of documents are used in conjunction with token sets ([Para 0005, 0020-0021, 0028-0029, 0035-0038, 0044, 0068, 0111, Fig 1 and Fig 3] describe click transitions and context-and-focus extraction that evaluates selected words from documents and trains the DSM model); and deploying the trained model to output to a user, in response to a sequence of user selections, a predicted next document selection ([Para 0025, 0056, 0117] describe the trained DSM predicting target documents (i.e., next document selection) in response to context and focus (i.e., user selection)).

Regarding Claims 16-18: Claims 16, 17, and 18 recite limitations analogous to claims 2, 3, and 4, respectively, and are therefore rejected on the same grounds as those claims.

Regarding Claim 19, Gao discloses: The method of claim 15, wherein both the first training round and the second training round comprise a plurality of epochs ([Para 0088] describes DSM training within epochs).

Regarding Claim 20, Gao discloses: The method of claim 15, wherein the first training round is before the second training round ([Para 0041-0042, 0092-0093, 0106, Fig 2, and Fig 3-5] describe extracting words/tokens first, then evaluating context/focus after).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Parthasarathy et al.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TEWODROS E MENGISTU, whose telephone number is (571) 270-7714. The examiner can normally be reached Mon-Fri, 9:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ABDULLAH KAWSAR, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TEWODROS E MENGISTU/
Examiner, Art Unit 2127
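The dispute above turns on whether Gao's "context" (a 200-word window around a highlighted text span, per para 0091) and "focus" (the first 10 tokens of a target page) can be read as the claimed "token sets." A minimal sketch of that extraction scheme, as the examiner characterizes it, is below; the function names and the naive whitespace tokenizer are illustrative assumptions, not Gao's actual implementation.

```python
# Illustrative sketch of Gao's context/focus extraction as characterized in
# the Office Action (paras 0061, 0091 of Gao). Names and the whitespace
# tokenizer are assumptions for illustration only.

def tokenize(text: str) -> list[str]:
    # Naive whitespace tokenizer; Gao's actual tokenization is not specified here.
    return text.split()

def extract_context(source_text: str, highlight_start: int, highlight_end: int,
                    window: int = 200) -> list[str]:
    """Context of a source document s: the highlighted span plus a ~200-word
    window of surrounding text (para 0091). Splitting the window evenly
    before and after the span is an assumption."""
    before = tokenize(source_text[:highlight_start])
    span = tokenize(source_text[highlight_start:highlight_end])
    after = tokenize(source_text[highlight_end:])
    half = window // 2
    return before[-half:] + span + after[:half]

def extract_focus(target_text: str, n_tokens: int = 10) -> list[str]:
    """Focus of a target document t: its first 10 tokens (para 0091)."""
    return tokenize(target_text)[:n_tokens]

# Under the examiner's reading, each (context, focus) pair derived from a
# browsed (s, t) document pair is a "token set" that represents its document
# and serves as training input to the DSM.
```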

Prosecution Timeline

Dec 19, 2022 · Application Filed
Sep 16, 2025 · Non-Final Rejection — §102
Dec 17, 2025 · Response Filed
Mar 30, 2026 · Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566817 · AUTOMATIC MACHINE LEARNING MODEL EVALUATION · Granted Mar 03, 2026 (2y 5m to grant)
Patent 12482032 · Selective Data Rejection for Computationally Efficient Distributed Analytics Platform · Granted Nov 25, 2025 (2y 5m to grant)
Patent 12450465 · NEURAL NETWORK SYSTEM, NEURAL NETWORK METHOD, AND PROGRAM · Granted Oct 21, 2025 (2y 5m to grant)
Patent 12400252 · ARTIFICIAL INTELLIGENCE BASED TRANSACTIONS CONTEXTUALIZATION PLATFORM · Granted Aug 26, 2025 (2y 5m to grant)
Patent 12380369 · HYPERPARAMETER TUNING IN AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) MODELS · Granted Aug 05, 2025 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 49%
With Interview: 77% (+28.2%)
Median Time to Grant: 4y 5m
PTA Risk: Moderate

Based on 127 resolved cases by this examiner. Grant probability derived from the career allow rate.
