Prosecution Insights
Last updated: April 19, 2026
Application No. 18/643,675

MACHINE LEARNING MODELS FOR SESSION-BASED RECOMMENDATIONS

Non-Final OA: §101, §103
Filed: Apr 23, 2024
Examiner: MITROS, ANNA MAE
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 37% (At Risk)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 7m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 37% (56 granted / 153 resolved; -15.4% vs TC avg)
Interview Lift: +49.1% among resolved cases with interview
Avg Prosecution: 3y 7m; 35 applications currently pending
Total Applications: 188 across all art units

Statute-Specific Performance

§101: 39.1% (-0.9% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 4.6% (-35.4% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)
Deltas are relative to Tech Center average estimates; based on career data from 153 resolved cases.
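As a sanity check on the headline figures, the career allow rate above is just the ratio of the two counts shown (56 granted out of 153 resolved), rounded to a whole percentage:

```python
# Figures reported in the dashboard above.
granted, resolved = 56, 153

career_allow_rate = granted / resolved  # 56 / 153 ≈ 0.366

# Displayed as a whole percentage: 36.6% rounds to 37%.
display_pct = round(career_allow_rate * 100)
print(display_pct)
```

The per-statute deltas follow the same pattern: the examiner's rate for that statute minus the Tech Center average estimate.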

Office Action

§101 §103
DETAILED ACTION

Status of Claims
• The following is an office action in response to the communication filed 04/23/2024.
• Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement
The Information Disclosure Statement received 09/10/2024 has been reviewed and considered.

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. The judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

First, it is determined whether the claims are directed to a statutory category of invention. See MPEP 2106.03(II). In the instant case, claims 1-10 are directed to a medium, and claims 11-20 are directed to a method. Therefore, claims 1-20 are directed to statutory subject matter under Step 1 of the Alice/Mayo test (Step 1: YES).

The claims are then analyzed to determine if the claims are directed to a judicial exception. See MPEP 2106.04. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong 1 of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong 2 of Step 2A). See MPEP 2106.04.
Claim 1 recites at least the following limitations that are believed to recite an abstract idea: generate a set of first embeddings, wherein individual embeddings of the set of first embeddings represent individual items of a set of items; generate one or more second embeddings based at least on the set of first embeddings, the one or more second embeddings computed based at least on a first portion of an ordered sequence of user interactions with the set of items; compute a cosine similarity loss representing a similarity between the one or more second embeddings and a third embedding that represents a next item in the ordered sequence occurring after the first portion; and adjust a model to compute the one or more second embeddings based at least on the cosine similarity loss.

Claim 12 recites at least the following limitations that are believed to recite an abstract idea: generate, based on a set of first embeddings corresponding to a set of items, one or more second embeddings based at least on a first portion of an ordered sequence of user interactions with the set of items; and adjust a model to compute the one or more second embeddings based at least on a cosine similarity loss representing a similarity between the one or more second embeddings and a third embedding that represents a next item in the ordered sequence occurring after the first portion.

Claim 20 recites at least the following limitations that are believed to recite an abstract idea: updating one or more parameters to generate a recommendation from a set of items based at least on a cosine similarity loss that represents a similarity between one or more embeddings representing a portion of an ordered sequence of user interactions with the set of items and an embedding that represents a next item in the ordered sequence occurring after the portion.

The above limitations recite the concept of recommending items based on a similarity between information.
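The training objective recited in claims 1, 12, and 20 (per-item embeddings, a session embedding computed from the first portion of an interaction sequence, and a cosine similarity loss against the next item) can be sketched in a few lines of NumPy. This is an illustrative reading only, not code from the application: the mean-pooled session embedding, item counts, and dimensions are stand-ins for whatever model the claims actually cover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Set of first embeddings: one vector per item (hypothetical sizes).
n_items, dim = 100, 16
item_embeddings = rng.normal(size=(n_items, dim))

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Ordered sequence of user interactions; the first portion is the session
# context, and the item occurring after it is the ground-truth next item.
sequence = [3, 17, 42, 8]
context, next_item = sequence[:-1], sequence[-1]

# "Second embedding": here simply the mean of the context items' embeddings,
# standing in for whatever the trained model would compute.
session_embedding = item_embeddings[context].mean(axis=0)

# Cosine similarity loss: 1 - cos(session, next item), so that higher
# similarity to the true next item yields a lower loss. Adjusting the model
# would mean updating its parameters to reduce this quantity.
loss = 1.0 - cosine(session_embedding, item_embeddings[next_item])
```

With cosine similarity bounded in [-1, 1], this loss lies in [0, 2]; a gradient step on the model parameters (not shown) would play the role of the claimed "adjust a model" limitation.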
These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in the MPEP, in that they recite commercial or legal interactions such as advertising, marketing, or sales activities or behaviors. Specifically, the providing of recommendations represents marketing and sales behaviors. This is illustrated in paragraph [0015] of the Specification, discussing the invention pertaining to product recommendations.

Further, these limitations, under their broadest reasonable interpretation, fall within the “Mental Processes” grouping of abstract ideas, enumerated in the MPEP, in that they recite concepts performed in the human mind, including observations, evaluations, judgments, and opinions. Specifically, the analysis of data to determine recommendations involves observations, evaluations, and judgments. These limitations are similar to the mental process of collecting information, analyzing it, and displaying certain results of the collection and analysis.

Accordingly, under Prong One of Step 2A of the MPEP, claims 1, 12, and 20 recite an abstract idea (Step 2A, Prong One: YES).

Under Prong Two of Step 2A of the MPEP, claims 1, 12, and 20 recite additional elements, such as a processor, one or more processing units, a machine learning model, and a system comprising one or more processing units. These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. As such, these computer-related limitations are not found to be sufficient to integrate the abstract idea into a practical application. Although these additional computer-related elements are recited, claims 1, 12, and 20 merely invoke such additional elements as a tool to perform the abstract idea. Implementing an abstract idea on a generic computer is not indicative of integration into a practical application.
Similar to the limitations of Alice, claims 1, 12, and 20 merely recite a commonplace business method (i.e., recommending items based on a similarity between information) being applied on a general purpose computer. See MPEP 2106.05(f).

Furthermore, claims 1, 12, and 20 generally link the use of the abstract idea to a particular technological environment or field of use. The courts have identified various examples of limitations as merely indicating a field of use/technological environment in which to apply the abstract idea, such as specifying that the abstract idea of monitoring audit log data relates to transactions or activities that are executed in a computer environment, because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer (see FairWarning v. Iatric Sys.). Likewise, claims 1, 12, and 20 specifying that the abstract idea of recommending items based on a similarity between information is executed in a computer environment merely indicates a field of use in which to apply the abstract idea because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer.

As such, under Prong Two of Step 2A of the MPEP, when considered both individually and as a whole, the limitations of claims 1, 12, and 20 are not indicative of integration into a practical application (Step 2A, Prong Two: NO). Since claims 1, 12, and 20 recite an abstract idea and fail to integrate the abstract idea into a practical application, claims 1, 12, and 20 are “directed to” an abstract idea (Step 2A: YES).

Next, under Step 2B, the claims are analyzed to determine if there are additional claim limitations that individually, or as an ordered combination, ensure that the claim amounts to significantly more than the abstract idea. See MPEP 2106.05.
The instant claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for at least the following reasons.

Returning to independent claims 1, 12, and 20, these claims recite additional elements, such as a processor, one or more processing units, a machine learning model, and a system comprising one or more processing units. As discussed above with respect to Prong Two of Step 2A, although additional computer-related elements are recited, the claims merely invoke such additional elements as a tool to perform the abstract idea. See MPEP 2106.05(f).

Moreover, the limitations of claims 1, 12, and 20 are manual processes, e.g., receiving information, sending information, etc. The courts have indicated that mere automation of manual processes is not sufficient to show an improvement in computer-functionality (see MPEP 2106.05(a)(I)).

Furthermore, as discussed above with respect to Prong Two of Step 2A, claims 1, 12, and 20 merely recite the additional elements in order to further define the field of use of the abstract idea, therein attempting to generally link the use of the abstract idea to a particular technological environment, such as the Internet or computing networks (see Ultramercial, Inc. v. Hulu, LLC (Fed. Cir. 2014); Bilski v. Kappos (2010); MPEP 2106.05(h)). Similar to FairWarning v. Iatric Sys., claims 1, 12, and 20 specifying that the abstract idea of recommending items based on a similarity between information is executed in a computer environment merely indicates a field of use in which to apply the abstract idea because this requirement merely limits the claim to the computer field, i.e., to execution on a generic computer.

Even when considered as an ordered combination, the additional elements do not add anything that is not already present when they are considered individually.
In Alice Corp., the Court considered the additional elements “as an ordered combination,” and determined that “the computer components…‘[a]dd nothing…that is not already present when the steps are considered separately’ and simply recite intermediated settlement as performed by a generic computer.” Id. (citing Mayo, 566 U.S. at 79, 101 USPQ2d at 1972). Similarly, viewed as a whole, claims 1, 12, and 20 simply convey the abstract idea itself facilitated by generic computing components. Therefore, under Step 2B of the Alice/Mayo test, there are no meaningful limitations in claims 1, 12, and 20 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (Step 2B: NO).

Dependent claims 2-11 and 13-19, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because they do not add “significantly more” to the abstract idea. Dependent claims 2-11 and 13-19 further fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in the MPEP, in that they recite commercial or legal interactions such as advertising, marketing, or sales activities or behaviors. Further, these claims, under their broadest reasonable interpretation, fall within the “Mental Processes” grouping of abstract ideas, enumerated in the MPEP, in that they recite concepts performed in the human mind, including observations, evaluations, judgments, and opinions. Dependent claims 2, 4-9, 11, 13-14, and 16-18 fail to identify additional elements and as such, are not indicative of integration into a practical application.
Dependent claims 3, 10, 15, and 19 further identify additional elements, such as a large language model, a user interface, a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for three-dimensional assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational artificial intelligence (AI) operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system implementing one or more vision language models (VLMs); a system for performing generative AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources, one or more language models, one or more multilingual large language models, and one or more vision language models.

Similar to the discussion above with respect to Prong Two of Step 2A, although additional computer-related elements are recited, the claims merely invoke such additional elements as a tool to perform the abstract idea. See MPEP 2106.05(f). As such, under Step 2A, dependent claims 2-11 and 13-19 are “directed to” an abstract idea.
Similar to the discussion above with respect to claims 1, 12, and 20, dependent claims 2-11 and 13-19, analyzed individually and as an ordered combination, invoke such additional elements as a tool to perform the abstract idea and merely indicate a field of use in which to apply the abstract idea because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer, and therefore, do not amount to significantly more than the abstract idea itself. See MPEP 2106.05(f)(2). Accordingly, under the Alice/Mayo test, claims 1-20 are ineligible.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 5-12, 14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (US 20230315781 A1), hereafter Huang, in view of Bull et al. (US 20210056168 A1), hereinafter Bull. In regards to claim 1, Huang discloses a processor comprising: one or more processing units to: (Huang: [0066] and Fig. 10) generate a set of first embeddings, wherein individual embeddings of the set of first embeddings represent individual items of a set of items (Huang: [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332 to generate a weighted-average item embedding 338 based on past clicked item embeddings. For example, a search history 322 may be obtained from a computing device 304, (e.g., a search history associated with a user profile of a user). Search history 322 may be provided to the search history image retrieval service 330 which may obtain items or otherwise identify items clicked or selected by the user. That is, the search history 322 may include clicks of items, such as images, which can be used to balance a current query (e.g., based on the image 324) and the past queries. 
Thus, the item embedding generator 328 may generate search history item embeddings 336 and provide the search history item embeddings 336 to the weighted item embedding generator 334”); generate one or more second embeddings based at least on the set of first embeddings, the one or more second embeddings computed based at least on a first portion of an ordered sequence of user interactions with the set of items (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. 
If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”); compute a loss representing a similarity between the one or more second embeddings and a third embedding that represents a next item in the ordered sequence occurring after the first portion (Huang: [0077-0078] and Fig. 13 – “method 1300 may further generate an item embedding representing a ground-truth next click image at 1310…a loss, such as a contrastive loss, may be generated between the personalized item embedding and the item embedding generated for the ground-truth next click at 1310”; [0008] – “a method may include identifying the item of the one or more items as a query item, identifying one or more items occurring before the query item as historical items, and identifying an item occurring after the query item as a ground-truth next click item”); and adjust a machine learning model to compute the one or more second embeddings based at least on the loss (Huang: [0078] and Fig. 13 – “Accordingly, one or more models may be adjusted based on the calculated loss”; [0056] – “During the training of the user-attention-item model 600A, one or more model parameters…may be modified based on minimizing an error, or loss, between the item embedding 623 generated for the ground-truth next click image 618 and the personalized item embedding 628; in examples, a contrastive loss function 626 may be used to for such basis”; [0057-0058] and Fig. 7 – “obtaining a personalized item embedding 728 based on a trained attention user-item-model 700…process 300 uses a machine learning technique that is based on a contrastive loss”; [0059] – “the item embeddings 722 may be weighted and combined to generate a weighted item embedding 724, which may be the same as or similar to the weighted item embedding 338”). 
Huang further discloses contrastive loss functions (Huang: [0056]), yet Huang does not explicitly disclose that a loss is a cosine similarity loss. However, Bull teaches a similar embedding system (Bull: [0001]), including that a loss is a cosine similarity loss (Bull: [0048] – “the loss function calculates a cosine similarity between a focus vector and another vector representing the training example”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included the cosine similarity loss of Bull in the system of Huang because Huang already discloses a loss and Bull is merely demonstrating that the loss may be a cosine similarity loss. Additionally, it would have been obvious to have included that a loss is a cosine similarity loss as taught by Bull because cosine similarity losses are well-known and the use of one in an embedding system would have increased speed, accuracy, and recall (Bull: [0015]).

In regards to claim 5, Huang/Bull teaches the system of claim 1. Huang further discloses the one or more second embeddings and the third embedding that represents the next item in the ordered sequence (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries.
That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”); and the one or more second embeddings and a subset of embeddings from the set of first embeddings (Huang: [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332 to generate a weighted-average item embedding 338 based on past clicked item embeddings. For example, a search history 322 may be obtained from a computing device 304, (e.g., a search history associated with a user profile of a user). Search history 322 may be provided to the search history image retrieval service 330 which may obtain items or otherwise identify items clicked or selected by the user. That is, the search history 322 may include clicks of items, such as images, which can be used to balance a current query (e.g., based on the image 324) and the past queries. 
Thus, the item embedding generator 328 may generate search history item embeddings 336 and provide the search history item embeddings 336 to the weighted item embedding generator 334”; [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user”). Yet Huang does not explicitly disclose compute the cosine similarity loss based at least on a positive cosine similarity component and a negative cosine similarity component; wherein the positive cosine similarity component is computed based at least on a function of a first cosine similarity representing a similarity between second embeddings and the third embedding; and wherein the negative cosine similarity component is computed based at least on a function of a second cosine similarity representing a similarity between second embeddings and a subset of randomly selected embeddings. 
However, Bull teaches a similar embedding system (Bull: [0001]), including compute the cosine similarity loss based at least on a positive cosine similarity component and a negative cosine similarity component; wherein the positive cosine similarity component is computed based at least on a function of a first cosine similarity representing a similarity between second and third embeddings (Bull: [0004] – “the one or more training examples for the one or more concepts include one or more positive training examples, and one or more negative training examples, and the loss function is optimized by modifying one or more vectors of the plurality of vectors to minimize the loss function for positive training examples and to maximize the loss function for negative training examples. By optimizing the loss function with both positive and negative examples, a vector space model can be generated in which vectors of related concepts are close to each other while also being distant from unrelated concepts. 
In some embodiments, an output of the loss function for each concept is proportional to a first cosine similarity of a vector for the concept and a first mean vector, wherein the first mean vector is a mean of one or more vectors for parent concepts of the concept and one or more vectors for child concepts of the concept…the vector space model may…enabling concept embeddings”); and wherein the negative cosine similarity component is computed based at least on a function of a second cosine similarity representing a similarity between second embeddings and a subset of randomly selected embeddings (Bull: [0004] – “the one or more training examples for the one or more concepts include one or more positive training examples, and one or more negative training examples, and the loss function is optimized by modifying one or more vectors of the plurality of vectors to minimize the loss function for positive training examples and to maximize the loss function for negative training examples. By optimizing the loss function with both positive and negative examples, a vector space model can be generated in which vectors of related concepts are close to each other while also being distant from unrelated concepts. In some embodiments, an output of the loss function for each concept is proportional to a first cosine similarity of a vector for the concept and a first mean vector, wherein the first mean vector is a mean of one or more vectors for parent concepts of the concept and one or more vectors for child concepts of the concept…the vector space model may…enabling concept embeddings”; [0033] – “Training module 150 may randomly initialize values for one or more features of each vector”; [0012] – “Each vector may represent the embedding”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed inventions to combine Bull with Huang for the reasons identified above with respect to claim 1. 
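As the examiner maps Bull onto claims 5-8, the loss has a positive cosine component (similarity to the ground-truth next item, to be maximized) and a negative component (similarity to a randomly selected subset of embeddings, to be minimized, with the subset size tied to the size of the item set). A minimal NumPy sketch of that structure follows; the embeddings, item count, and negative-sample fraction are hypothetical stand-ins, not values from either reference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical item-embedding table (first embeddings).
n_items, dim = 100, 16
item_embeddings = rng.normal(size=(n_items, dim))

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

session_embedding = rng.normal(size=dim)  # a "second embedding"
next_item = 42                            # ground-truth next item in the sequence

# Positive component: similarity to the next-item (third) embedding.
positive = cosine(session_embedding, item_embeddings[next_item])

# Negative component: mean similarity to a randomly selected subset of
# embeddings, with the subset size chosen relative to the item-set size
# (here 10% of n_items, mirroring the claim 7 limitation).
negatives = rng.choice(n_items, size=n_items // 10, replace=False)
negative = float(np.mean([cosine(session_embedding, item_embeddings[j])
                          for j in negatives]))

# Training (claim 8) would iteratively adjust the model to maximize the
# positive component and minimize the negative one, e.g. via this loss:
loss = -positive + negative
```

Since each cosine term is bounded in [-1, 1], the combined loss lies in [-2, 2]; gradient descent on it pushes the session embedding toward the true next item and away from the random negatives.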
In regards to claim 6, Huang/Bull teaches the system of claim 5. Yet Huang does not explicitly disclose wherein the subset of randomly selected embeddings comprises a plurality of embeddings. However, Bull teaches a similar embedding system (Bull: [0001]), including wherein the subset of randomly selected embeddings comprises a plurality of embeddings (Bull: [0047] – “Vectors for the concepts are initialized using random values at operation 340. Training module 150 may create a multi-dimensional vector for each concept in an ontology and may initialize the dimensions of each vector with random values”; [0012] – “Each vector may represent the embedding”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed inventions to combine Bull with Huang for the reasons identified above with respect to claim 1.

In regards to claim 7, Huang/Bull teaches the system of claim 5. Yet Huang does not explicitly disclose select the subset of randomly selected embeddings based at least in part on a size of the set of items. However, Bull teaches a similar embedding system (Bull: [0001]), including select the subset of randomly selected embeddings based at least in part on a size of the set of items (Bull: [0033] – “Training module 150 may randomly initialize values for one or more features of each vector. A vector may include a plurality of dimensions, each of which represent a feature. In some embodiments, vectors are one-hot vectors in which each dimension has a specific meaning”; [0047] – “Vectors for the concepts are initialized using random values at operation 340. Training module 150 may create a multi-dimensional vector for each concept in an ontology and may initialize the dimensions of each vector with random values”; [0012] – “Each vector may represent the embedding”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed inventions to combine Bull with Huang for the reasons identified above with respect to claim 1.

In regards to claim 8, Huang/Bull teaches the system of claim 5. Yet Huang does not explicitly disclose iteratively adjust the machine learning model to maximize the positive cosine similarity component and minimize the negative cosine similarity component. However, Bull teaches a similar embedding system (Bull: [0001]), including iteratively adjust the machine learning model to maximize the positive cosine similarity component and minimize the negative cosine similarity component (Bull: [0033] – “The loss function may be optimized by iteratively adjusting the values for each vector in order to minimize the output value of the loss function when a vector is compared to positive training examples, and to maximize the output value for the loss function when a vector is compared to negative training examples”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed inventions to combine Bull with Huang for the reasons identified above with respect to claim 1.

In regards to claim 9, Huang/Bull teaches the system of claim 5.
Huang further discloses generate the one or more second embeddings further based at least on a second portion of the ordered sequence, wherein the second portion overlaps in part with the first portion (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”). In regards to claim 10, Huang/Bull teaches the system of claim 1. Huang further discloses cause a user interface to display an item recommendation from the set of items based at least on performing a nearest neighbor search between at least one embedding of the one or more second embeddings and the set of first embeddings (Huang: [0035] and Fig. 
1B – “The web-scale personalized visual search system 120 may receive a query 122 from a user, where the query 122 includes an image. Base on the query 122, candidate results are obtained via index selection using an Approximated Nearest Neighbor (ANN) search”; [0046] – “The item-item embeddings can be assumed to exist within a Euclidean space such that a nearest neighbor operation can be performed. In examples, the item embedding 332 may be provided to the weighted item embedding generator 334”; [0033] – “assists a user in discovering, exploring, and/or engaging with visual search recommendation results”; see also [0066]). In regards to claim 11, Huang/Bull teaches the system of claim 1. Huang further discloses wherein the processor is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for three-dimensional assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational artificial intelligence (AI) operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system implementing one or more vision language models (VLMs); a system for performing generative AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially 
using cloud computing resources (Huang: [0037] – “the engagement-based ranker 130 may also include content-based features, such as Deep Neural Network (DNN) embeddings”; [0093] and Fig. 17 – “the aspects and functionalities described herein may operate over distributed systems (e.g., cloud-based computing systems)…various processing functions may be operated remotely from each other over a distributed computing network”; [0033] and Fig. 1A – “assists a user in discovering, exploring, and/or engaging with visual search recommendation results, such as images, in real-time”). The examiner notes that the claim is such that only one element of the list need be disclosed or taught. In regards to claim 12, Huang discloses a system comprising: one or more processing units to: (Huang: [0066] and Fig. 10) generate, based on a set of first embeddings corresponding to a set of items, one or more second embeddings based at least on a first portion of an ordered sequence of user interactions with the set of items (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. 
Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”); and adjust a machine learning model to compute the one or more second embeddings based at least on a loss representing a similarity between the one or more second embeddings and a third embedding that represents a next item in the ordered sequence occurring after the first portion (Huang: [0077-0078] and Fig. 13 – “method 1300 may further generate an item embedding representing a ground-truth next click image at 1310…a loss, such as a contrastive loss, may be generated between the personalized item embedding and the item embedding generated for the ground-truth next click at 1310…Accordingly, one or more models may be adjusted based on the calculated loss”; [0008] – “a method may include identifying the item of the one or more items as a query item, identifying one or more items occurring before the query item as historical items, and identifying an item occurring after the query item as a ground-truth next click item”; [0056] – “During the training of the user-attention-item model 600A, one or more model parameters…may be modified based on minimizing an error, or loss, between the item embedding 623 generated for the ground-truth next click image 618 and the personalized item embedding 628; in examples, a contrastive loss function 626 may be used to for such basis”; [0057-0058] and Fig.
7 – “obtaining a personalized item embedding 728 based on a trained attention user-item-model 700…process 300 uses a machine learning technique that is based on a contrastive loss”; [0059] – “the item embeddings 722 may be weighted and combined to generate a weighted item embedding 724, which may be the same as or similar to the weighted item embedding 338”). Huang further discloses contrastive loss functions (Huang: [0056]), yet Huang does not explicitly disclose that a loss is a cosine similarity loss. However, Bull teaches a similar embedding system (Bull: [0001]), including that a loss is a cosine similarity loss (Bull: [0048] – “the loss function calculates a cosine similarity between a focus vector and another vector representing the training example”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included the cosine similarity loss of Bull in the system of Huang because Huang already discloses a loss and Bull is merely demonstrating that the loss may be a cosine similarity loss. Additionally, it would have been obvious to have included that a loss is a cosine similarity loss as taught by Bull because cosine similarity losses are well-known and their use in an embedding system would have increased speed, accuracy, and recall (Bull: [0015]).

In regards to claim 14, Huang/Bull teaches the system of claim 12.
Huang further discloses generate the one or more second embeddings based at least on the first portion of an ordered sequence and a second portion of the ordered sequence, wherein the second portion overlaps in part with the first portion (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”). In regards to claim 17, Huang/Bull teaches the system of claim 12. 
Huang further discloses the one or more second embeddings and the third embedding that represents the next item in the ordered sequence (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”); and the one or more second embeddings and a set of embeddings from the set of first embeddings (Huang: [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332 to generate a weighted-average item embedding 338 based on past clicked item embeddings. 
For example, a search history 322 may be obtained from a computing device 304, (e.g., a search history associated with a user profile of a user). Search history 322 may be provided to the search history image retrieval service 330 which may obtain items or otherwise identify items clicked or selected by the user. That is, the search history 322 may include clicks of items, such as images, which can be used to balance a current query (e.g., based on the image 324) and the past queries. Thus, the item embedding generator 328 may generate search history item embeddings 336 and provide the search history item embeddings 336 to the weighted item embedding generator 334”; [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user”). 
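The weighted-average item embedding described in the Huang passages quoted above can be sketched as follows. This is a minimal illustrative NumPy sketch assuming a simple normalized weighting over the query and history embeddings; the function and weighting scheme are hypothetical stand-ins, not Huang's weighted item embedding generator 334:

```python
import numpy as np

def weighted_item_embedding(query_emb, history_embs, history_weights):
    # Combine a query item embedding with search-history item embeddings
    # into a single weighted-average item embedding. The weighting here
    # (unit weight on the query, caller-supplied weights on history) is
    # an assumed choice for illustration only.
    embs = np.vstack([query_emb] + list(history_embs))
    w = np.asarray([1.0] + list(history_weights), dtype=float)
    w = w / w.sum()  # normalize so the result is a true weighted average
    return w @ embs

q = np.array([1.0, 0.0])                    # current query item embedding
h = [np.array([0.0, 1.0])]                  # one past clicked-item embedding
avg = weighted_item_embedding(q, h, [1.0])  # equal weights -> [0.5, 0.5]
```

In Huang's quoted framing, such a weighting lets past clicked-item embeddings balance the current query before the personalized item embedding is formed.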
Yet Huang does not explicitly disclose compute a positive cosine similarity component of the cosine similarity loss based at least on a first cosine similarity computed using the second and third embeddings; compute a negative cosine similarity component of the cosine similarity loss based at least on a second cosine similarity computed using second embeddings and a set of randomly selected embeddings; and iteratively adjust the machine learning model to maximize the positive cosine similarity component and minimize the negative cosine similarity component. However, Bull teaches a similar embedding system (Bull: [0001]), including compute a positive cosine similarity component of the cosine similarity loss based at least on a first cosine similarity computed using the second and third embeddings (Bull: [0004] – “the one or more training examples for the one or more concepts include one or more positive training examples, and one or more negative training examples, and the loss function is optimized by modifying one or more vectors of the plurality of vectors to minimize the loss function for positive training examples and to maximize the loss function for negative training examples. By optimizing the loss function with both positive and negative examples, a vector space model can be generated in which vectors of related concepts are close to each other while also being distant from unrelated concepts. 
In some embodiments, an output of the loss function for each concept is proportional to a first cosine similarity of a vector for the concept and a first mean vector, wherein the first mean vector is a mean of one or more vectors for parent concepts of the concept and one or more vectors for child concepts of the concept…the vector space model may…enabling concept embeddings”); compute a negative cosine similarity component of the cosine similarity loss based at least on a second cosine similarity computed using second embeddings and a set of randomly selected embeddings (Bull: [0004] – “the one or more training examples for the one or more concepts include one or more positive training examples, and one or more negative training examples, and the loss function is optimized by modifying one or more vectors of the plurality of vectors to minimize the loss function for positive training examples and to maximize the loss function for negative training examples. By optimizing the loss function with both positive and negative examples, a vector space model can be generated in which vectors of related concepts are close to each other while also being distant from unrelated concepts. 
In some embodiments, an output of the loss function for each concept is proportional to a first cosine similarity of a vector for the concept and a first mean vector, wherein the first mean vector is a mean of one or more vectors for parent concepts of the concept and one or more vectors for child concepts of the concept…the vector space model may…enabling concept embeddings”; [0033] – “Training module 150 may randomly initialize values for one or more features of each vector”; [0012] – “Each vector may represent the embedding”); and iteratively adjust the machine learning model to maximize the positive cosine similarity component and minimize the negative cosine similarity component (Bull: [0033] – “The loss function may be optimized by iteratively adjusting the values for each vector in order to minimize the output value of the loss function when a vector is compared to positive training examples, and to maximize the output value for the loss function when a vector is compared to negative training examples”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bull with Huang for the reasons identified above with respect to claim 12.

In regards to claim 18, Huang/Bull teaches the system of claim 17. Yet Huang does not explicitly disclose select the set of randomly selected embeddings based at least in part on a size of the set of items. However, Bull teaches a similar embedding system (Bull: [0001]), including select the set of randomly selected embeddings based at least in part on a size of the set of items (Bull: [0033] – “Training module 150 may randomly initialize values for one or more features of each vector. A vector may include a plurality of dimensions, each of which represent a feature. In some embodiments, vectors are one-hot vectors in which each dimension has a specific meaning”; [0047] – “Vectors for the concepts are initialized using random values at operation 340.
Training module 150 may create a multi-dimensional vector for each concept in an ontology and may initialize the dimensions of each vector with random values”; [0012] – “Each vector may represent the embedding”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bull with Huang for the reasons identified above with respect to claim 12.

In regards to claim 19, Huang/Bull teaches the system of claim 12. Huang further discloses wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for three-dimensional assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational artificial intelligence (AI) operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system implementing one or more vision language models (VLMs); a system for generating synthetic data; a system for performing generative AI operations; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Huang: [0037] – “the engagement-based ranker 130 may also include content-based features, such as Deep Neural Network (DNN) embeddings”; [0093] and Fig.
17 – “the aspects and functionalities described herein may operate over distributed systems (e.g., cloud-based computing systems)…various processing functions may be operated remotely from each other over a distributed computing network”; [0033] and Fig. 1A – “assists a user in discovering, exploring, and/or engaging with visual search recommendation results, such as images, in real-time”). The examiner notes that the claim is such that only one element of the list need be disclosed or taught. In regards to claim 20, Huang discloses a method comprising: (Huang: [0066] and Fig. 10) updating one or more parameters of a machine learning model to generate a recommendation from a set of items based at least on a loss that represents a similarity between one or more embeddings representing a portion of an ordered sequence of user interactions with the set of items and an embedding that represents a next item in the ordered sequence occurring after the portion (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. 
Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”; [0077-0078] and Fig. 13 – “method 1300 may further generate an item embedding representing a ground-truth next click image at 1310…a loss, such as a contrastive loss, may be generated between the personalized item embedding and the item embedding generated for the ground-truth next click at 1310…Accordingly, one or more models may be adjusted based on the calculated loss”; [0008] – “a method may include identifying the item of the one or more items as a query item, identifying one or more items occurring before the query item as historical items, and identifying an item occurring after the query item as a ground-truth next click item”; [0056] – “During the training of the user-attention-item model 600A, one or more model parameters…may be modified based on minimizing an error, or loss, between the item embedding 623 generated for the ground-truth next click image 618 and the personalized item embedding 628; in examples, a contrastive loss function 626 may be used to for such basis”; [0057-0058] and Fig. 7 – “obtaining a personalized item embedding 728 based on a trained attention user-item-model 700…process 300 uses a machine learning technique that is based on a contrastive loss”; [0059] – “the item embeddings 722 may be weighted and combined to generate a weighted item embedding 724, which may be the same as or similar to the weighted item embedding 338”).
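Huang's item-item pair construction quoted above (clicks on items A, B, and C within one session yielding the pairs (A, B), (A, C), and (B, C)) is the standard all-unordered-pairs-within-a-session scheme. A minimal sketch, with a hypothetical function name:

```python
from itertools import combinations

def session_item_pairs(clicked_items):
    # All item-item pairs formed by items selected within the same
    # session, preserving click order within each pair.
    return list(combinations(clicked_items, 2))

# Huang's FIG. 4 example: clicks on items A, B, and C within session 422.
pairs = session_item_pairs(["A", "B", "C"])
# -> [('A', 'B'), ('A', 'C'), ('B', 'C')]
```

Each such pair then serves as a positive training example, while the ground-truth next click anchors the contrastive loss described in the cited passages.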
Huang further discloses contrastive loss functions (Huang: [0056]), yet Huang does not explicitly disclose that a loss is a cosine similarity loss. However, Bull teaches a similar embedding system (Bull: [0001]), including that a loss is a cosine similarity loss (Bull: [0048] – “the loss function calculates a cosine similarity between a focus vector and another vector representing the training example”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included the cosine similarity loss of Bull in the system of Huang because Huang already discloses a loss and Bull is merely demonstrating that the loss may be a cosine similarity loss. Additionally, it would have been obvious to have included that a loss is a cosine similarity loss as taught by Bull because cosine similarity losses are well-known and their use in an embedding system would have increased speed, accuracy, and recall (Bull: [0015]).

Claims 2, 4, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Huang, in view of Bull, in view of Gou et al. (US 20220005235 A1), hereinafter Gou.

In regards to claim 2, Huang/Bull teaches the system of claim 1. Huang further discloses the individual embeddings of the set of first embeddings (Huang: [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332 to generate a weighted-average item embedding 338 based on past clicked item embeddings. For example, a search history 322 may be obtained from a computing device 304, (e.g., a search history associated with a user profile of a user). Search history 322 may be provided to the search history image retrieval service 330 which may obtain items or otherwise identify items clicked or selected by the user. That is, the search history 322 may include clicks of items, such as images, which can be used to balance a current query (e.g., based on the image 324) and the past queries.
Thus, the item embedding generator 328 may generate search history item embeddings 336 and provide the search history item embeddings 336 to the weighted item embedding generator 334”). Yet Huang does not explicitly disclose generate the set of first embeddings based on associating a randomly generated latent vector to the individual embeddings. However, Gou teaches a similar embedding system (Gou: [0030]), including generate the set of first embeddings based on associating a randomly generated latent vector to the individual embeddings (Gou: [0030] – “the trained self-attention generator takes the same modified sentence feature vector 3024 from SegAttnGAN as input, which is a concatenated vector of z vector (i.e., random latent vector) and text embedding vector (i.e., sentence feature vector 3022)”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included the random latent vector of Gou in the system of Huang/Bull because Huang/Bull already discloses an embedding and Gou is merely demonstrating what the embedding may be. Additionally, it would have been obvious to have included generate the set of first embeddings based on associating a randomly generated latent vector to the individual embeddings as taught by Gou because latent vectors are well-known and their use in an embedding setting would have improved precision (Gou: [0054]).

In regards to claim 4, Huang/Bull teaches the system of claim 1.
Huang further discloses compute the one or more second embeddings based at least on the first portion generated using the machine learning model (Huang: [0056] – “The weighted item embedding 624 may then be combined with the user embedding 612 to generate a personalized item embedding 628, where the personalized item embedding 628 may be the same as or similar to the personalized item embedding 320”; [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332, and the search history item embeddings 336 to generate a weighted-average item embedding 338…a weighting for a search history item embedding 336 may be based on a plurality of different user sessions”; [0049] – “one or more user sessions for a user may be defined as a period of time (e.g., seconds, minutes, hours, e.g.) during which a user may search using one or more search queries. That is, a first user 404A may initiate a first query 408, where items clicked by the first user 404A, based on query results obtained from the query 408, may include items 412, 416, and 418. Two items may be considered to be an item-item pair if the items are clicked, or otherwise selected, within the same session by the user. Continuing with the example of FIG. 4, a session 422 may correspond to a period of time and/or a query 408. If the user 404A were to click or select item A 412, item B 416, and item C 418, then item-item pairs of (A, B), (A, C), and (B, C) can be generated from the first user 404A, as each of the item A 412, item B 416, and item C 418 were selected within the session 422”; see also [0057-0058]). Yet Huang does not explicitly disclose computing based at least on a convolution of the first portion. 
However, Gou teaches a similar embedding system (Gou: [0030]), including computing based at least on a convolution of the first portion (Gou: [0033] – “the sentence feature vector 3022, the random latent vector z, and the semantic map 3042 resized at 64*64 through several layers of upsampling and convolution operations (e.g., performed by the segmentation attention module 306).”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included the convolution of Gou in the system of Huang/Bull because Huang/Bull already discloses an analysis and Gou is merely demonstrating what the analysis may be. Additionally, it would have been obvious to have included computing based at least on a convolution of the first portion as taught by Gou because convolutions are well-known and their use in an embedding setting would have improved precision (Gou: [0054]).

In regards to claim 13, all the limitations in system claim 13 are closely parallel to the limitations of system claim 4 analyzed above and rejected on the same bases.

Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Huang, in view of Bull, in view of Mancuso et al. (US 20240403366 A1), hereinafter Mancuso.

In regards to claim 3, Huang/Bull teaches the system of claim 1. Huang further discloses generate the set of first embeddings using the individual embeddings of the set of first embeddings based at least on an input characterizing a corresponding item of the set of items (Huang: [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332 to generate a weighted-average item embedding 338 based on past clicked item embeddings. For example, a search history 322 may be obtained from a computing device 304, (e.g., a search history associated with a user profile of a user).
Search history 322 may be provided to the search history image retrieval service 330 which may obtain items or otherwise identify items clicked or selected by the user. That is, the search history 322 may include clicks of items, such as images, which can be used to balance a current query (e.g., based on the image 324) and the past queries. Thus, the item embedding generator 328 may generate search history item embeddings 336 and provide the search history item embeddings 336 to the weighted item embedding generator 334”). Yet Huang does not explicitly disclose using a large language model to compute a respective latent vector for embeddings. However, Mancuso teaches a similar embedding system (Mancuso: [0044]), including using a large language model to compute a respective latent vector for embeddings (Mancuso: [0044] – “utilize the large language model 206 to generate embedded representations of content items in a latent feature vector space representing various content topics”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included the latent vector of Mancuso in the system of Huang/Bull because Huang/Bull already discloses an embedding and Mancuso is merely demonstrating further information regarding the embedding. Additionally, it would have been obvious to have included using a large language model to compute a respective latent vector for embeddings as taught by Mancuso because latent vectors are well-known and their use in an embedding setting would have improved flexibility (Mancuso: [0022]).

In regards to claim 15, Huang/Bull teaches the system of claim 12.
Huang further discloses generating individual embeddings of the set of first embeddings based at least on an input characterizing a corresponding item of the set of items (Huang: [0047] – “the weighted item embedding generator 334 may utilize the query item embedding 332 to generate a weighted-average item embedding 338 based on past clicked item embeddings. For example, a search history 322 may be obtained from a computing device 304, (e.g., a search history associated with a user profile of a user). Search history 322 may be provided to the search history image retrieval service 330 which may obtain items or otherwise identify items clicked or selected by the user. That is, the search history 322 may include clicks of items, such as images, which can be used to balance a current query (e.g., based on the image 324) and the past queries. Thus, the item embedding generator 328 may generate search history item embeddings 336 and provide the search history item embeddings 336 to the weighted item embedding generator 334”).

Yet Huang does not explicitly disclose using one or more language models to generate a respective latent vector for embeddings. However, Mancuso teaches a similar embedding system (Mancuso: [0044]), including using one or more language models to generate a respective latent vector for embeddings (Mancuso: [0044] – “utilize the large language model 206 to generate embedded representations of content items in a latent feature vector space representing various content topics”). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have included the latent vector of Mancuso in the system of Huang/Bull because Huang/Bull already discloses an embedding and Mancuso merely demonstrates further information regarding the embedding.
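Huang's weighted item embedding generator, as quoted above, blends a current query embedding with the embeddings of previously clicked items. A minimal sketch of that general idea in Python; the equal-weight blend, the vector values, and the function name are illustrative assumptions, not taken from Huang:

```python
def weighted_item_embedding(query_emb, history_embs, history_weight=0.5):
    """Blend a query embedding with the mean of past clicked-item embeddings."""
    if not history_embs:
        return list(query_emb)
    dim = len(query_emb)
    # Mean of the search-history (clicked-item) embeddings.
    mean_hist = [sum(e[i] for e in history_embs) / len(history_embs)
                 for i in range(dim)]
    # Convex combination of the current-query signal and the history signal.
    return [(1 - history_weight) * q + history_weight * h
            for q, h in zip(query_emb, mean_hist)]

query = [1.0, 0.0]                 # embedding of the current query item
clicks = [[0.0, 1.0], [0.0, 3.0]]  # embeddings of previously clicked items
blended = weighted_item_embedding(query, clicks)
```

In practice a learned weighting (e.g., attention over the history) would replace the fixed `history_weight` used here.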
Additionally, it would have been obvious to have included the use of one or more language models to generate a respective latent vector for embeddings as taught by Mancuso because latent vectors are well known and their use in an embedding setting would have improved flexibility (Mancuso: [0022]).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Huang, in view of Bull, in view of Mancuso, in view of Bulat et al. (US 20240144651 A1), hereinafter Bulat.

In regards to claim 16, Huang/Bull/Mancuso teaches the system of claim 15. Yet Huang/Bull/Mancuso does not explicitly disclose wherein the one or more language models comprise at least one of: one or more multilingual large language models or one or more vision language models. However, Bulat teaches a similar embedding system (Bulat: [abstract]), including wherein the one or more language models comprise at least one of: one or more multilingual large language models or one or more vision language models (Bulat: [0004] – “training the vision-language ML model”; see also [0047]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have included the vision language model of Bulat in the system of Huang/Bull/Mancuso because Huang/Bull/Mancuso already discloses a model and Bulat merely demonstrates further information regarding the model. Additionally, it would have been obvious to have included wherein the one or more language models comprise at least one of: one or more multilingual large language models or one or more vision language models as taught by Bulat because vision language models are well known and their use in an embedding setting would have improved the training of models (Bulat: [0003-0004]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. NPL reference U teaches recommendation systems. Embeddings can be used as the foundation for recommendation tasks.
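The conclusion's observation that embeddings can serve as the foundation for recommendation tasks typically means nearest-neighbor retrieval: items whose embeddings are most similar to a session or query embedding are recommended first. A minimal cosine-similarity ranking sketch; the catalog items and vectors are invented for illustration:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(session_emb, catalog, k=2):
    """Return the k catalog item ids closest to the session embedding."""
    return sorted(catalog, key=lambda item: cosine(session_emb, catalog[item]),
                  reverse=True)[:k]

catalog = {"gpu": [1.0, 0.1], "cpu": [0.9, 0.3], "mug": [0.0, 1.0]}
top = recommend([1.0, 0.0], catalog)
```

At production scale the exhaustive sort would be replaced by an approximate nearest-neighbor index, but the ranking criterion is the same.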
Click sessions may be used to build embeddings.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNA MAE MITROS whose telephone number is (571)272-3969. The examiner can normally be reached Monday-Friday from 9:30-6.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marissa Thein, can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANNA MAE MITROS/
Examiner, Art Unit 3689
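The conclusion's note that click sessions may be used to build embeddings typically starts from co-occurrence statistics: items clicked in the same session are treated as related, and those statistics (or the raw sessions) then train item2vec-style embeddings. A toy sketch of the counting step; the session data is invented:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sessions):
    """Count how often each item pair appears within the same click session."""
    counts = Counter()
    for session in sessions:
        # Unordered pairs of distinct items, in a canonical (sorted) order.
        for a, b in combinations(sorted(set(session)), 2):
            counts[(a, b)] += 1
    return counts

sessions = [["shoe", "sock", "hat"], ["shoe", "sock"], ["hat", "scarf"]]
counts = cooccurrence_counts(sessions)
```

Pairs that co-occur often (here, "shoe" and "sock") would end up with nearby embeddings after training.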

Prosecution Timeline

Apr 23, 2024
Application Filed
Jan 05, 2026
Non-Final Rejection — §101, §103
Mar 31, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12549779
METHOD AND SYSTEM FOR PROVIDING CONTENTS
2y 5m to grant Granted Feb 10, 2026
Patent 12536576
METHOD, MEDIUM, AND SYSTEM FOR SCORING IMPROVEMENTS BY TEST FEATURES TO USER INTERACTIONS WITH ITEM GROUPS
2y 5m to grant Granted Jan 27, 2026
Patent 12475497
METHOD, SYSTEM, AND MEDIUM FOR GENERATING RECOMMENDATIONS UTILIZING AN EDGE-COMPUTING-BASED ASYNCHRONOUS COAGENT NETWORK
2y 5m to grant Granted Nov 18, 2025
Patent 12412205
METHOD, SYSTEM, AND MEDIUM FOR AUGMENTED REALITY PRODUCT RECOMMENDATIONS
2y 5m to grant Granted Sep 09, 2025
Patent 12393972
METHOD, SYSTEM, AND MEDIUM FOR ATTRIBUTE-BASED ITEM RANKING DURING A WEB SESSION
2y 5m to grant Granted Aug 19, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
37%
Grant Probability
86%
With Interview (+49.1%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 153 resolved cases by this examiner. Grant probability derived from career allow rate.
