Prosecution Insights
Last updated: April 19, 2026
Application No. 18/428,989

SYSTEMS AND METHODS FOR ITEM RECOMMENDATIONS BASED ON DUAL MODELS

Non-Final OA (§101, §103)

Filed: Jan 31, 2024
Examiner: WEINER, ARIELLE E
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Walmart Apollo LLC
OA Round: 1 (Non-Final)

Grant Probability: 42% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 2m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 42% of resolved cases (97 granted / 229 resolved; -9.6% vs TC avg)
Interview Lift: strong, +52.2% for resolved cases with an interview
Typical Timeline: 3y 2m avg prosecution (40 applications currently pending)
Career History: 269 total applications across all art units
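The interview-lift figure above can be reproduced from raw examiner counts. The sketch below assumes the lift is the absolute difference, in percentage points, between the allowance rate of resolved cases that had an examiner interview and those that did not; the function names and the example counts are illustrative placeholders, not data from this report.

```python
# Hypothetical sketch of how an "interview lift" metric could be derived.
# The counts passed in below are illustrative, not taken from this report.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved if resolved else 0.0

def interview_lift(granted_with: int, resolved_with: int,
                   granted_without: int, resolved_without: int) -> float:
    """Absolute lift (percentage points) in allowance rate for cases
    that had an examiner interview versus those that did not."""
    return (allow_rate(granted_with, resolved_with)
            - allow_rate(granted_without, resolved_without))

# Illustrative counts only:
lift = interview_lift(granted_with=19, resolved_with=20,
                      granted_without=78, resolved_without=209)
print(f"{lift:+.1f} pp")  # prints +57.7 pp for these placeholder counts
```

The same `allow_rate` helper applied to the career figures in this report (97 granted of 229 resolved) yields the 42% shown above.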

Statute-Specific Performance

§101: 30.5% (-9.5% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 229 resolved cases.

Office Action (§101, §103)
DETAILED ACTION

This action is in reply to the original application filed on 01/31/2024. Claims 1-20 are rejected. Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement

The Information Disclosure Statement received 01/31/2024 has been reviewed and considered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Under Step 1 of the Subject Matter Eligibility Test for Products and Processes, the claims must be directed to one of the four statutory categories (see MPEP 2106.03). All the claims are directed to one of the four statutory categories (YES).

Under Step 2A of the Subject Matter Eligibility Test, it is determined whether the claims are directed to a judicially recognized exception (see MPEP 2106.04). Step 2A is a two-prong inquiry. Under Prong 1, it is determined whether the claim recites a judicial exception (YES).
Taking Claim 1 as representative, the claim recites limitations that fall within the certain methods of organizing human activity grouping of abstract ideas, including:

- a non-transitory memory having instructions stored thereon; and
- at least one processor operatively coupled to the non-transitory memory, and configured to read the instructions to:
- receive, from a computing device, a recommendation request for recommending items to a customer,
- determine, based on the recommendation request, at least one anchor item to be displayed to the customer,
- obtain a first machine learning model trained based on [that utilizes] a first product data granularity,
- obtain a second machine learning model trained based on [that utilizes] a second product data granularity,
- generate, using the first machine learning model and the second machine learning model, a ranked list of recommended items based on the at least one anchor item, and
- transmit to the computing device the ranked list of recommended items to be displayed to the customer with the at least one anchor item.

The above limitations recite the concept of recommending items to a customer. The above limitations fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106.04(a).
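For orientation, the claimed dual-model flow can be sketched in generic form. Everything below (cosine scoring, the equal-weight blend, and all names) is an illustrative assumption; the application's actual models, training, and scoring are not disclosed in this action.

```python
# Illustrative sketch of a dual-granularity recommendation pipeline of the
# general kind claim 1 recites. The two "models" here are simple cosine
# scorers standing in for the trained machine learning models; the blend
# weights and names are invented for illustration.
import numpy as np

def cosine(u, v):
    """Toy stand-in for a trained model: cosine similarity scorer."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_recommendations(anchor_vec, catalog, fine_model, coarse_model, top_k=5):
    """Score each candidate with a model at each product-data granularity
    (item-level and category-level) and return a blended ranked list."""
    scores = []
    for item_id, features in catalog.items():
        fine_score = fine_model(anchor_vec, features)      # finer granularity
        coarse_score = coarse_model(anchor_vec, features)  # coarser granularity
        scores.append((item_id, 0.5 * fine_score + 0.5 * coarse_score))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Hypothetical two-item catalog and an anchor item (e.g., "conditioner"):
catalog = {"shampoo": np.array([0.9, 0.1]), "batteries": np.array([0.1, 0.9])}
anchor = np.array([1.0, 0.0])
ranked = rank_recommendations(anchor, catalog, cosine, cosine)
```

The ranked list (here, `shampoo` first) would then be transmitted back for display alongside the anchor item, mirroring the final claim limitation.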
Certain methods of organizing human activity include:

- fundamental economic principles or practices (including hedging, insurance, and mitigating risk);
- commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; and business relations); and
- managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).

The limitation of determine, based on the recommendation request, at least one anchor item to be displayed to the customer, is a process that, under its broadest reasonable interpretation, covers a commercial interaction. For example, “determine” in the context of this claim encompasses advertising and marketing or sales activities. Similarly, the limitations of receive, from a computing device, a recommendation request for recommending items to a customer; obtain a first machine learning model trained based on [that utilizes] a first product data granularity; obtain a second machine learning model trained based on [that utilizes] a second product data granularity; generate, using the first machine learning model and the second machine learning model, a ranked list of recommended items based on the at least one anchor item; and transmit to the computing device the ranked list of recommended items to be displayed to the customer with the at least one anchor item, are processes that, under their broadest reasonable interpretation, cover a commercial interaction. That is, other than reciting that the receiving is from a computing device, that the first model is a first machine learning model that’s trained, that the second model is a second machine learning model that’s trained, and that the transmitting is to the computing device, nothing in the claim elements precludes the steps from practically being performed by people.
For example, but for the “computing device,” “first machine learning model,” “trained,” and “second machine learning model” language, “receive,” “obtain,” “obtain,” “generate,” and “transmit” in the context of this claim encompass advertising and marketing or sales activities.

Under Prong 2, it is determined whether the claim recites additional elements that integrate the exception into a practical application of the exception. This judicial exception is not integrated into a practical application (NO):

- a non-transitory memory having instructions stored thereon; and
- at least one processor operatively coupled to the non-transitory memory, and configured to read the instructions to:
- receive, from a computing device, a recommendation request for recommending items to a customer,
- determine, based on the recommendation request, at least one anchor item to be displayed to the customer,
- obtain a first machine learning model trained based on a first product data granularity,
- obtain a second machine learning model trained based on a second product data granularity,
- generate, using the first machine learning model and the second machine learning model, a ranked list of recommended items based on the at least one anchor item, and
- transmit to the computing device the ranked list of recommended items to be displayed to the customer with the at least one anchor item.

The additional elements of claim 1 are recited at a high level of generality (i.e., as generic computing hardware) such that they amount to nothing more than mere instructions to implement or apply the abstract idea on generic computing hardware (or, merely use a computer as a tool to perform an abstract idea), as supported by paragraph [0113] of Applicant’s specification: “Each functional component described herein can be implemented in computer hardware, in program code, and/or in one or more computing systems executing such program code as is known in the art.” Specifically, the additional elements of a non-transitory memory having instructions stored thereon; at least one processor operatively coupled to the non-transitory memory, and configured to read the instructions; a computing device; a first machine learning model that’s trained; and a second machine learning model that’s trained, are recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of receiving data, determining data, obtaining data, generating data, and transmitting data) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Further, the additional elements do no more than generally link the use of the judicial exception to a particular technological environment or field of use (such as computers or computing networks). Employing well-known computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not integrate the exception into a practical application.
Additionally, the additional elements are insufficient to integrate the abstract idea into a practical application because the claim fails to: (i) reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; (ii) apply the judicial exception with, or use the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; (iii) effect a transformation or reduction of a particular article to a different state or thing; or (iv) apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Accordingly, the judicial exception is not integrated into a practical application.

Under Step 2B, it is determined whether the claims recite additional elements that amount to significantly more than the judicial exception. The claims of the present application do not include additional elements that are sufficient to amount to significantly more than the judicial exception (NO). In the case of claim 1, taken individually or as a whole, the additional elements of claim 1 do not provide an inventive concept. As discussed above under Step 2A (Prong 2) with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the claimed functions amount to no more than a general link to a technological environment. Even considered as an ordered combination (as a whole), the additional elements do not add anything significantly more than when considered individually.

Claim 11 is a method reciting similar functions as claim 1. Examiner notes that claim 11 recites the additional elements of a computer-implemented method, a computing device, a first machine learning model that’s trained, and a second machine learning model that’s trained; however, claim 11 does not qualify as eligible subject matter for similar reasons as claim 1 indicated above.
Claim 20 is a non-transitory computer readable medium reciting similar functions as claim 1. Examiner notes that claim 20 recites the additional elements of a non-transitory computer readable medium, at least one processor, at least one device, a computing device, a first machine learning model that’s trained, and a second machine learning model that’s trained; however, claim 20 does not qualify as eligible subject matter for similar reasons as claim 1 indicated above. Therefore, claims 1, 11, and 20 do not provide an inventive concept and do not qualify as eligible subject matter.

Dependent claims 2-10 and 12-19, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. § 101 because they do not add “significantly more” to the abstract idea. More specifically, dependent claims 2-10 and 12-19 further fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas in that they recite commercial interactions. Dependent claims 2-3, 8, 10, 12-13, 17, and 19 do not recite any further additional elements, and as such are not indicative of integration into a practical application for at least the similar reasons discussed above. Dependent claims 2-7, 9-14, and 16-20 recite the additional elements of the at least one processor, virtual item embeddings, an embedding space, item embeddings, the first machine learning model, the second machine learning model, a same transformer architecture, the first machine learning model being trained, and the second machine learning model being trained, but, similar to the analysis under Prong 2 of Step 2A, these additional elements are used as a tool to perform the abstract idea. As such, under Prong 2 of Step 2A, claims 2-10 and 12-19 are not indicative of integration into a practical application for at least similar reasons as discussed above. Thus, dependent claims 2-10 and 12-19 are “directed to” an abstract idea.
Next, under Step 2B, similar to the analysis of claims 1, 11, and 20, dependent claims 2-10 and 12-19, when analyzed individually and as an ordered combination, merely further define the commonplace business method (i.e., recommending items to a customer) being applied on a general-purpose computer and, therefore, do not amount to significantly more than the abstract idea itself. Accordingly, the Examiner concludes that there are no meaningful limitations in the claims that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself. The analysis above applies to all statutory categories of invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 10-11, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Korpeoglu et al. (US 2021/0233149 A1), hereinafter Korpeoglu, in view of Zhang et al. (US 2022/0027562 A1), hereinafter Zhang.

Regarding claim 1, Korpeoglu discloses a system, comprising:

- a non-transitory memory having instructions stored thereon (Korpeoglu, see at least: “the system 102 includes a routine engine 210, a processing engine 216, and an item recommendation engine 220. In one or more examples, one or more of the routine engine 210, processing engine 216, and the item recommendation engine 220 may be implemented in hardware.
In one or more examples, one or more of the routine engine 210, the processing engine 216, and the item recommendation engine 220 may be implemented as an executable program maintained in a tangible, non-transitory memory, such as instruction memory 807” [0033]); and

- at least one processor operatively coupled to the non-transitory memory, and configured to read the instructions (Korpeoglu, see at least: “the system 102 includes a routine engine 210, a processing engine 216, and an item recommendation engine 220. In one or more examples, one or more of the routine engine 210, processing engine 216, and the item recommendation engine 220 may be implemented in hardware. In one or more examples, one or more of the routine engine 210, the processing engine 216, and the item recommendation engine 220 may be implemented as an executable program maintained in a tangible, non-transitory memory, such as instruction memory 807” [0033]) to:

- receive, from a computing device, a recommendation request for recommending items to a customer (Korpeoglu, see at least: “the system 102 may receive the anchor item 202 in response to a user adding the anchor item to the online shopping cart [i.e. receive a recommendation request for recommending items to a customer]” [0041] and “FIG. 3 is a flowchart illustrating a process 300 of providing complementary item recommendations [i.e. a recommendation request for recommending items to a customer] based on customer shopping routines” [0034] and “the device 112 may be a mobile device [i.e. from a computing device] capable of connecting to the network 106 and receiving an input to add an item to a customer's online shopping cart on the e-commerce website” [0028] and “one or more of the devices 110, 112, 114, and 118 includes a user interface for providing an end user with the capability to interact with the system 102 [i.e. from a computing device]” [0029]),

- determine, based on the recommendation request, at least one anchor item to be displayed to the customer (Korpeoglu, see at least: “the system 102 may receive the anchor item 202 in response to a user adding the anchor item to the online shopping cart [i.e. based on the recommendation request]” [0041] and “the system 102 may detect that the customer added a conditioner to the online shopping cart [i.e. determine, based on the recommendation request, at least one anchor item]” [0047] and “The item recommendation engine 220 may provide the routine item recommendation 222 to, for example, an add-to-cart page 700 of the e-commerce website [i.e. at least one anchor item to be displayed to the customer]” [0046] and Fig. 7 displays an interface including the anchor item added to the cart [i.e. at least one anchor item to be displayed to the customer] as well as the recommended items),

- obtain a first machine learning model trained based on a first product data granularity (Korpeoglu, see at least: “Routines are identified (304), preferably by the routine engine 210, based on the generated categories. In one or more cases, the routine engine 210 may identify routines by applying a clustering algorithm to the generated categories of items and the respective category embeddings. The clustering algorithm may create a cluster around a group of categories [i.e. trained based on a first product data granularity] that have complementary behavior. A cluster algorithm may be an unsupervised machine learning approach [i.e. obtain a first machine learning model] that divides data points into a number of groups, such that data points in the same groups are more similar to other data points in the same group than those in other groups” [0038] and “Cluster 404 may be a routine related to, for example, Completing your Showering Routine Essentials [i.e. a first product data granularity], and within this routine, the cluster of categories may include, for example, Body Wash, Scalp Scrub, Deodorants and Antiperspirants, Face Wash, Personal Care Sets, Conditioners, Hair Sets, Styling and Treatments, and Shower Gels” [0039]),

- obtain a second model that utilizes a second product data granularity (Korpeoglu, see at least: “the item recommendation engine 220 may apply an item recommendation model 218 [i.e. obtain a second model] to the data of items within the relevant categories of the identified routine 212. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models … the item recommendation model 218 may output complementary scores for the one or more items within a relevant category [i.e. [that utilizes] a second product data granularity]” [0045]. Examiner notes that the second product data granularity is the product category),

- generate, using the first machine learning model and the second model, a ranked list of recommended items based on the at least one anchor item (Korpeoglu, see at least: “Routines are identified (304), preferably by the routine engine 210, based on the generated categories. In one or more cases, the routine engine 210 may identify routines by applying a clustering algorithm to the generated categories of items and the respective category embeddings. The clustering algorithm may create a cluster around a group of categories that have complementary behavior. A cluster algorithm may be an unsupervised machine learning approach [i.e. using the first machine learning model] that divides data points into a number of groups, such that data points in the same groups are more similar to other data points in the same group than those in other groups” [0038] and “the item recommendation engine 220 may apply an item recommendation model 218 [i.e. using the second model] to the data of items within the relevant categories of the identified routine 212. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models … the item recommendation model 218 may output complementary scores for the one or more items within a relevant category” [0045] and “The item recommendation engine 220 may rank the items within a relevant category in order of relevance to the anchor item 202 [i.e. generate a ranked list of recommended items based on the at least one anchor item]” [0046]), and

- transmit to the computing device the ranked list of recommended items to be displayed to the customer with the at least one anchor item (Korpeoglu, see at least: “Having ranked the items within the relevant categories, the item recommendation engine 220 aggregates the ranked items and relevant categories into a routine item recommendation 222. The item recommendation engine 220 may provide the routine item recommendation 222 to, for example, an add-to-cart page 700 of the e-commerce website, as shown in FIG. 7. Accordingly, the computing device [i.e. transmit to the computing device], such as computing device 114, may display the routine item recommendation 222 on the add-to-cart page 700 to a customer [i.e. the ranked list of recommended items to be displayed to the customer with the at least one anchor item]” [0046] and Fig. 7 displays an interface including the anchor item added to the cart and the routine item recommendations [i.e. the ranked list of recommended items to be displayed to the customer with the at least one anchor item]).

Korpeoglu does not explicitly disclose obtain a second machine learning model trained based on a second product data granularity; and the second model being a second machine learning model.

Zhang, however, teaches utilizing machine learning to make suggestions (i.e. abstract), including the known technique of obtain a second machine learning model trained based on a second product data granularity (Zhang, see at least: “The domain-specific model 310 may be used in turn to generate a global neural embedding 312 for the learning resources 306, and an unsupervised clustering algorithm or the like may be used to separate the learning resources 306 into clusters within the global neural embedding 312. The domain-specific model 310 may then be further tuned based on the resulting clusters to provide a number of cluster-specific models 314. Each of the learning resources 306 may then be fed into the domain-specific model 310 and its corresponding cluster-specific model 314 [i.e. obtain a second machine learning model], to provide two embeddings that can be concatenated to form a final embedding for a corresponding one of the learning resources 306. The resulting, final embeddings for the learning resources advantageously characterize the features of each learning resource within the entire content platform and a corresponding cluster of learning resources [i.e. trained based on a second product data granularity] within the platform” [0028] and “Different clusters correspond to different topics (e.g., different categories of products and services) [i.e. a second product data]” [0035] and “the method 400 may include creating cluster-specific models. For example, the site-wide neural language model may be separately tuned for each cluster based on one or more learning resources associated with the cluster. This results in a group of cluster-specific models including a separate model for each of the clusters” [0036]); and the known technique of generate, using the first machine learning model and the second machine learning model, a ranked list of recommended items (Zhang, see at least: “The process using these data structures is described in greater detail below. In general, the process may be deployed on a website to provide a user experience in which a user simply highlights or hovers over text of interest, and the website can make suggestions for learning resources that may be relevant to the highlighted text” [0030] and “This lightweight analog to the recommendation model 316 may be deployed on a website, and when a user selects text within the content 304, the student model 318 may compute the distance to all of the modeled learning resources 306 within an embedded space, and recommend a list of learning resources ranked in order of increasing distance to the highlighted text (e.g., with the closest text ranked highest in the recommendations) [i.e. generate a ranked list of recommended items]” [0029] and “Different clusters correspond to different topics (e.g., different categories of products and services)” [0035] and Fig. 3 shows that the results of the domain specific model and the cluster-specific model are utilized in the recommendation model [i.e. using the first machine learning model and the second machine learning model]). These known techniques are applicable to the system of Korpeoglu as they both share characteristics and capabilities, namely, they are directed to utilizing machine learning to make suggestions.
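The Zhang passages quoted above describe a concrete mechanism: a domain-wide embedding concatenated with a cluster-specific embedding, with candidates ranked by increasing distance to a query embedding. A minimal sketch of that mechanism follows; the vectors and helper names are invented for illustration and are not Zhang's actual implementation.

```python
# Illustrative sketch of concatenated dual embeddings with distance-based
# ranking, loosely following Zhang [0028]-[0029]. All vectors and names
# here are hypothetical placeholders.
import numpy as np

def final_embedding(global_vec, cluster_vec):
    """Concatenate a domain-wide embedding with a cluster-specific
    embedding to form the final embedding Zhang [0028] describes."""
    return np.concatenate([global_vec, cluster_vec])

def rank_by_distance(query_vec, resources):
    """Rank candidates by increasing Euclidean distance to the query
    embedding (closest first), per Zhang [0029]."""
    dists = [(name, float(np.linalg.norm(query_vec - vec)))
             for name, vec in resources.items()]
    return sorted(dists, key=lambda d: d[1])

# Toy embeddings (illustrative values only):
resources = {
    "resource_a": final_embedding(np.array([1.0, 0.0]), np.array([0.0, 1.0])),
    "resource_b": final_embedding(np.array([0.0, 1.0]), np.array([1.0, 0.0])),
}
query = final_embedding(np.array([0.9, 0.1]), np.array([0.1, 0.9]))
ranked = rank_by_distance(query, resources)
```

In this toy setup `resource_a` ranks first because its concatenated embedding lies closest to the query, which is the behavior the examiner maps onto the claimed ranked-list generation.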
It would have been recognized that applying the known techniques of obtain a second machine learning model trained based on a second product data granularity; and generate, using the first machine learning model and the second machine learning model, a ranked list of recommended items, as taught by Zhang, to the teachings of Korpeoglu would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such references into similar systems. Further, adding the modifications of obtain a second machine learning model trained based on a second product data granularity; and generate, using the first machine learning model and the second machine learning model, a ranked list of recommended items, as taught by Zhang, into the system of Korpeoglu would have been recognized by those of ordinary skill in the art as resulting in an improved system that would advantageously characterize the features of items within an entire content platform and a corresponding cluster of items (Zhang, [0028]).

Regarding claim 2, Korpeoglu in view of Zhang teaches the system of claim 1. Korpeoglu further discloses:

- the second product data granularity is at a higher level of granularity than the first product data granularity (Korpeoglu, see at least: “Routines are identified (304), preferably by the routine engine 210, based on the generated categories. In one or more cases, the routine engine 210 may identify routines by applying a clustering algorithm to the generated categories of items and the respective category embeddings. The clustering algorithm may create a cluster around a group of categories that have complementary behavior” [0038] and “Cluster 404 may be a routine related to, for example, Completing your Showering Routine Essentials, and within this routine, the cluster of categories may include, for example, Body Wash, Scalp Scrub, Deodorants and Antiperspirants, Face Wash, Personal Care Sets, Conditioners, Hair Sets, Styling and Treatments, and Shower Gels” [0039] and “the item recommendation engine 220 may apply an item recommendation model 218 to the data of items within the relevant categories of the identified routine 212. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models … the item recommendation model 218 may output complementary scores for the one or more items within a relevant category” [0045]. Examiner notes that the first product data granularity is the cluster (e.g., Completing your Showering Routine Essentials) and the second product data granularity is a product category under the cluster [i.e. the second product data granularity is at a higher level of granularity than the first product data granularity]).

Regarding claim 10, Korpeoglu in view of Zhang teaches the system of claim 1. Korpeoglu further discloses:

- wherein the at least one anchor item includes at least one of:
- all item(s) clicked by the customer in a same user session;
- all item(s) placed in a shopping cart by the customer in a same user session (Korpeoglu, see at least: “the system 102 may receive the anchor item 202 in response to a user adding the anchor item to the online shopping cart [i.e. wherein the at least one anchor item includes at least one of: all item(s) placed in a shopping cart by the customer in a same user session]” [0041]); or
- all item(s) purchased by the customer via a same order.

Claims 11 and 19 recite limitations directed towards a computer-implemented method. The rest of the limitations recited in claims 11 and 19 are parallel in nature to those addressed above for claims 1 and 10, respectively, and are therefore rejected for those same reasons set forth above in claims 1 and 10, respectively.

Claim 20 recites limitations directed towards a non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor (Korpeoglu, see at least: “the system 102 includes a routine engine 210, a processing engine 216, and an item recommendation engine 220. In one or more examples, one or more of the routine engine 210, processing engine 216, and the item recommendation engine 220 may be implemented in hardware. In one or more examples, one or more of the routine engine 210, the processing engine 216, and the item recommendation engine 220 may be implemented as an executable program maintained in a tangible, non-transitory memory, such as instruction memory 807” [0033]), cause at least one device to perform operations. The rest of the limitations recited in claim 20 are parallel in nature to those addressed above for claim 1, and are therefore rejected for those same reasons set forth above in claim 1.

Claims 3, 9, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Korpeoglu, in view of Zhang, in further view of Vippagunta et al. (US 8,301,514 B1), hereinafter Vippagunta.

Regarding claim 3, Korpeoglu in view of Zhang teaches the system of claim 2.
Korpeoglu further discloses: -the first product data granularity is at a granularity level of virtual item types (Korpeoglu, see at least: “Routines are identified (304), preferably by the routine engine 210, based on the generated categories. In one or more cases, the routine engine 210 may identify routines by applying a clustering algorithm to the generated categories of items and the respective category embeddings. The clustering algorithm may create a cluster around a group of categories that have complementary behavior [i.e. the first product data granularity is at a granularity level of virtual item types]” [0038] and “Cluster 404 may be a routine related to, for example, Completing your Showering Routine Essentials [i.e. the first product data granularity is at a granularity level of virtual item types], and within this routine, the cluster of categories may include, for example, Body Wash, Scalp Scrub, Deodorants and Antiperspirants, Face Wash, Personal Care Sets, Conditioners, Hair Sets, Styling and Treatments, and Shower Gels” [0039] Examiner notes that the first product data granularity is the cluster (e.g., Completing your Showering Routine Essentials) and the second product data granularity is the product category); and -the second product data granularity is at a granularity level of item types (Korpeoglu, see at least: “the item recommendation engine 220 may apply an item recommendation model 218 to the data of items within the relevant categories of the identified routine 212. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. 
The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models … the item recommendation model 218 may output complementary scores for the one or more items within a relevant category” [0045] Examiner notes that the first product data granularity is the cluster (e.g., Completing your Showering Routine Essentials) and the second product data granularity is the product category [i.e. the second product data granularity is at a granularity level of item types]). Korpeoglu in view of Zhang does not explicitly teach the first product data granularity is at a granularity level of product types; the second product data granularity is at a granularity level of virtual item types; a product type includes one or more virtual item types; and a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively. Vippagunta, however, teaches a recommendation system (i.e. abstract), including the known technique of the first product data granularity is at a granularity level of product types (Vippagunta, see at least: “Another type of item that can be recommended is an item category. For instance, if a user's purchase phrase is similar to the name of a category in the item data repository 132, the purchase phrase recommender 152 can recommend the entire category of items to the user. Recommending a category can be particularly useful if a sale or promotion is taking place with respect to the category. Thus, for example, the recommendation can be accompanied by a message such as "you might like today's promotion in the electronics category." [i.e. the first product data granularity is at a granularity level of product types]” Col. 6 Ln. 4-12); the known technique of the second product data granularity is at a granularity level of virtual item types (Vippagunta, see at least: “For a given word, at block 306, the word is added to an index as a key. 
An associated purchased item is added as a value of the key. Thus, if a phrase "golf master" was used to purchase a golf club, the word "golf" or the word "master" can be assigned as a key and the golf club item can be the value of the key [i.e. the second product data granularity is at a granularity level of virtual item types]. (As will be seen below with respect to blocks 308 and 310, both the words "golf" and "master" would be assigned as keys for the golf club item.)” Col. 7 Ln. 36-43 and “As some users may have purchased multiple items with their purchase phrases, the index can map a single purchase phrase to one or multiple purchased items” Col. 6 Ln. 49-51); the known technique of a product type includes one or more virtual item types (Vippagunta, see at least: “The electronic catalog content can include information about items, such as products and services. In one embodiment, this content is arranged in a hierarchical structure, having items associated with one or more categories or browse nodes in a hierarchy [i.e. a product type]” Col. 3 Ln. 24-28 and “a user interested in golf might choose a phrase such as "golf master" or "putting king." A user interested in gardening might pick a phrase such as "green thumb" or "rose garden." [i.e. includes one or more virtual item types]” Col. 4 Ln. 4-7 and “For a given word, at block 306, the word is added to an index as a key. 
… Metadata stored with the purchase phrase word can include a venue where the purchase phrase was used, such as on the electronic catalog system 110 or on an affiliate site 190. This metadata can instead be associated with the purchased item, to indicate where the item was purchased. Other metadata that can be stored in association with a purchased item can include metadata related to item restrictions (such as movie ratings, song lyric ratings, etc.), prices of items, item categories [i.e. a product type includes one or more virtual item types]” Col. 7 Ln. 36-53 Examiner notes that a category such as gardening or golf includes multiple types of phrases such as "golf master" or "putting king" and "green thumb" or "rose garden" [i.e. a product type includes one or more virtual item types]); and the known technique of a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively (Vippagunta, see at least: “For a given word, at block 306, the word is added to an index as a key. An associated purchased item is added as a value of the key. Thus, if a phrase "golf master" was used to purchase a golf club, the word "golf" or the word "master" can be assigned as a key and the golf club item can be the value of the key [i.e. a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively]. (As will be seen below with respect to blocks 308 and 310, both the words "golf" and "master" would be assigned as keys for the golf club item.)” Col. 7 Ln. 36-43 and “a list of purchased items and the purchase phrases used to purchase those items is obtained. This list can be obtained from the user data repository 142 and can be in the form of a comma-delimited list or the like, for example, <purchase phrase>, <item>. The item can be identified by a product number, title, or some other identifier [i.e. 
one or more item identities (IDs) corresponding to one or more actual items respectively]” Col. 7 Ln. 13-18 and “As some users may have purchased multiple items with their purchase phrases, the index can map a single purchase phrase to one or multiple purchased items [i.e. a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively]” Col. 6 Ln. 49-51). These known techniques are applicable to the system of Korpeoglu in view of Zhang as they both share characteristics and capabilities, namely, they are directed to a recommendation system. It would have been recognized that applying the known techniques of the first product data granularity is at a granularity level of product types; the second product data granularity is at a granularity level of virtual item types; a product type includes one or more virtual item types; and a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively, as taught by Vippagunta, to the teachings of Korpeoglu in view of Zhang would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such references into similar systems. Further, adding the modifications of the first product data granularity is at a granularity level of product types; the second product data granularity is at a granularity level of virtual item types; a product type includes one or more virtual item types; and a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively, as taught by Vippagunta, into the system of Korpeoglu in view of Zhang would have been recognized by those of ordinary skill in the art as resulting in an improved system that would provide targeted recommendations (Vippagunta, Col. 7 Ln. 54-55). 
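[Editor's illustration, not part of the Office Action.] The key-value index Vippagunta describes at Col. 7 Ln. 36-43 and Col. 7 Ln. 13-18 — each word of a purchase phrase becomes a key whose value is the purchased item, and one phrase may map to multiple items — can be sketched as follows. The phrases and item identifiers below are hypothetical examples, not taken from the reference.

```python
from collections import defaultdict

def build_purchase_phrase_index(purchases):
    """Map each word of a purchase phrase to the item(s) bought with it.

    `purchases` is an iterable of (purchase_phrase, item_id) pairs, in the
    spirit of Vippagunta's comma-delimited <purchase phrase>, <item> list.
    """
    index = defaultdict(set)
    for phrase, item_id in purchases:
        for word in phrase.lower().split():
            # Each word (e.g. "golf" and "master") becomes a key; the
            # purchased item is added as a value of that key.
            index[word].add(item_id)
    return index

# Hypothetical data: the phrase "golf master" was used to buy a golf club.
index = build_purchase_phrase_index([
    ("golf master", "ITEM-123-golf-club"),
    ("putting king", "ITEM-456-putter"),
])
```

Under this sketch, looking up either "golf" or "master" returns the golf club item, mirroring the examiner's reading that a word key corresponds to one or more item identities.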
Regarding claim 9, the combination of Korpeoglu/Zhang/Vippagunta teaches the system of claim 3. Korpeoglu further discloses: -the first machine learning model is trained based on data related to product types in historical user sessions and transactions of a plurality of customers (Korpeoglu, see at least: “the routine engine 210 generates categories of items by retrieving a customer's purchase history data from the purchase history database 206. The customer's purchase history data may include metadata of items purchased by the customer. Based on the customer's purchase history data, in one or more cases, the routine engine 210 may retrieve the descriptive information metadata for the items purchased by the customer from the item database 204 [i.e. the first machine learning model is trained based on data related to product types]. In one or more other cases, the routine engine 210 may extract the descriptive information metadata from the customer's purchase history data. The descriptive information metadata of an item may include, for example, but not limited to, a title of the item, a brand of the item, descriptive phrases of the item, and the like [i.e. based on data related to product types in historical user sessions]” [0036] and “The clustering algorithm may create a cluster around a group of categories that have complementary behavior. A cluster algorithm may be an unsupervised machine learning approach [i.e. the first machine learning model is trained] that divides data points into a number of groups, such that data points in the same groups are more similar to other data points in the same group than those in other groups” [0038] and “for a given item, the category information and the department information may be stored with the item as descriptive information metadata. 
For example, item 1 may be shampoo with a first brand name, and the corresponding category information may indicate that item 1 corresponds to the shampoo category and the department information may indicate that item 1 corresponds to the health, beauty, and personal care department. The routine engine 210 may aggregate the purchase history data for the customer to the session level [i.e. data related to product types in historical user sessions]” [0037] and “for the cases in which the routine engine 210 generates categories based on purchase history metadata for a group of customers [i.e. based on], the identified routines may be generalized routines for that group of customers that shop on the e-commerce website [i.e. transactions of a plurality of customers]” [0040]); and -the second model utilizes data related to virtual item types in historical user sessions and transactions of a plurality of customers (Korpeoglu, see at least: “the item recommendation engine 220 may apply an item recommendation model 218 [i.e. the second model utilizes] to the data of items within the relevant categories of the identified routine 212 [i.e. data related to virtual item types]. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models. In one or more cases, the BAB model may be developed with customer historical transaction data. In one or more cases, the SFM model may be developed with customer historical transaction and customer view data [i.e. [utilizes] data related to virtual item types in historical user sessions and transactions of a plurality of customers]. 
In one or more cases, a trending model may be developed with customer historical transaction data to identify trending items for a given category” [0045] Examiner notes that the categories of the identified routine are related to generated routines such as “Complete your Showering Routine Essentials” in Figs. 5A and 5B [i.e. data related to virtual item types]). Korpeoglu does not explicitly disclose the first machine learning model and the second machine learning model are based on a same transformer architecture; and the second model being a second machine learning model that is trained. Zhang, however, teaches utilizing machine learning to make suggestions (i.e. abstract), including the known technique of the first machine learning model and the second machine learning model are based on a same transformer architecture (Zhang, see at least: “the method 400 may include creating cluster-specific models. For example, the site-wide neural language model may be separately tuned for each cluster based on one or more learning resources associated with the cluster [i.e. the first machine learning model and the second machine learning model are based on a same transformer architecture]. This results in a group of cluster-specific models including a separate model for each of the clusters” [0036] and "Because transformer-based neural language models [i.e. are based on a same transformer architecture] typically accept text of a fixed or limited length L (e.g., Bert model allows only 512 language tokens), and often the learning resources are longer than this fixed limit, each learning resource may usefully segmented into a number M of L-length token pieces, and these segments may be fed into the model, producing M neural embeddings" [0037]); and the known technique of the second model being a second machine learning model that is trained (Zhang, see at least: “the method 400 may include creating cluster-specific models. 
For example, the site-wide neural language model may be separately tuned for each cluster based on one or more learning resources associated with the cluster. This results in a group of cluster-specific models including a separate model for each of the clusters [i.e. the second model being a second machine learning model that is trained]” [0036]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Korpeoglu with Zhang for the reasons identified above with respect to claim 1. Regarding claim 12, Korpeoglu in view of Zhang teaches the method of claim 11. Korpeoglu further discloses: -the second product data granularity is at a higher level of granularity than the first product data granularity (Korpeoglu, see at least: “Routines are identified (304), preferably by the routine engine 210, based on the generated categories. In one or more cases, the routine engine 210 may identify routines by applying a clustering algorithm to the generated categories of items and the respective category embeddings. The clustering algorithm may create a cluster around a group of categories that have complementary behavior” [0038] and “Cluster 404 may be a routine related to, for example, Completing your Showering Routine Essentials, and within this routine, the cluster of categories may include, for example, Body Wash, Scalp Scrub, Deodorants and Antiperspirants, Face Wash, Personal Care Sets, Conditioners, Hair Sets, Styling and Treatments, and Shower Gels” [0039] and “the item recommendation engine 220 may apply an item recommendation model 218 to the data of items within the relevant categories of the identified routine 212. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. 
The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models … the item recommendation model 218 may output complementary scores for the one or more items within a relevant category” [0045] Examiner notes that the first product data granularity is the cluster (e.g., Completing your Showering Routine Essentials) and the second product data granularity is a product category under the cluster [i.e. the second product data granularity is at a higher level of granularity than the first product data granularity]); -the first product data granularity is at a granularity level of virtual item types (Korpeoglu, see at least: “Routines are identified (304), preferably by the routine engine 210, based on the generated categories. In one or more cases, the routine engine 210 may identify routines by applying a clustering algorithm to the generated categories of items and the respective category embeddings. The clustering algorithm may create a cluster around a group of categories that have complementary behavior [i.e. the first product data granularity is at a granularity level of virtual item types]” [0038] and “Cluster 404 may be a routine related to, for example, Completing your Showering Routine Essentials [i.e. the first product data granularity is at a granularity level of virtual item types], and within this routine, the cluster of categories may include, for example, Body Wash, Scalp Scrub, Deodorants and Antiperspirants, Face Wash, Personal Care Sets, Conditioners, Hair Sets, Styling and Treatments, and Shower Gels” [0039] Examiner notes that the first product data granularity is the cluster (e.g., Completing your Showering Routine Essentials) and the second product data granularity is the product category); and -the second product data granularity is at a granularity level of item types (Korpeoglu, see at least: “the item recommendation engine 220 may apply an item recommendation model 218 to the data of items within the relevant categories of the identified routine 212. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models … the item recommendation model 218 may output complementary scores for the one or more items within a relevant category” [0045] Examiner notes that the first product data granularity is the cluster (e.g., Completing your Showering Routine Essentials) and the second product data granularity is the product category [i.e. the second product data granularity is at a granularity level of item types]). Korpeoglu in view of Zhang does not explicitly teach the first product data granularity is at a granularity level of product types; the second product data granularity is at a granularity level of virtual item types; a product type includes one or more virtual item types; and a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively. Vippagunta, however, teaches a recommendation system (i.e. abstract), including the known technique of the first product data granularity is at a granularity level of product types (Vippagunta, see at least: “Another type of item that can be recommended is an item category. For instance, if a user's purchase phrase is similar to the name of a category in the item data repository 132, the purchase phrase recommender 152 can recommend the entire category of items to the user. 
Recommending a category can be particularly useful if a sale or promotion is taking place with respect to the category. Thus, for example, the recommendation can be accompanied by a message such as "you might like today's promotion in the electronics category." [i.e. the first product data granularity is at a granularity level of product types]” Col. 6 Ln. 4-12); the known technique of the second product data granularity is at a granularity level of virtual item types (Vippagunta, see at least: “For a given word, at block 306, the word is added to an index as a key. An associated purchased item is added as a value of the key. Thus, if a phrase "golf master" was used to purchase a golf club, the word "golf" or the word "master" can be assigned as a key and the golf club item can be the value of the key [i.e. the second product data granularity is at a granularity level of virtual item types]. (As will be seen below with respect to blocks 308 and 310, both the words "golf" and "master" would be assigned as keys for the golf club item.)” Col. 7 Ln. 36-43 and “As some users may have purchased multiple items with their purchase phrases, the index can map a single purchase phrase to one or multiple purchased items” Col. 6 Ln. 49-51); the known technique of a product type includes one or more virtual item types (Vippagunta, see at least: “The electronic catalog content can include information about items, such as products and services. In one embodiment, this content is arranged in a hierarchical structure, having items associated with one or more categories or browse nodes in a hierarchy [i.e. a product type]” Col. 3 Ln. 24-28 and “a user interested in golf might choose a phrase such as "golf master" or "putting king." A user interested in gardening might pick a phrase such as "green thumb" or "rose garden." [i.e. includes one or more virtual item types]” Col. 4 Ln. 4-7 and “For a given word, at block 306, the word is added to an index as a key. 
An associated purchased item is added as a value of the key. Thus, if a phrase "golf master" [i.e. includes one or more virtual item types] was used to purchase a golf club, the word "golf" or the word "master" can be assigned as a key and the golf club item can be the value of the key. (As will be seen below with respect to blocks 308 and 310, both the words "golf" and "master" would be assigned as keys for the golf club item.) … Metadata stored with the purchase phrase word can include a venue where the purchase phrase was used, such as on the electronic catalog system 110 or on an affiliate site 190. This metadata can instead be associated with the purchased item, to indicate where the item was purchased. Other metadata that can be stored in association with a purchased item can include metadata related to item restrictions (such as movie ratings, song lyric ratings, etc.), prices of items, item categories [i.e. a product type includes one or more virtual item types]” Col. 7 Ln. 36-53 Examiner notes that a category such as gardening or golf includes multiple types of phrases such as "golf master" or "putting king" and "green thumb" or "rose garden" [i.e. a product type includes one or more virtual item types]); and the known technique of a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively (Vippagunta, see at least: “For a given word, at block 306, the word is added to an index as a key. An associated purchased item is added as a value of the key. Thus, if a phrase "golf master" was used to purchase a golf club, the word "golf" or the word "master" can be assigned as a key and the golf club item can be the value of the key [i.e. a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively]. (As will be seen below with respect to blocks 308 and 310, both the words "golf" and "master" would be assigned as keys for the golf club item.)” Col. 
7 Ln. 36-43 and “a list of purchased items and the purchase phrases used to purchase those items is obtained. This list can be obtained from the user data repository 142 and can be in the form of a comma-delimited list or the like, for example, <purchase phrase>, <item>. The item can be identified by a product number, title, or some other identifier [i.e. one or more item identities (IDs) corresponding to one or more actual items respectively]” Col. 7 Ln. 13-18 and “As some users may have purchased multiple items with their purchase phrases, the index can map a single purchase phrase to one or multiple purchased items [i.e. a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively]” Col. 6 Ln. 49-51). These known techniques are applicable to the method of Korpeoglu in view of Zhang as they both share characteristics and capabilities, namely, they are directed to a recommendation system. It would have been recognized that applying the known techniques of the first product data granularity is at a granularity level of product types; the second product data granularity is at a granularity level of virtual item types; a product type includes one or more virtual item types; and a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively, as taught by Vippagunta, to the teachings of Korpeoglu in view of Zhang would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such references into similar methods. 
Further, adding the modifications of the first product data granularity is at a granularity level of product types; the second product data granularity is at a granularity level of virtual item types; a product type includes one or more virtual item types; and a virtual item type includes one or more item identities (IDs) corresponding to one or more actual items respectively, as taught by Vippagunta, into the method of Korpeoglu in view of Zhang would have been recognized by those of ordinary skill in the art as resulting in an improved method that would provide targeted recommendations (Vippagunta, Col. 7 Ln. 54-55). Regarding claim 18, the combination of Korpeoglu/Zhang/Vippagunta teaches the method of claim 12. Korpeoglu further discloses: -the first machine learning model is trained based on data related to product types in historical user sessions and transactions of a plurality of customers (Korpeoglu, see at least: “the routine engine 210 generates categories of items by retrieving a customer's purchase history data from the purchase history database 206. The customer's purchase history data may include metadata of items purchased by the customer. Based on the customer's purchase history data, in one or more cases, the routine engine 210 may retrieve the descriptive information metadata for the items purchased by the customer from the item database 204 [i.e. the first machine learning model is trained based on data related to product types]. In one or more other cases, the routine engine 210 may extract the descriptive information metadata from the customer's purchase history data. The descriptive information metadata of an item may include, for example, but not limited to, a title of the item, a brand of the item, descriptive phrases of the item, and the like [i.e. based on data related to product types in historical user sessions]” [0036] and “The clustering algorithm may create a cluster around a group of categories that have complementary behavior. 
A cluster algorithm may be an unsupervised machine learning approach [i.e. the first machine learning model is trained] that divides data points into a number of groups, such that data points in the same groups are more similar to other data points in the same group than those in other groups” [0038] and “for a given item, the category information and the department information may be stored with the item as descriptive information metadata. For example, item 1 may be shampoo with a first brand name, and the corresponding category information may indicate that item 1 corresponds to the shampoo category and the department information may indicate that item 1 corresponds to the health, beauty, and personal care department. The routine engine 210 may aggregate the purchase history data for the customer to the session level [i.e. data related to product types in historical user sessions]” [0037] and “for the cases in which the routine engine 210 generates categories based on purchase history metadata for a group of customers [i.e. based on], the identified routines may be generalized routines for that group of customers that shop on the e-commerce website [i.e. transactions of a plurality of customers]” [0040]); and -the second model utilizes data related to virtual item types in historical user sessions and transactions of a plurality of customers (Korpeoglu, see at least: “the item recommendation engine 220 may apply an item recommendation model 218 [i.e. the second model utilizes] to the data of items within the relevant categories of the identified routine 212 [i.e. data related to virtual item types]. The item recommendation model 218 may utilize at least one of an item-to-item co-purchase model and a category trending model, such as a bought-also-bought model, a shop for model, and a trending model. The bought-also-bought model (BAB) and shop for model (SFM) may be item level complementary recommendation models. 
In one or more cases, the BAB model may be developed with customer historical transaction data. In one or more cases, the SFM model may be developed with customer historical transaction and customer view data [i.e. [utilizes] data related to virtual item types in historical user sessions and transactions of a plurality of customers]. In one or more cases, a trending model may be developed with customer historical transaction data to identify trending items for a given category” [0045] Examiner notes that the categories of the identified routine are related to generated routines such as “Complete your Showering Routine Essentials” in Figs. 5A and 5B [i.e. data related to virtual item types]). Korpeoglu does not explicitly disclose the first machine learning model and the second machine learning model are based on a same transformer architecture; and the second model being a second machine learning model that is trained. Zhang, however, teaches utilizing machine learning to make suggestions (i.e. abstract), including the known technique of the first machine learning model and the second machine learning model are based on a same transformer architecture (Zhang, see at least: “the method 400 may include creating cluster-specific models. For example, the site-wide neural language model may be separately tuned for each cluster based on one or more learning resources associated with the cluster [i.e. the first machine learning model and the second machine learning model are based on a same transformer architecture]. This results in a group of cluster-specific models including a separate model for each of the clusters” [0036] and "Because transformer-based neural language models [i.e. 
are based on a same transformer architecture] typically accept text of a fixed or limited length L (e.g., Bert model allows only 512 language tokens), and often the learning resources are longer than this fixed limit, each learning resource may usefully segmented into a number M of L-length token pieces, and these segments may be fed into the model, producing M neural embeddings" [0037]); and the known technique of the second model being a second machine learning model that is trained (Zhang, see at least: “the method 400 may include creating cluster-specific models. For example, the site-wide neural language model may be separately tuned for each cluster based on one or more learning resources associated with the cluster. This results in a group of cluster-specific models including a separate model for each of the clusters [i.e. the second model being a second machine learning model that is trained]” [0036]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Korpeoglu with Zhang for the reasons identified above with respect to claim 11.

Subject Matter Allowable Over the Prior Art
Dependent Claims 4-8 and 13-17

The following is an examiner’s statement of reasons for allowance over 35 U.S.C. §103: In the present application, dependent claims 4-8 and 13-17 would be allowable if rewritten or amended to overcome the 35 U.S.C. §101 rejection set forth in this Office action. The following is the Examiner's statement of reasons for allowance over the prior art: Regarding 35 U.S.C. §103, upon review of the evidence at hand, it is hereby concluded that the totality of the evidence, alone or in combination, neither anticipates, reasonably teaches, nor renders obvious the below noted features of the applicant’s invention. Dependent claims 4-8 and 13-17 are allowable over 35 USC § 103 as follows: The most relevant prior art made of record includes Korpeoglu et al. (US 2021/0233149 A1), Zhang et al. 
(US 2022/0027562 A1), Vippagunta et al. (US 8,301,514 B1), and Lin et al. (US 2025/0005279 A1). The combination of Korpeoglu/Zhang/Vippagunta teaches the system of claim 3 and the method of claim 12.

As written, dependent claims 4-8 (which depend from claim 3) and claims 13-17 (which depend from claim 12) require determin[ing] a set of product types of a retailer; determin[ing] a set of item IDs of the retailer, wherein each item ID belongs to one of the set of product types; generat[ing], using a large language model, a virtual catalog of virtual item types based on the set of product types and the set of item IDs; determin[ing], using the large language model, a first mapping between the set of product types and the virtual catalog of virtual item types; and generat[ing] a second mapping between the set of item IDs and the virtual catalog of virtual item types. The combination of Korpeoglu/Zhang/Vippagunta does not teach these limitations.
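The virtual-catalog limitation recited above can be made concrete with a minimal sketch. Everything here is hypothetical and only illustrates the shape of the claim: `stub_llm_virtual_types` is an invented stand-in for the claimed large language model call, and the product types and SKUs are made up; the point is the first mapping (product types → virtual item types) and the second mapping (item IDs → virtual item types).

```python
# Hypothetical sketch of the claimed virtual-catalog mappings. A hard-coded
# stub stands in for the large language model that, per the claims, would
# propose virtual item types from a retailer's product types and item IDs.

def stub_llm_virtual_types(product_types):
    """Stand-in for an LLM call; groups product types into coarser virtual item types."""
    # A real system would prompt an LLM with the product types and item IDs;
    # here we hard-code one plausible grouping for the invented inputs.
    return {
        "shampoo": "shower essentials",
        "body wash": "shower essentials",
        "towels": "bath linens",
    }

def build_mappings(product_types, item_ids_by_type):
    # First mapping: product type -> virtual item type (claimed to come from the LLM).
    virtual_by_product_type = stub_llm_virtual_types(product_types)
    # Virtual catalog: the distinct virtual item types the LLM produced.
    virtual_catalog = sorted(set(virtual_by_product_type.values()))
    # Second mapping: item ID -> virtual item type, via the item's product type.
    item_id_to_virtual = {
        item_id: virtual_by_product_type[ptype]
        for ptype, ids in item_ids_by_type.items()
        for item_id in ids
    }
    return virtual_catalog, virtual_by_product_type, item_id_to_virtual

catalog, first_map, second_map = build_mappings(
    ["shampoo", "body wash", "towels"],
    {"shampoo": ["sku-1"], "body wash": ["sku-2"], "towels": ["sku-3"]},
)
print(catalog)              # ['bath linens', 'shower essentials']
print(second_map["sku-1"])  # shower essentials
```

The sketch shows why the examiner treats the limitation as one unit: the second mapping is only meaningful relative to the same LLM-generated virtual catalog that produced the first.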
While Lin teaches receiving item data including item identifiers from third-party retailers, applying an unsupervised clustering algorithm to group attribute tuples based on their mutual similarities into a same cluster of a plurality of clusters, then using an LLM to refine the initial clustering and group attribute tuples from each cluster that represent the same attributes into sub-clusters, and normalizing a set of attribute tuples that are determined to represent the same attribute by mapping the set of attribute tuples to a common representative attribute tuple, Lin does not teach the limitations recited above.

Ultimately, the particular combination of limitations as claimed in dependent claims 4-8 and 13-17 is not anticipated nor rendered obvious in view of Korpeoglu, Zhang, Vippagunta, and Lin, and the totality of the prior art. While certain references may disclose more general concepts and parts of the claim, the prior art available does not specifically disclose the particular combination of these limitations. Korpeoglu, Zhang, Vippagunta, and Lin, alone or in combination, do not teach or suggest the claimed invention.

Examiner emphasizes that the prior art/additional art would only be combined and deemed obvious based on knowledge gleaned from the applicant's disclosure. Such a reconstruction is improper (i.e. hindsight reasoning). See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
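The clustering-and-normalization pattern the Office Action attributes to Lin can be sketched under loose assumptions. This is not Lin's method: a naive string-similarity threshold stands in for both the unsupervised clustering algorithm and the LLM refinement step, and the first member of each cluster stands in for the representative attribute tuple a real system would choose.

```python
# Hedged sketch of Lin-style attribute normalization: cluster similar
# attribute tuples, then map every member of a cluster to one representative
# tuple. difflib's ratio() is a crude similarity proxy for illustration only.

from difflib import SequenceMatcher

def cluster_attributes(tuples, threshold=0.8):
    """Greedily group (name, value) tuples whose values are similar strings."""
    clusters = []
    for t in tuples:
        for cluster in clusters:
            if SequenceMatcher(None, t[1].lower(), cluster[0][1].lower()).ratio() >= threshold:
                cluster.append(t)
                break
        else:
            clusters.append([t])
    return clusters

def normalize(tuples):
    """Map each tuple to its cluster's representative (here: the first member)."""
    mapping = {}
    for cluster in cluster_attributes(tuples):
        representative = cluster[0]
        for t in cluster:
            mapping[t] = representative
    return mapping

attrs = [("color", "Red"), ("color", "red"), ("size", "Large")]
norm = normalize(attrs)
print(norm[("color", "red")])  # ('color', 'Red')
```

Note the structural gap the examiner relies on: this normalizes existing attribute vocabulary toward a canonical form, whereas the claims generate a new virtual catalog and two distinct mappings from it.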
Cited NPL Gatzioura (reference U cited 03/07/2026 in PTO-892) teaches a recommender that uses a hierarchical model for the items and searches for similar sets of items, in order to recommend those that are most likely to satisfy a user, but does not teach or suggest the recited claims.

The Examiner further emphasizes the claims as a whole and hereby asserts that the totality of the evidence fails to set forth, either explicitly or implicitly, an appropriate rationale for further modification of the evidence at hand to arrive at the claimed invention. The combination of features as claimed would not be obvious to one of ordinary skill in the art, as combining various references from the totality of evidence to reach the claimed combination would be a substantial reconstruction of Applicant's claimed invention relying on improper hindsight bias. It is thereby asserted by Examiner that, in light of the above and further deliberation over all of the evidence at hand, the claims are allowable, as the evidence at hand does not anticipate the claims and does not render obvious any further modification of the references to a person of ordinary skill in the art.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

- Archak et al. (US 2024/0394771 A1) teaches automatically generating a basket of items from different categories to be recommended to a user of an online system.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARIELLE E WEINER whose telephone number is (571)272-9007. The examiner can normally be reached M-F 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Maria-Teresa (Marissa) Thein, can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ARIELLE E WEINER/
Primary Examiner, Art Unit 3689

Prosecution Timeline

Jan 31, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586112
SYSTEMS, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUMS, AND METHODS FOR OBTAINING PRODUCT INFORMATION VIA A CONVERSATIONAL USER INTERFACE
2y 5m to grant Granted Mar 24, 2026
Patent 12579568
METHODS AND SYSTEMS FOR ADAPTIVE COLLABORATIVE MATCHING
2y 5m to grant Granted Mar 17, 2026
Patent 12561734
SYSTEMS, METHODS, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR RECOMMENDING 2D IMAGE
2y 5m to grant Granted Feb 24, 2026
Patent 12530713
SYSTEMS, METHODS, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUMS FOR SELECTION OF CANDIDATE CONTENT ITEMS
2y 5m to grant Granted Jan 20, 2026
Patent 12530708
KNOWLEDGE SEARCH ENGINE METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR ENHANCED BUSINESS LISTINGS
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
42%
Grant Probability
95%
With Interview (+52.2%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 229 resolved cases by this examiner. Grant probability derived from career allow rate.
