Prosecution Insights
Last updated: April 19, 2026
Application No. 18/990,154

GENERATING PERSONALIZED CONTENT CAROUSELS AND ITEMS USING MACHINE-LEARNING LARGE LANGUAGE MODELS AND EMBEDDINGS

Non-Final OA: §101, §103
Filed: Dec 20, 2024
Examiner: ANSARI, AZAM A
Art Unit: 3621
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Maplebear Inc.
OA Round: 1 (Non-Final)

Grant Probability: 48% (Moderate)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 48% (162 granted / 338 resolved; -4.1% vs TC avg)
Interview Lift: +49.7% (strong) in resolved cases with an interview vs. without
Typical Timeline: 3y 8m avg prosecution; 38 applications currently pending
Career History: 376 total applications across all art units

Statute-Specific Performance

§101: 34.2% (-5.8% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 338 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

This Office action is in response to the applicant's filing of 12/20/2024. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.

Step 1: Under Step 1 of the test for patent subject matter eligibility (see the 2019 Revised Patent Subject Matter Eligibility Guidance), claims 1-20 qualify, as they are directed to a process, machine, manufacture, or composition of matter: claims 1-9 recite a method, claims 10-16 recite a non-transitory computer-readable medium, and claims 17-20 recite a system. When assessed under Step 2A, Prong I, however, they are found to be directed to an abstract idea. The rationale for this finding is explained below.

Step 2A, Prong I: Under Step 2A, Prong I, claims 1, 10, and 17 are directed to an abstract idea without significantly more, as they all recite a judicial exception.
Claims 1, 10, and 17 recite limitations directed to the abstract idea, including “identifying an opportunity to display one or more content carousels for a user; generating a first prompt based on user data of the user, wherein the first prompt requests one or more carousel themes; receiving the one or more carousel themes; generating one or more second prompts based on the one or more carousel themes, wherein for each carousel theme, a respective second prompt requests a set of content names associated with the carousel theme; receiving the set of content names associated with each carousel theme; for each carousel theme, providing the carousel theme or the set of content names for the carousel theme to generate embeddings for the set of content names; for each carousel theme, identifying a set of content items associated with the carousel theme by comparing the embeddings for the set of content names with embeddings for the set of content items; and transmitting instructions to cause presentation including the set of content items for the one or more carousel themes as content carousels”. These limitations are not seen as anything more than the judicial exception. Claims 1, 10, and 17 also recite additional limitations, including “for/from a machine-learning model; to an embedding model; to a computing device associated with the user; and of a user interface (UI)”.
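For orientation, the claimed flow (first prompt for carousel themes, second prompts for content names per theme, then embedding comparison to pick content items) can be sketched as follows. This is an illustrative sketch only, not the applicant's implementation; the `llm` and `embed` callables and the 0.8 similarity cutoff are hypothetical stand-ins.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_carousels(user_data, llm, embed, catalog, cutoff=0.8):
    """Sketch of the claimed steps: prompt an LLM for carousel themes,
    prompt again for content names per theme, embed the names, and
    match catalog items by embedding similarity."""
    # First prompt: request carousel themes based on user data
    themes = llm(f"Suggest carousel themes for: {user_data}")
    carousels = {}
    for theme in themes:
        # Second prompt: request content names for this theme
        names = llm(f"List content names for theme: {theme}")
        name_vecs = [embed(n) for n in names]
        # Identify content items by comparing name and item embeddings
        carousels[theme] = [
            item for item, item_vec in catalog
            if any(cosine(v, item_vec) >= cutoff for v in name_vecs)
        ]
    return carousels
```

In practice the catalog embeddings would be precomputed and the comparison done with an approximate-nearest-neighbor index rather than a linear scan; the claim language is agnostic on that point.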
The claims are considered to be an abstract idea under certain methods of organizing human activity because they are directed to commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), such as identifying an opportunity to display one or more content carousels for a user, generating a prompt for carousel themes, generating a prompt for content names, generating embeddings for content names and carousel themes, identifying content items based on embeddings, and transmitting content items to the user as a carousel.

The claims are also considered to be an abstract idea under mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), because the claims are directed to identifying data (i.e., an opportunity to display content carousels), generating data based on a model (i.e., prompts for carousel themes and for content names, based on a machine-learning model), receiving data (i.e., carousel themes and content names corresponding to the respective prompts), generating data based on a model (i.e., embeddings for carousel themes and content names, based on an embedding model), identifying data by comparing data (i.e., content items based on a comparison of embeddings), and transmitting data (i.e., content items as content carousels). Therefore, under Step 2A, Prong I, claims 1, 10, and 17-20 are directed towards an abstract idea.

Step 2A, Prong II: Step 2A, Prong II determines whether any claim recites any additional element that integrates the judicial exception (abstract idea) into a practical application.
Claims 1, 10, and 17 recite additional limitations including “for/from a machine-learning model; to an embedding model; to a computing device associated with the user; and of a user interface (UI)”. These additional limitations are seen as adding the words “apply it” (or an equivalent) to the judicial exception, or as mere instructions to implement an abstract idea on a computer, or as merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, alone and in combination, these additional elements are seen as using a computer as a tool to perform an abstract idea and as adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use (i.e., a machine-learning or embedding model and the UI of a computing device), and therefore do not integrate the abstract idea into a practical application. Although such additional elements limit the use of the abstract idea, the courts have explained that this type of limitation merely confines the abstract idea to a particular technological environment and fails to add an inventive concept to the claims (see Affinity Labs of Texas v. DirecTV, LLC). Under Step 2A, Prong II, these claims remain directed towards an abstract idea.

Step 2B: Claims 1, 10, and 17 recite additional limitations including “for/from a machine-learning model; to an embedding model; to a computing device associated with the user; and of a user interface (UI)”. These additional limitations do not integrate the judicial exception (abstract idea) into a practical application, for the reasons given in the Step 2A, Prong II analysis. Claims 1, 10, and 17 do not include additional elements, or a combination of elements, that result in the claims amounting to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements listed amount to no more than mere instructions to apply an exception using a generic computer component. In addition, the applicant’s specification describes “a personal or mobile computing device, such as a smartphone, a tablet, a laptop computer, or desktop computer” (¶ [0015]) for implementing the computing device, which does not amount to significantly more than the abstract idea itself and is not enough to transform an abstract idea into eligible subject matter. Furthermore, there is no improvement in the functioning of the computer or the technological field, and there is no transformation of subject matter into a different state. Under Step 2B of the test for patent subject matter eligibility, these claims are not patent eligible.

Dependent claims 2-9, 11-16, and 18-20 further recite the method, computer-readable medium, and system of claims 1, 10, and 17, respectively. When analyzed as a whole, dependent claims 2-9, 11-16, and 18-20 are held to be patent ineligible under 35 U.S.C. 101 because their additional recited limitations fail to establish that the claims are not directed to an abstract idea. Under Step 2A, Prong I, these claims only further narrow the abstract idea set forth in claims 1, 10, and 17. For example, claims 2-9, 11-16, and 18-20 describe limitations for identifying an opportunity to display one or more content carousels for a user, generating a prompt for carousel themes, generating a prompt for content names, generating embeddings for content names and carousel themes, identifying content items based on embeddings, and transmitting content items to the user as a carousel, which only further narrows the scope of the abstract idea recited in the independent claims. Under Step 2A, Prong II, dependent claims 2-9, 11-16, and 18-20 introduce no additional elements.
For example, dependent claims 3, 7, 12, 16, and 18 recite additional limitations including “from a content database; for/from a machine-learning model; to a computing device associated with the user; and of a user interface (UI)”. These additional limitations are seen as adding the words “apply it” (or an equivalent) to the judicial exception, or as mere instructions to implement an abstract idea on a computer, or as merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, alone and in combination, these additional elements are seen as using a computer as a tool to perform an abstract idea and as adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use (i.e., a database, a machine-learning model, and the UI of a computing device), and therefore do not integrate the abstract idea into a practical application. Although such additional elements limit the use of the abstract idea, the courts have explained that this type of limitation merely confines the abstract idea to a particular technological environment and fails to add an inventive concept to the claims (see Affinity Labs of Texas v. DirecTV, LLC). Under Step 2B, the dependent claims do not include any additional elements sufficient to amount to significantly more than the judicial exception.
For example, dependent claims 2, 5, 6, 11, 14, 15, and 20 recite additional limitations including “wherein the machine-learning model is trained by: constructing a training example including a training prompt including user data for a second user and at least one carousel theme for the second user; applying parameters of the machine-learning model to the training prompt to generate estimated outputs; computing a loss function indicating a difference between the estimated outputs and the at least one carousel theme; and backpropagating one or more terms from the loss function to update the parameters”. These limitations merely recite training a machine-learning model by applying conventional techniques, such as computing a loss function and backpropagation (see ¶ [0042] of U.S. Publication 2018/0308000 to Dukatz). Additionally, there is no improvement in the functioning of the computer or the technological field, and there is no transformation of subject matter into a different state. As discussed above with respect to integration of the abstract idea into a practical application, these claims do not provide any additional elements that would amount to significantly more than the judicial exception. Under Step 2B, these claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."
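As a technical aside on the training limitation recited in dependent claims 2, 5, 6, 11, 14, 15, and 20 (apply parameters to a training prompt, compute a loss against the target theme, backpropagate to update the parameters): this is the standard gradient-descent pattern the examiner calls conventional. The sketch below uses a toy linear model with mean-squared-error loss purely for illustration; it is not the applicant's model, and the shapes and learning rate are arbitrary.

```python
import numpy as np

def train_step(params, prompt_vec, target_vec, lr=0.1):
    """One training update of the recited form: apply parameters to a
    (vectorized) training prompt, compute a loss indicating the
    difference from the target theme vector, and backpropagate."""
    est = params @ prompt_vec                          # estimated outputs
    err = est - target_vec
    loss = float(np.mean(err ** 2))                    # MSE loss
    grad = 2.0 * np.outer(err, prompt_vec) / err.size  # dLoss/dParams
    params = params - lr * grad                        # gradient update
    return params, loss
```

Repeating `train_step` drives the loss toward zero for this linear model; in a real LLM fine-tuning setup the same loop shape holds, with the gradient computed by automatic differentiation.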
Claim(s) 1, 3, 4, 7-10, 12, 13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication 2018/0349485 to Carlisle in view of U.S. Publication 2022/0230226 to Shahrasbi. With respect to Claim 1: Carlisle teaches: A method comprising: identifying an opportunity to display one or more content carousels for a user (i.e. receiving request for new content to be displayed as a carousel or galaxy scroll view) (Carlisle: ¶ [0471] “In step 1020, user system 130 requests new content for a screen from platform 110. User system 130 may send such a request whenever new content from a source needs to be populated into a screen. The content may be requested for an entire screen, such as a module screen 330 or 338 (e.g., in response to initiation of a new app module), or for a portion of a screen, such as a content block 351 or 352 in content feed 350 ( e.g., in response to initiation of content feed 350, or the addition of a new content block 351 or 352 to an existing content feed 350).” Furthermore, as cited in ¶ [0115] “As illustrated, snapshot-results screen 312 comprises a galaxy scroll interface 314 for each of the selected categories. Each galaxy scroll interface comprises a "carousel" of search results. Initially, the top search result for the category may be visually represented in the center position of the carousel. The visual representation of the top search result may be the same as the visual representation for the link 310 which represented the category on category-snapshot screen 308.”); generating a first prompt for a machine-learning model based on user data of the user, wherein the first prompt for the machine-learning model requests one or more carousel themes (i.e. 
generating inputs for an artificial intelligence model based on user search terms in order to determine content categories in a carousel or galaxy scroll view) (Carlisle: ¶ [0109] “The categories to be included in category-snapshot screen 308 may, at least in part, be determined, by the artificial intelligence (e.g., the predictive model described herein), based on the search terms. As an example, if the user inputs the search terms "Barcelona, Spain," the application may automatically determine that the search terms relate to a particular location (i.e., a city) which is not already the user's current location, and include categories relating to that location and travel, such as a "food" category for search results relating to restaurants in or around Barcelona, a "fly" category for search results relating to travel (e.g., available flights, ticket purchases, etc.) to Barcelona, a "stay" category for search results relating to accommodations (e.g., available hotel rooms, bed and breakfasts, etc.) available in or around Barcelona, and/or the like. In addition, other categories, included in category-snapshot screen 308, may comprise a "people" category for people, within the user's social network, who are currently residing or are traveling in or around Barcelona, a "newsfeed" category for search results relating to news about Barcelona, a "recommended" category for recommend search results pertaining to Barcelona, a "popular" category for popular search results pertaining to Barcelona, a "highest-rated" category for the highest-rated search results pertaining to Barcelona, a "featured" category for featured search results pertaining to Barcelona, and so on and so forth.”); receiving, from the machine-learning model, the one or more carousel themes (i.e.
receiving, from the artificial intelligence model, content categories wherein the content categories are displayed via carousel or galaxy scroll view) (Carlisle: ¶ [0109] “The categories to be included in category-snapshot screen 308 may, at least in part, be determined, by the artificial intelligence ( e.g., the predictive model described herein), based on the search terms. As an example, if the user inputs the search terms "Barcelona, Spain," the application may automatically determine that the search terms relate to a particular location (i.e., a city) which is not already the user's current location, and include categories relating to that location and travel, such as a "food" category for search results relating to restaurants in or around Barcelona, a "fly" category for search results relating to travel ( e.g., available flights, ticket purchases, etc.) to Barcelona, a "stay" category for search results relating to accommodations ( e.g., available hotel rooms, bed and breakfasts, etc.) available in or around Barcelona, and/or the like. In addition, other categories, included in category-snapshot screen 308, may comprise a "people" category for people, within the user's social network, who are currently residing or are traveling in or around Barcelona, a "newsfeed" category for search results relating to news about Barcelona, a "recommended" category for recommend search results pertaining to Barcelona, a "popular" category for popular search results pertaining to Barcelona, a "highest-rated" category for the highest-rated search results pertaining to Barcelona, a "featured" category for featured search results pertaining to Barcelona, and so on and so forth.” Furthermore, as cited in ¶ [0115] “As illustrated, snapshot-results screen 312 comprises a galaxy scroll interface 314 for each of the selected categories. Each galaxy scroll interface comprises a "carousel" of search results. 
Initially, the top search result for the category may be visually represented in the center position of the carousel. The visual representation of the top search result may be the same as the visual representation for the link 310 which represented the category on category-snapshot screen 308.”); generating one or more second prompts for the machine-learning model based on the one or more carousel themes, wherein for each carousel theme, a respective second prompt for the machine-learning model requests a set of content names associated with the carousel theme (i.e. generating, using the artificial intelligence model and based on carousel themes, search results or content names in a carousel or galaxy scroll view) (Carlisle: ¶¶ [0115] [0116] “As illustrated, snapshot-results screen 312 comprises a galaxy scroll interface 314 for each of the selected categories. Each galaxy scroll interface comprises a "carousel" of search results. Initially, the top search result for the category may be visually represented in the center position of the carousel. The visual representation of the top search result may be the same as the visual representation for the link 310 which represented the category on category-snapshot screen 308…Whichever search result is represented in the center position of the carousel may be distinguished from the other search results in other positions in the carousel (e.g., by a highlighted border, an enlarged size, etc.), and text describing that centrally-positioned search result may be displayed above and/or below the carousel within the galaxy scroll interface 314. 
For example, if galaxy scroll interface 314A represents the "fly" category of a search for "Barcelona, Spain," the centrally-positioned search result may be an available airline ticket to Barcelona, and the description may comprise the airline information ( e.g., departure and arrival airports, departure and arrival times, cost of the ticket, duration of the flight, number of layovers, etc.). If galaxy scroll interface 314B represents the "snapshot" category, the centrally-positioned search result may be information about Barcelona, and the description may comprise a synopsis or portion of that information ( e.g., conveying that Barcelona is the capital city of Catalonia in Spain). If galaxy scroll interface 314C represents the "people" category, the centrally-positioned search result may be a personal profile of someone within the user's social network who lives in Barcelona, and the description may comprise information about that person ( e.g., name, position, relationship to the user, etc.).”); receiving, from the machine-learning model, the set of content names associated with each carousel theme (i.e. receiving search result names associated with content categories that are displayed via carousel or galaxy scroll view) (Carlisle: ¶ [0117] “The text used to describe the centrally-positioned search result may be biased by the artificial intelligence ( e.g., the predictive model described herein) for the user, for example, to present the information that the specific user, viewing the galaxy scroll interface 314, will find most relevant. In addition, it should be understood that the search results may be biased by the artificial intelligence, described elsewhere herein. 
For example, the airline information, provided in the "fly" category, may be biased towards presenting available airline tickets for a window seat (e.g., if the user's biases indicate that the user prefers a window seat), a middle seat (e.g., if the user's biases indicate that the user is an extrovert), or an aisle seat ( e.g., if the user's biases indicate that the user has an enlarged prostate or other disorder that may require frequent bathroom trips), depending on the particular user's preference.”); for each carousel theme, identifying a set of content items associated with the carousel theme by comparing the embeddings for the set of content names with embeddings for the set of content items (i.e. for each theme, identifying content items based on comparing embedded data comprising the search result or content names and content categories in carousel or galaxy scroll view) (Carlisle: ¶ [0217] “In an embodiment, content feed 350 may be a set of search results. For example, a user could input search terms ( e.g., using search input 306 or via voice input), while viewing region 348. In response to submission of the search terms, the application may input the search terms to a search engine which produces relevant (e.g., user-biased) results. The application may generate a content block 352 for each search result, and populate a content feed 350 with content blocks 352, representing all search results or a top number of search results (e.g., top five, top ten, etc.). 
In an embodiment, content feed 350 may be used similarly to the snapshot category described elsewhere herein, such that each content block 352 represents the top search result for a different category of content ( e.g., where the categories to be represented by a content block 352 in content feed 350 are determined in the same manner as the categories to be included in category-snapshot screen 308, or in some other manner based on user biases and/or the artificial intelligence described elsewhere herein, for example, the predictive model described herein).” Furthermore, as cited in ¶ [0408] “In an embodiment, the matching algorithm may be configured to optimize matches based on one or more criteria. For example, the matching algorithm may prioritize matches by proximity, cheapest value, congruence, and/or the like. It should be understood that the matching algorithm may rank matches according to multiple prioritizations or weightings. For example, the matching algorithm may score each match based on weightings assigned to two or more attributes of a match ( e.g., degree of separation between users, amount of money involved, equivalence between the need and offer, etc.), and select the match with the highest score as the one to be presented to the users.”); and transmitting, to a computing device associated with the user, instructions to cause presentation of a user interface (UI) including the set of content items for the one or more carousel themes as content carousels (i.e. transmitting the search results to the user wherein the search results are displayed in a carousel or galaxy scroll view) (Carlisle: ¶ [0118] “A user may navigate through the search results by "spinning" the carousel. For example, the user may spin the carousel to the right by swiping right on the carousel and spin the carousel to the left by swiping left on the carousel. 
When the user spins the carousel one position to the right, the visual representation of the search result, positioned immediately to the left of the center position, will move into the center position, while the visual representation of the search result, positioned in the center position, will move to the position immediately to the right of the center position. In other words, all of the visual representations of search results in the carousel will shift one position to the right. In addition, if the amount of search results in the corresponding category is greater than the amount of search results that can be displayed in the carousel at a given time, the visual representation of a new search result will appear at the left-most position in the carousel, while the visual representation of the search result in the right-most position of the carousel disappears. When the user spins the carousel one position to the left, the carousel will change in a similar manner, except that each visual representation of a search result will shift one position to the left, and a new visual representation may appear at the right-most position of the carousel, while the visual representation in the left-most position of the carousel disappears.”). Carlisle does not explicitly disclose for each carousel theme, providing, to an embedding model, the carousel theme or the set of content names for the carousel theme to generate embeddings for the set of content names. However, Shahrasbi further discloses for each carousel theme, providing, to an embedding model, the carousel theme or the set of content names for the carousel theme to generate embeddings for the set of content names (i.e. 
for each categorical attribute, providing to an embedding model, the categorical attribute to generate embeddings between product and content) (Shahrasbi: ¶¶ [0076] [0077] “Categorical attribute embeddings data 523 may be generated, for example, based on an embedding model that processes text, such as a category of an item. The embedding model's input may be the name of the category of the item, for example, ( e.g., text corpus), and the output may be an embedding vector…Categorical attribute embedding similarity determination engine 524 may generate item advertisement recommendation data 305 based on a similarity between categories identified by categorical attribute embeddings data 523 and categories of recommended items for items corresponding to interaction prediction data 513. For example, categorical attribute embedding similarity determination engine 524 may generate character embeddings for categories of items corresponding to interaction prediction data 513.”). Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to add Shahrasbi’s for each carousel theme, providing, to an embedding model, the carousel theme or the set of content names for the carousel theme to generate embeddings for the set of content names to Carlisle’s transmitting, to a computing device associated with the user, instructions to cause presentation of a user interface (UI) including the set of content items for the one or more carousel themes as content carousels. 
One of ordinary skill in the art would have been motivated to do so in order for “automatically determining and providing digital item advertisements that may be displayed, for example, on a website” to “allow a person, such as a customer, to be presented with advertisements that may be more relevant to (e.g., likely to interest) the person”, “allow the person to view advertisements that the person may be more willing to purchase”, and “allow a retailer to increase item advertisement conversions (e.g., an amount of advertised items sold)” (Shahrasbi: ¶ [0005]).

With respect to Claims 10 and 17: All limitations as recited have been analyzed and rejected with respect to claim 1. Claim 10 recites “A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising:” (Carlisle: ¶¶ [0042]-[0048]) the steps of method claim 1. Claim 17 recites “A system comprising: a processor; and a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising:” (Carlisle: ¶¶ [0042]-[0048]) the steps of method claim 1. Claims 10 and 17 do not teach or define any new limitations beyond claim 1; therefore, they are rejected under the same rationale.

With respect to Claim 3: Carlisle does not explicitly disclose the method of claim 1, wherein identifying the set of content items associated with the carousel theme further comprises: generating distances between an embedding for a content name and embeddings for a subset of content items; and retrieving the subset of content items from a database responsive to identifying that the distances are within a determined threshold.
However, Shahrasbi further discloses: generating distances between an embedding for a content name and embeddings for a subset of content items (i.e. generating distances between embeddings between item category and content item) (Shahrasbi: ¶ [0077] “Categorical attribute embedding similarity determination engine 524 may compute a distance between each generated recommended item character embedding and categorical attribute embeddings data 523. The distance may be based on a computed similarity (e.g., cosine similarity) between the embedding of an anchor item's category (e.g., as identified by categorical attribute embeddings data 523) and the embedding of recommendation item's category.” Furthermore, as cited in ¶ [0085] “At step 714, a distance between the item category of each of the plurality of recommended items and the categorical attribute embedding data 523 is determined.”); and retrieving the subset of content items from a database responsive to identifying that the distances are within a determined threshold (i.e. retrieving content items to be included in advertisement in response to the distance satisfying threshold) (Shahrasbi: ¶¶ [0078] [0079] “If some examples, at least one distance is below a threshold, the recommended item corresponding to interaction prediction data 513 is allowed ( e.g., is identified by item advertisement recommendation data 305). Otherwise, if no distance (for a given recommended item) is below the threshold, the recommended item corresponding to interaction prediction data 513 is not allowed ( e.g., is not identified by item advertisement recommendation data 305)…In some examples, a mean similarity of different categorical attributes (e.g., primary shelf, reporting hierarchy, product type) of the anchor item and the recommended item is computed. If the mean is above a threshold, the recommendation is allowed. 
Otherwise, if the mean similarity is below the threshold, the recommendation item is not allowed.” Furthermore, as cited in ¶ [0086] “At step 716, a determination is made as to whether at least one determined distance is within a threshold. If at least one computed distance is within the threshold, the method proceeds to step 718. At step 718, item advertisement recommendation data 305 is generated indicating that the recommended item is to be advertised.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Shahrasbi’s generating distances between an embedding for a content name and embeddings for a subset of content items; and retrieving the subset of content items from a database responsive to identifying that the distances are within a determined threshold to Carlisle’s transmitting, to a computing device associated with the user, instructions to cause presentation of a user interface (UI) including the set of content items for the one or more carousel themes as content carousels. One of ordinary skill in the art would have been motivated to do so in order for “automatically determining and providing digital item advertisements that may be displayed, for example, on a website” to “allow a person, such as a customer, to be presented with advertisements that may be more relevant to (e.g., likely to interest) the person”, “allow the person to view advertisements that the person may be more willing to purchase”, and “allow a retailer to increase item advertisement conversions (e.g., an amount of advertised items sold)” (Shahrasbi: ¶ [0005]). With respect to Claims 12 and 18: All limitations as recited have been analyzed and rejected with respect to claim 3. Claims 12 and 18 do not recite any new limitations beyond claim 3. Therefore, they are rejected under the same rationale. 
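For orientation, the embedding-distance mechanism for which Shahrasbi is cited (¶¶ [0077]-[0079]) can be sketched as follows. This is an illustrative toy example, not code from either reference; the function names, the use of cosine distance (1 − cosine similarity), and the threshold value are all assumptions for demonstration.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_similar_items(theme_embedding, item_embeddings, threshold=0.5):
    # Keep items whose cosine distance to the theme-name embedding is
    # within the threshold, analogous to the distance/threshold check in
    # Shahrasbi ¶¶ [0077]-[0079]; the threshold of 0.5 is hypothetical.
    results = []
    for item_id, embedding in item_embeddings.items():
        distance = 1.0 - cosine_similarity(theme_embedding, embedding)
        if distance <= threshold:
            results.append((item_id, distance))
    # Return the retained items ordered from nearest to farthest.
    return sorted(results, key=lambda pair: pair[1])
```

Under this sketch, an item whose embedding points in the same direction as the theme embedding has distance 0 and is retained, while an orthogonal item has distance 1 and is filtered out.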
With respect to Claim 4: Carlisle teaches: The method of claim 1, wherein identifying a content item in the set of content items comprises identifying: a recipe page, a coupon, an incentive, an advertisement, a sponsored page, or a sponsored item (i.e. content includes recipe page, promotions, advertisements, and sponsored content) (Carlisle: ¶ [0085] “The rewards-themed screen may track and display a multitude of different rewards, received by the user, in all areas of growth and contribution. The rewards-themed screen may comprise links to information about ratings, recognition (e.g., the recognition-themed screen), reward tokens and/or tiers, access, and/or the like. Ratings, recognition, and rewards may be achieved by the user via his or her activity, interactions, referrals, contributions, creative contributions, service, and/or the like, within the application. A user's overall rating and/or rewards may be determined by the user's growth in awareness, completion of specific content or activities, activities, interactions, recommendations, referrals, offers, donations, time spent with the application, contribution to others, contributions to companies, contributions to charities, contributions to the environment and/or the planet, and/or the like. Access refers to a user's access, promotion to, or introductions to various specific types of (e.g., levels) of people, businesses, resources, exposure, promotion, recommendations, gifts, financing, support, donations, advertising, cost-savings, functions, and/or the like within the application (e.g., based on the user's current reward tier).” Furthermore, as cited in ¶ [0233] “In an embodiment, a user can access various content-feed-related searches. For example, a user may search the user's specified areas of interest and feed the search results into the user's content feed (e.g., by content source, category of content, etc.). 
In this manner, the user could search a personal newsfeed, preselected areas of interest, preferred sources of information, past notifications, past broadcasts, trending news (e.g., by area of interest, friend, family, celebrity, company, charity, or other organization, team, community, or other group, general, etc.), advertising (e.g., by area of interest or specified by the user, company, prior broadcasts, prior search queries, prior notifications, etc.), recommended sources of information, most popular sources of information, and/or the like.” Furthermore, as cited in ¶ [0344] “These biases may include the user's preferences, interests, activities, history, and/or any other user-specific information acquired from his or her use of the application (e.g., as defined in the user's descriptive user data model 405). Using the example above, if the search terms include the term "screwdriver," and this term was determined in step 515 to refer to the cocktail, the search context may include the user's current or future location (e.g., relevant to where the user may obtain a screwdriver cocktail), the user's favorite restaurants or bars at which a screwdriver cocktail may be purchased and within a vicinity of the user's location, people within the user's social network within a vicinity of the user's location (e.g., relevant to possible drinking companions), taxi or ride-sharing services that service the user's vicinity (e.g., including a peer-to-peer ride-sharing service from another user that may be booked via process 740 described with respect to FIGS. 7B-7C), hangover cures (e.g., pharmacies or locations at which Advil™ can be purchased, food recipes, gym locations, etc.), screwdriver cocktail recipes, and/or the like.”). With respect to Claims 13 and 19: All limitations as recited have been analyzed and rejected with respect to claim 4. Claims 13 and 19 do not recite any new limitations beyond claim 4. Therefore, they are rejected under the same rationale. 
With respect to Claim 7: Carlisle teaches: The method of claim 1, further comprising: receiving, from the user, a selection of a content item from a content database (i.e. receiving user feedback in response to content item) (Carlisle: ¶ [0338] “In an embodiment, the artificial intelligence may be trained for a particular user, at least in part, by that user's responses to prompts. For example, after the artificial intelligence injects what it perceives to be user biases into a particular function (e.g., search, advertising, etc.), the application may prompt the user (e.g., via a pop-up overlay in the graphical user interface, via feedback inputs associated with each search result, advertisement, etc.) to indicate whether or not the functional result (e.g., search results, advertisement, etc.) of the user-biased function was useful (e.g., via one or more inputs within the pop-up overlay). If the user responds by indicating that the functional result was not useful, the application may adjust its artificial-intelligence algorithm for the user to eliminate a same or similar functional result in the future. On the other hand, if the user responds by indicating that the functional result was useful, the application may not adjust its artificial-intelligence algorithm for the user or may adjust its artificial-intelligence algorithm to reinforce a same or similar functional result in the future. The application may prompt the user for such feedback for as long as the user utilizes the application or for a set period of time. In addition, the application may only prompt the user for such feedback when it is likely to improve the artificial intelligence for the user.”); generating a set of filters for the user based on a set of user preferences associated with the user (i.e. 
generating filters based on user’s preference or bias with respect to content) (Carlisle: ¶ [0260] “In an embodiment, the graphical user interface may comprise one or more screens by which a user can view, filter, and/or search his or her social network, including the user's personal network (e.g., friends and family) and business network (e.g., coworkers). The screen may list a description for each contact, including, for example, the contact's thumbnail image or avatar, name, profession, employer, and/or the like, along with a link for contacting the contact (e.g., using a messaging function provided by the application).” Furthermore, as cited in ¶ [0347] “In step 535, the artificial intelligence of the application may filter the best-fit, user-biased search results. While any type of filtering may be used, in an embodiment, the search results are filtered based on sentiment. Specifically, search results that reflect a negative sentiment may be eliminated, such that only search results with a neutral and/or positive sentiment are ever presented to the user. Thus, all negative content can be filtered out, and never shown to the user, such that the user only reviews positive content in his or her search results and/or content feeds.”); receiving, from the user, a selection of a filter from the set of filters (i.e. user provides preference data or selects filters) (Carlisle: ¶ [0260] “In an embodiment, the graphical user interface may comprise one or more screens by which a user can view, filter, and/or search his or her social network, including the user's personal network (e.g., friends and family) and business network (e.g., coworkers). 
The screen may list a description for each contact, including, for example, the contact's thumbnail image or avatar, name, profession, employer, and/or the like, along with a link for contacting the contact (e.g., using a messaging function provided by the application).” Furthermore, as cited in ¶ [0347] “In step 535, the artificial intelligence of the application may filter the best-fit, user-biased search results. While any type of filtering may be used, in an embodiment, the search results are filtered based on sentiment. Specifically, search results that reflect a negative sentiment may be eliminated, such that only search results with a neutral and/or positive sentiment are ever presented to the user. Thus, all negative content can be filtered out, and never shown to the user, such that the user only reviews positive content in his or her search results and/or content feeds.”); generating a third prompt for the machine-learning model based on the selected content item and the selected filter, wherein the third prompt for the machine-learning model requests an adaptation to the selected content item based on the selected filter (i.e. prompt for artificial intelligence model is based on selected filters/preferences which is used by the artificial intelligence model to adapt or filter out certain content) (Carlisle: ¶ [0345] “In step 525, the artificial intelligence of the application performs a simultaneous or contemporaneous search across all sources available to the application, using the user-biased search context determined in step 520. The application may train a predictive model for each user, using a user-biased training set, such that the search results for a particular user will be different than the search results for a different user. To generate the predictive model, the application may utilize feature selection to automatically identify and select features (i.e., data) that are most relevant to the user-biased search results. 
The application may then perform a gradient descent to solve a linear regression on the selected features, to generate an algorithm representing the predictive model for the particular user. This predictive model represents the artificial intelligence used for identifying user-biased search results for the user. The predictive model may be continually trained, as the user's biases change (e.g., as the user continues to interact with the application), so that it evolves with the user. The output of step 525 is a set of user-biased search results, produced by the artificial intelligence (e.g., predictive model for the particular user), from all available sources.” Furthermore, as cited in ¶ [0347] “In step 535, the artificial intelligence of the application may filter the best-fit, user-biased search results. While any type of filtering may be used, in an embodiment, the search results are filtered based on sentiment. Specifically, search results that reflect a negative sentiment may be eliminated, such that only search results with a neutral and/or positive sentiment are ever presented to the user. Thus, all negative content can be filtered out, and never shown to the user, such that the user only reviews positive content in his or her search results and/or content feeds.”); receiving, from the machine-learning model, an adapted content item based on the selected content item and the selected filter (i.e. determining content adapted according to user-selected preferences/filters) (Carlisle: ¶ [0348] “In this case, any search result with a negative sentiment score (e.g., having a score of less than 0.0) may be filtered out, whereas any search result with a positive sentiment score (e.g., having a score greater than 0.0 or some other positive threshold), and optionally a neutral sentiment score (e.g., having a score of 0.0 or within a range of 0.0 to some other positive threshold), may survive the filter. 
As an example, an external system 140 (e.g., Google Cloud Natural Language™), may be used for determining the sentiment scores of search results. In this case, the artificial intelligence may utilize an API of the external system 140 to pass the content of a search result to the external system 140, and receive the sentiment score for the search result from the external system 140. In any case, the output of step 535 and the artificial intelligence is a set of filtered, best-fit, user-biased search results.”); and transmitting, to a computing device associated with the user, instructions to cause presentation of a user interface (UI) including the adapted content item based on the selected content item and the selected filter (i.e. transmitting the adapted content to the user based on the user-selected preference/filters) (Carlisle: ¶ [0349] “In step 540, the application provides the filtered, best-fit, user-biased search results to the user, using the graphical user interface described herein. For example, these search results may be presented to the user in a multi-screen view 328 and/or multi-modal view 336, with the top-most search result provided in the initial active module screen, and other search results provided in inactive module screens that logically extend outward from the initial active module screen, with higher-ranked search results logically and navigably closer to the initial active module screen than lower-ranked search results.”). With respect to Claim 16: All limitations as recited have been analyzed and rejected with respect to claim 7. Claim 16 does not recite any new limitations beyond claim 7. Therefore, it is rejected under the same rationale. 
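The sentiment-based filtering for which Carlisle is cited (¶¶ [0347]-[0348]) amounts to a simple score threshold: results scoring below 0.0 are dropped, and neutral or positive results survive. A minimal illustrative sketch follows; the function name, the dictionary shape, and the default threshold are hypothetical and not drawn from the reference.

```python
def filter_by_sentiment(search_results, threshold=0.0):
    # Keep only results whose sentiment score is at or above the threshold,
    # mirroring the described filter in which negative-sentiment results
    # (score < 0.0) are eliminated and neutral/positive ones survive.
    return [result for result in search_results
            if result["sentiment"] >= threshold]
```

With the default threshold of 0.0, a result scoring exactly 0.0 (neutral) survives the filter, matching the optional neutral-score behavior described in ¶ [0348].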
With respect to Claim 8: Carlisle teaches: The method of claim 7, further comprising: obtaining a set of user preferences for the user, wherein the set of user preferences are one or a combination of cuisines, diets, or attributes for the user, wherein the set of filters are generated from the set of user preferences (i.e. receiving user preferences, wherein the preferences include user attributes such as recipe, or preferred food categories such as cocktails) (Carlisle: ¶ [0075] “Thus, in an embodiment, the set of links 304 included in a given home screen 302 for a given user may change over time (e.g., in real time) as the artificial intelligence evolves based on the user's biases. As used herein, the term "biases" refers to a user's preferences, interests, activities, interactions, transactions, and/or the like, as determined, for example, from the user's data model, described elsewhere herein.” Furthermore, as cited in ¶ [0102] “Preferred Sources: comprises search results from sources that the user and/or other users prefer (e.g., sources most frequently and/or recently used by the user). 
The user may specify (e.g., via a profile or settings screen of the graphical user interface) his or her preferred sources (e.g., app modules, websites, brands, retailers, providers, and/or the like).” Furthermore, as cited in ¶ [0344] “Using the example above, if the search terms include the term "screwdriver," and this term was determined in step 515 to refer to the cocktail, the search context may include the user's current or future location (e.g., relevant to where the user may obtain a screwdriver cocktail), the user's favorite restaurants or bars at which a screwdriver cocktail may be purchased and within a vicinity of the user's location, people within the user's social network within a vicinity of the user's location (e.g., relevant to possible drinking companions), taxi or ride-sharing services that service the user's vicinity (e.g., including a peer-to-peer ride-sharing service from another user that may be booked via process 740 described with respect to FIGS. 7B-7C), hangover cures (e.g., pharmacies or locations at which Advil™ can be purchased, food recipes, gym locations, etc.), screwdriver cocktail recipes, and/or the like.”). With respect to Claim 9: Carlisle teaches: The method of claim 7, further comprising: storing the set of user preferences in a user preference table associated with the user, wherein the user preference table includes a description of a reason for the user preferences for the user (i.e. user preferences are stored via data model, wherein the preferences include a reason for the user's preferred source) (Carlisle: ¶ [0075] “Thus, in an embodiment, the set of links 304 included in a given home screen 302 for a given user may change over time (e.g., in real time) as the artificial intelligence evolves based on the user's biases. 
As used herein, the term "biases" refers to a user's preferences, interests, activities, interactions, transactions, and/or the like, as determined, for example, from the user's data model, described elsewhere herein.” Furthermore, as cited in ¶ [0102] “Preferred Sources: comprises search results from sources that the user and/or other users prefer (e.g., sources most frequently and/or recently used by the user). The user may specify (e.g., via a profile or settings screen of the graphical user interface) his or her preferred sources (e.g., app modules, websites, brands, retailers, providers, and/or the like).”). Claim(s) 2, 5, 6, 11, 14, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Carlisle and Shahrasbi in view of U.S. Publication 2024/0202795 to Kharbanda. With respect to Claim 2: Carlisle teaches: The method of claim 1, wherein the machine-learning model is trained by: constructing a training example including a training prompt including user data for a second user and at least one carousel theme for the second user (i.e. training prompt includes user data for a different user, wherein the content item or carousel theme is different for the second user) (Carlisle: ¶ [0345] “In step 525, the artificial intelligence of the application performs a simultaneous or contemporaneous search across all sources available to the application, using the user-biased search context determined in step 520. The application may train a predictive model for each user, using a user-biased training set, such that the search results for a particular user will be different than the search results for a different user. To generate the predictive model, the application may utilize feature selection to automatically identify and select features (i.e., data) that are most relevant to the user-biased search results. 
The application may then perform a gradient descent to solve a linear regression on the selected features, to generate an algorithm representing the predictive model for the particular user. This predictive model represents the artificial intelligence used for identifying user-biased search results for the user. The predictive model may be continually trained, as the user's biases change (e.g., as the user continues to interact with the application), so that it evolves with the user. The output of step 525 is a set of user-biased search results, produced by the artificial intelligence (e.g., predictive model for the particular user), from all available sources.”); and applying parameters of the machine-learning model to the training prompt to generate estimated outputs (i.e. applying feature selection to machine learning model to generate predictive outputs) (Carlisle: ¶ [0345] “In step 525, the artificial intelligence of the application performs a simultaneous or contemporaneous search across all sources available to the application, using the user-biased search context determined in step 520. The application may train a predictive model for each user, using a user-biased training set, such that the search results for a particular user will be different than the search results for a different user. To generate the predictive model, the application may utilize feature selection to automatically identify and select features (i.e., data) that are most relevant to the user-biased search results. The application may then perform a gradient descent to solve a linear regression on the selected features, to generate an algorithm representing the predictive model for the particular user. This predictive model represents the artificial intelligence used for identifying user-biased search results for the user. 
The predictive model may be continually trained, as the user's biases change (e.g., as the user continues to interact with the application), so that it evolves with the user. The output of step 525 is a set of user-biased search results, produced by the artificial intelligence (e.g., predictive model for the particular user), from all available sources.”). Carlisle and Shahrasbi do not explicitly disclose computing a loss function indicating a difference between the estimated outputs and the at least one carousel theme; and backpropagating one or more terms from the loss function to update the parameters. However, Kharbanda further discloses: computing a loss function indicating a difference between the estimated outputs and the at least one carousel theme (i.e. computing loss function including likelihood loss indicating difference between estimated or likelihood outputs) (Kharbanda: ¶ [0197] “The training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors. For example, a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.”); and backpropagating one or more terms from the loss function to update the parameters (i.e. 
updating parameters of the model to backpropagate terms from the loss function) (Kharbanda: ¶¶ [0197] [0198] “The training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors. For example, a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations…In some implementations, performing backwards propagation of errors can include performing truncated backpropagation through time. The model trainer 160 can perform a number of generalization techniques (e.g., weight decays, dropouts, etc.) to improve the generalization capability of the models being trained.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Kharbanda’s computing a loss function indicating a difference between the estimated outputs and the at least one carousel theme; and backpropagating one or more terms from the loss function to update the parameters to Carlisle’s constructing a training example including a training prompt including user data for a second user and at least one carousel theme for the second user; and applying parameters of the machine-learning model to the training prompt to generate estimated outputs. 
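The training loop for which Kharbanda is cited (¶ [0197]) — computing a loss, taking its gradient, and updating parameters by gradient descent — can be illustrated with a toy linear model and a mean-squared-error loss. This is a hypothetical sketch for orientation only; the model, learning rate, and function names are assumptions, not material from any cited reference.

```python
def train_step(params, inputs, targets, lr=0.01):
    # One gradient-descent iteration on a mean-squared-error loss for a
    # one-feature linear model y = w*x + b, illustrating "backpropagating
    # terms from the loss function to update the parameters."
    w, b = params
    preds = [w * x + b for x in inputs]
    errors = [p - t for p, t in zip(preds, targets)]
    loss = sum(e * e for e in errors) / len(errors)
    # Analytic gradients of the MSE loss with respect to w and b.
    grad_w = 2.0 * sum(e * x for e, x in zip(errors, inputs)) / len(errors)
    grad_b = 2.0 * sum(errors) / len(errors)
    # Update each parameter against its gradient.
    return (w - lr * grad_w, b - lr * grad_b), loss
```

Iterating this step shrinks the loss, which is the behavior the cited passage describes as updating parameters "over a number of training iterations."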
One of ordinary skill in the art would have been motivated to do so in order to “provide a more detailed context of what a user is requesting during the search, which can allow for a more tailored search than text alone.” (Kharbanda: ¶ [0043]). With respect to Claim 11: All limitations as recited have been analyzed and rejected with respect to claim 2. Claim 11 does not recite any new limitations beyond claim 2. Therefore, it is rejected under the same rationale. With respect to Claim 5: Carlisle teaches: The method of claim 1, further comprising: obtaining feedback from the user after presentation of the content carousels, the feedback indicating whether the user interacted with the set of content items (i.e. receiving feedback data whether user had a positive/negative response to content item) (Carlisle: ¶ [0338] “In an embodiment, the artificial intelligence may be trained for a particular user, at least in part, by that user's responses to prompts. For example, after the artificial intelligence injects what it perceives to be user biases into a particular function (e.g., search, advertising, etc.), the application may prompt the user (e.g., via a pop-up overlay in the graphical user interface, via feedback inputs associated with each search result, advertisement, etc.) to indicate whether or not the functional result (e.g., search results, advertisement, etc.) of the user-biased function was useful (e.g., via one or more inputs within the pop-up overlay). If the user responds by indicating that the functional result was not useful, the application may adjust its artificial-intelligence algorithm for the user to eliminate a same or similar functional result in the future. On the other hand, if the user responds by indicating that the functional result was useful, the application may not adjust its artificial-intelligence algorithm for the user or may adjust its artificial-intelligence algorithm to reinforce a same or similar functional result in the future. 
The application may prompt the user for such feedback for as long as the user utilizes the application or for a set period of time. In addition, the application may only prompt the user for such feedback when it is likely to improve the artificial intelligence for the user. Thus, over time, the prompts may become less frequent, as the artificial intelligence becomes more and more synchronized with the user's biases, such that prompts become less and less likely to further improve the artificial intelligence.”); constructing a training example including the first prompt and the one or more carousel themes (i.e. training machine learning model using prompts of feedback data associated with content item/carousel theme) (Carlisle: ¶ [0338] “In an embodiment, the artificial intelligence may be trained for a particular user, at least in part, by that user's responses to prompts. For example, after the artificial intelligence injects what it perceives to be user biases into a particular function (e.g., search, advertising, etc.), the application may prompt the user (e.g., via a pop-up overlay in the graphical user interface, via feedback inputs associated with each search result, advertisement, etc.) to indicate whether or not the functional result (e.g., search results, advertisement, etc.) of the user-biased function was useful (e.g., via one or more inputs within the pop-up overlay). If the user responds by indicating that the functional result was not useful, the application may adjust its artificial-intelligence algorithm for the user to eliminate a same or similar functional result in the future. On the other hand, if the user responds by indicating that the functional result was useful, the application may not adjust its artificial-intelligence algorithm for the user or may adjust its artificial-intelligence algorithm to reinforce a same or similar functional result in the future. 
The application may prompt the user for such feedback for as long as the user utilizes the application or for a set period of time. In addition, the application may only prompt the user for such feedback when it is likely to improve the artificial intelligence for the user. Thus, over time, the prompts may become less frequent, as the artificial intelligence becomes more and more synchronized with the user's biases, such that prompts become less and less likely to further improve the artificial intelligence.”); and applying parameters of the machine-learning model to the first prompt to generate estimated outputs (i.e. applying feature selection to machine learning model to generate predictive outputs) (Carlisle: ¶ [0345] “In step 525, the artificial intelligence of the application performs a simultaneous or contemporaneous search across all sources available to the application, using the user-biased search context determined in step 520. The application may train a predictive model for each user, using a user-biased training set, such that the search results for a particular user will be different than the search results for a different user. To generate the predictive model, the application may utilize feature selection to automatically identify and select features (i.e., data) that are most relevant to the user-biased search results. The application may then perform a gradient descent to solve a linear regression on the selected features, to generate an algorithm representing the predictive model for the particular user. This predictive model represents the artificial intelligence used for identifying user-biased search results for the user. The predictive model may be continually trained, as the user's biases change (e.g., as the user continues to interact with the application), so that it evolves with the user. 
The output of step 525 is a set of user-biased search results, produced by the artificial intelligence (e.g., predictive model for the particular user), from all available sources.”). Carlisle and Shahrasbi do not explicitly disclose computing a loss function indicating a difference between the estimated outputs and the one or more carousel themes; and updating the parameters of the machine-learning model to backpropagate terms from the loss function. However, Kharbanda further discloses: computing a loss function indicating a difference between the estimated outputs and the one or more carousel themes (i.e. computing loss function including likelihood loss indicating difference between estimated or likelihood outputs) (Kharbanda: ¶ [0197] “The training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors. For example, a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.”); and updating the parameters of the machine-learning model to backpropagate terms from the loss function (i.e. 
updating parameters of the model to backpropagate terms from the loss function) (Kharbanda: ¶¶ [0197] [0198] “The training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors. For example, a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations…In some implementations, performing backwards propagation of errors can include performing truncated backpropagation through time. The model trainer 160 can perform a number of generalization techniques (e.g., weight decays, dropouts, etc.) to improve the generalization capability of the models being trained.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Kharbanda’s computing a loss function indicating a difference between the estimated outputs and the one or more carousel themes; and updating the parameters of the machine-learning model to backpropagate terms from the loss function to Carlisle’s obtaining feedback from the user after presentation of the content carousels, the feedback indicating whether the user interacted with the set of content items; constructing a training example including the first prompt and the one or more carousel themes; and applying parameters of the machine-learning model to the first prompt to generate estimated outputs. 
One of ordinary skill in the art would have been motivated to do so in order to “provide a more detailed context of what a user is requesting during the search, which can allow for a more tailored search than text alone.” (Kharbanda: ¶ [0043]). With respect to Claims 14 and 20: All limitations as recited have been analyzed and rejected with respect to claim 5. Claims 14 and 20 do not teach or define any new limitations beyond claim 5. Therefore they are rejected under the same rationale. With respect to Claim 6: Carlisle teaches: The method of claim 1, further comprising: obtaining feedback from the user after presentation of the content carousels, the feedback indicating whether the user interacted with the set of content items (i.e. receiving feedback data whether user had a positive/negative response to content item) (Carlisle: ¶ [0338] “In an embodiment, the artificial intelligence may be trained for a particular user, at least in part, by that user's responses to prompts. For example, after the artificial intelligence injects what it perceives to be user biases into a particular function (e.g., search, advertising, etc.), the application may prompt the user (e.g., via a pop-up overlay in the graphical user interface, via feedback inputs associated with each search result, advertisement, etc.) to indicate whether or not the functional result (e.g., search results, advertisement, etc.) of the user-biased function was useful (e.g., via one or more inputs within the pop-up overlay). If the user responds by indicating that the functional result was not useful, the application may adjust its artificial-intelligence algorithm for the user to eliminate a same or similar functional result in the future. 
On the other hand, if the user responds by indicating that the functional result was useful, the application may not adjust its artificial-intelligence algorithm for the user or may adjust its artificial-intelligence algorithm to reinforce a same or similar functional result in the future. The application may prompt the user for such feedback for as long as the user utilizes the application or for a set period of time. In addition, the application may only prompt the user for such feedback when it is likely to improve the artificial intelligence for the user. Thus, over time, the prompts may become less frequent, as the artificial intelligence becomes more and more synchronized with the user's biases, such that prompts become less and less likely to further improve the artificial intelligence.”); constructing a training example including the one or more second prompts and the set of content names for at least one carousel theme (i.e. training machine learning model using prompts of feedback data associated with content item/carousel theme) (Carlisle: ¶ [0338] “In an embodiment, the artificial intelligence may be trained for a particular user, at least in part, by that user's responses to prompts. For example, after the artificial intelligence injects what it perceives to be user biases into a particular function (e.g., search, advertising, etc.), the application may prompt the user (e.g., via a pop-up overlay in the graphical user interface, via feedback inputs associated with each search result, advertisement, etc.) to indicate whether or not the functional result (e.g., search results, advertisement, etc.) of the user-biased function was useful (e.g., via one or more inputs within the pop-up overlay). If the user responds by indicating that the functional result was not useful, the application may adjust its artificial-intelligence algorithm for the user to eliminate a same or similar functional result in the future. 
On the other hand, if the user responds by indicating that the functional result was useful, the application may not adjust its artificial-intelligence algorithm for the user or may adjust its artificial-intelligence algorithm to reinforce a same or similar functional result in the future. The application may prompt the user for such feedback for as long as the user utilizes the application or for a set period of time. In addition, the application may only prompt the user for such feedback when it is likely to improve the artificial intelligence for the user. Thus, over time, the prompts may become less frequent, as the artificial intelligence becomes more and more synchronized with the user's biases, such that prompts become less and less likely to further improve the artificial intelligence.”); and applying parameters of the machine-learning model to the one or more second prompts to generate estimated outputs (i.e. applying feature selection to machine learning model to generate predictive outputs) (Carlisle: ¶ [0345] “In step 525, the artificial intelligence of the application performs a simultaneous or contemporaneous search across all sources available to the application, using the user-biased search context determined in step 520. The application may train a predictive model for each user, using a user-biased training set, such that the search results for a particular user will be different than the search results for a different user. To generate the predictive model, the application may utilize feature selection to automatically identify and select features (i.e., data) that are most relevant to the user-biased search results. The application may then perform a gradient descent to solve a linear regression on the selected features, to generate an algorithm representing the predictive model for the particular user. This predictive model represents the artificial intelligence used for identifying user-biased search results for the user. 
The predictive model may be continually trained, as the user's biases change (e.g., as the user continues to interact with the application), so that it evolves with the user. The output of step 525 is a set of user-biased search results, produced by the artificial intelligence (e.g., predictive model for the particular user), from all available sources.”). Carlisle and Shahrasbi do not explicitly disclose computing a loss function indicating a difference between the estimated outputs and the set of content names for the at least one carousel theme; and updating the parameters of the machine-learning model to backpropagate terms from the loss function. However, Kharbanda further discloses: computing a loss function indicating a difference between the estimated outputs and the set of content names for the at least one carousel theme (i.e. computing loss function including likelihood loss indicating difference between estimated or likelihood outputs) (Kharbanda: ¶ [0197] “The training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors. For example, a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.”); and updating the parameters of the machine-learning model to backpropagate terms from the loss function (i.e. 
updating parameters of the model to backpropagate terms from the loss function) (Kharbanda: ¶¶ [0197] [0198] “The training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors. For example, a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations…In some implementations, performing backwards propagation of errors can include performing truncated backpropagation through time. The model trainer 160 can perform a number of generalization techniques (e.g., weight decays, dropouts, etc.) to improve the generalization capability of the models being trained.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Kharbanda’s computing a loss function indicating a difference between the estimated outputs and the set of content names for the at least one carousel theme; and updating the parameters of the machine-learning model to backpropagate terms from the loss function to Carlisle’s obtaining feedback from the user after presentation of the content carousels, the feedback indicating whether the user interacted with the set of content items; constructing a training example including the one or more second prompts and the set of content names for at least one carousel theme; and applying parameters of the machine-learning model to the one or more second prompts to generate estimated outputs. 
One of ordinary skill in the art would have been motivated to do so in order to “provide a more detailed context of what a user is requesting during the search, which can allow for a more tailored search than text alone.” (Kharbanda: ¶ [0043]). With respect to Claim 15: All limitations as recited have been analyzed and rejected with respect to claim 6. Claim 15 does not teach or define any new limitations beyond claim 6. Therefore it is rejected under the same rationale. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are cited to further show the state of the art: U.S. Publication 2024/0020345 to Ulanov for disclosing a system that uses semantic analysis of text associated with content items to recommend content for display to a user. A subset of representative words from a content description is determined and a content embedding that models the content is generated using a combination of word embeddings associated with each of the representative words. User embeddings are generated using a combination of content embeddings for content that a user has had particular interactions with in a set period of time. Separate user embeddings may be generated to represent user interactions with different categories of content (e.g., travel, photography, apparel, comedy, etc.). The system uses the content embeddings and user embeddings as input to predictive functions which determine a candidate content item that a user is likely to interact with if the candidate content is displayed to the user. U.S. Publication 2025/0285139 to Morgan for disclosing systems and methods that provide a system for dynamically serving personalized content for presentation across computing devices of different guests in an online retail environment. 
A server system can: retrieve, from a data repository, a subset of predefined content stories having a predetermined activation status and guest purchase data for at least a portion of guests in the online retail environment, the guest purchase data indicating products each guest purchased, determine product embeddings for each story in the subset, each embedding indicating a product linked to the story, determine, based on applying a guest story similarity (GSS) model to the guest purchase data for each guest, a similarity score between the guest purchase data and the embeddings that indicates a likelihood the guest would purchase products embedded in the story, and serve a content story having a similarity score that satisfies one or more guest personalization criteria. U.S. Publication 2021/0365510 to Morin for disclosing an online system that receives a content item including an image from a content-providing user and/or receives an interaction with the content item from a viewing user. The online system accesses a machine-learning model that is trained based on a set of images of items associated with an entity and attributes of each image. The online system applies the model to predict a probability that the content item includes an image of an item associated with the entity based on attributes of the image included in the content item. Based on the predicted probability, the online system updates a profile of the user (i.e., the content-providing user and/or the viewing user) to include an affinity for the item. Upon determining an opportunity to present content to the user, the online system selects content for presentation to the user based on the profile and sends the content for presentation to the user. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Azam Ansari, whose telephone number is (571) 272-7047. The examiner can normally be reached from Monday to Friday between 8 AM and 4:30 PM. 
If any attempt to reach the examiner by telephone is unsuccessful, the examiner's supervisor, Waseem Ashraf, can be reached at (571) 270-3948. Another resource that is available to applicants is the Patent Application Information Retrieval (PAIR) system. Information regarding the status of an application can be obtained from the PAIR system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pairdirect.uspto.gov. Should you have questions on access to the Private PAIR system, please feel free to contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Applicants are invited to contact the Office to schedule either an in-person or a telephonic interview to discuss and resolve the issues set forth in this Office Action. Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner. Sincerely, /AZAM A ANSARI/ Primary Examiner, Art Unit 3621 December 30, 2025
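The training procedure the Office Action draws from the cited references (Kharbanda ¶ [0197]: compute a loss indicating the difference between estimated outputs and targets, then backpropagate its gradient to update parameters; Carlisle ¶ [0345]: gradient descent solving a linear regression over selected features) can be sketched as follows. This is an illustrative sketch only, not code from any cited reference; all names and values are hypothetical.

```python
# Minimal sketch: fit a one-feature linear model by stochastic gradient
# descent on a squared-error loss, mirroring the loss-computation and
# parameter-update steps described in the quoted passages.

def train_linear_model(examples, lr=0.1, epochs=200):
    """Fit y = w*x + b by SGD; examples is a list of (x, y) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            estimate = w * x + b   # apply current parameters to the input
            error = estimate - y   # difference between estimate and target
            # Gradient of the loss 0.5 * error**2 with respect to each
            # parameter, propagated back to update that parameter.
            w -= lr * error * x
            b -= lr * error
    return w, b

# Noiseless toy data drawn from y = 2x + 1; training should recover w and b.
data = [(x, 2.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
w, b = train_linear_model(data)
```

A production model would use the richer losses the reference lists (cross entropy, likelihood, hinge), but the update rule has the same shape: parameter minus learning rate times loss gradient.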

Prosecution Timeline

Dec 20, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §103
Apr 08, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591892
SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR EARLY DETECTION OF A MERCHANT DATA BREACH THROUGH MACHINE-LEARNING ANALYSIS
2y 5m to grant Granted Mar 31, 2026
Patent 12499471
AUTOMATICALLY GENERATING A RETAILER-SPECIFIC BRAND PAGE BASED ON A MACHINE LEARNING PREDICTION OF ITEM AVAILABILITY
2y 5m to grant Granted Dec 16, 2025
Patent 12469042
SYSTEM FOR GENERATING A NON-FUNGIBLE TOKEN INCLUDING MUTABLE AND IMMUTABLE ATTRIBUTES AND RELATED METHODS
2y 5m to grant Granted Nov 11, 2025
Patent 12423918
AUGMENTED REALITY IN-APPLICATION ADVERTISEMENTS
2y 5m to grant Granted Sep 23, 2025
Patent 12417468
USER ENGAGEMENT MODELING FOR ENGAGEMENT OPTIMIZATION
2y 5m to grant Granted Sep 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
48%
Grant Probability
98%
With Interview (+49.7%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
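The page does not publish the formula behind its headline figures, but they are consistent with a base rate taken from the examiner's career totals plus an additive interview lift. A hypothetical reconstruction of that arithmetic, using only numbers shown on the page:

```python
# Hypothetical reconstruction of the dashboard's arithmetic; the tool's
# actual model is not disclosed. All inputs come from the page itself.
granted, resolved = 162, 338            # career: 162 granted / 338 resolved
base_rate = granted / resolved          # displayed as "48% Grant Probability"
interview_lift = 0.497                  # displayed as "+49.7% Interview Lift"
with_interview = min(base_rate + interview_lift, 1.0)  # displayed as "98%"
```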
