Prosecution Insights
Last updated: April 19, 2026
Application No. 18/501,294

CONTEXTUAL RECOMMENDATIONS FOR DIGITAL CONTENT

Status: Non-Final OA (§103)
Filed: Nov 03, 2023
Examiner: ULRICH, NICHOLAS S
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: Meta Platforms Technologies, LLC
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 69% (425 granted / 614 resolved), +14.2% vs TC avg — above average
Interview Lift: +7.6% across resolved cases with interview (a moderate lift of roughly +8%)
Typical Timeline: 3y 4m average prosecution; 31 applications currently pending
Career History: 645 total applications across all art units
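As a sanity check, the headline figures above are mutually consistent. A minimal sketch of the arithmetic (the variable names and rounding behavior are our assumptions, not the analytics provider's):

    # Hedged check of how the dashboard figures relate to one another.
    granted, resolved = 425, 614            # career totals shown above
    allow_rate = granted / resolved         # 0.692 -> displayed as 69%

    tc_delta = 0.142                        # "+14.2% vs TC avg"
    tc_avg = allow_rate - tc_delta          # implied Tech Center average, ~55%

    interview_lift = 0.076                  # "+7.6% Interview Lift"
    with_interview = allow_rate + interview_lift  # ~0.768 -> displayed as 77%

    print(f"Career allow rate:  {allow_rate:.1%}")
    print(f"Implied TC average: {tc_avg:.1%}")
    print(f"With interview:     {with_interview:.1%}")

425/614 rounds to the displayed 69%, and adding the +7.6-point interview lift gives the 77% "with interview" figure shown in the projections below.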

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§112: 19.3% (-20.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 614 resolved cases.
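The labels suggest each figure is this examiner's allowance rate among resolved cases that received the given rejection type. A hypothetical sketch of how such per-statute rates could be tallied; the record schema and sample data are invented for illustration and are not the analytics provider's actual data model:

    # Tally allowance rate per rejection statute from resolved-case records.
    from collections import Counter

    cases = [  # (statutes cited against the case, was it ultimately granted?)
        ({"103"}, True), ({"101", "103"}, False), ({"102"}, False),
        ({"103"}, True), ({"112"}, False), ({"103", "112"}, True),
    ]

    seen, allowed = Counter(), Counter()
    for statutes, granted in cases:
        for s in statutes:
            seen[s] += 1
            allowed[s] += granted  # True counts as 1

    for s in sorted(seen):
        print(f"§{s}: {allowed[s] / seen[s]:.1%} allowed ({seen[s]} cases)")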

Office Action

§103
DETAILED ACTION

1. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

3. The IDSs filed 7/24/2024 and 8/18/2025 have been considered except for the lined-through references. The information disclosure statement filed 8/18/2025 fails to comply with 37 CFR 1.98(a)(3)(i) because it does not include a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, of each reference listed that is not in the English language. It has been placed in the application file, but the information referred to therein has not been considered. NPL references #42 and #43 are in a foreign language, and there is no concise explanation of the relevance as presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 2, and 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Thompson et al. (US 11126320 B1) and further in view of Chaturvedi (US 11126845 B1).

In regard to claim 1, Thompson discloses a computer-implemented method, performed by at least one processor, the method comprising: receiving an input, associated with a virtual environment, at a client device (Column 11 lines 32-40: search or text input element that receives text input); generating a recommendation for one or more digital content items aligned with the input (Column 11 lines 32-40 and Column 46 lines 1-8: the input text is used to search a catalog of objects, where objects are identified based on the text input); and displaying, at the client device, the recommendation of the one or more digital content items (Column 46 lines 1-8: the identified objects are displayed in a user interface).

While Thompson teaches generating a recommendation for one or more digital content items aligned with the input, and further teaches filtering the recommended digital content items (Column 2 lines 42-46 and Column 17 lines 44-58) and that recommended digital objects can be associated with one or more objects currently presented within the virtual environment (Column 46 lines 9-10), Thompson fails to show generating image embeddings based on an image representation of the virtual environment; analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings; determining a context of the virtual environment based on the one or more digital content; and generating a recommendation for one or more digital content items aligned with the input based on the context of the virtual environment, as recited in the claims.
Chaturvedi teaches a virtual environment (Chaturvedi Column 26 lines 59-61) similar to that of Thompson. In addition, Chaturvedi further teaches generating image embeddings based on an image representation of a virtual environment (Column 9 lines 1-11, Column 12 line 65 - Column 13 line 2, and Column 26 line 59 - Column 27 line 7: objects in the image are identified by coordinates); analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings (Column 9 lines 1-11, Column 13 lines 3-21, and Column 26 line 59 - Column 27 line 7: objects corresponding to the coordinates are identified, such as a couch); determining a context of the virtual environment based on the one or more digital content (Column 5 lines 4-21, Column 9 lines 1-11, Column 13 lines 3-21, and Column 26 line 59 - Column 27 line 7: the identified objects are used to determine a room type); and generating a recommendation for one or more digital content items aligned with the context of the virtual environment (Column 2 lines 49-57, Column 3 lines 27-55, and Column 26 line 59 - Column 27 line 7: items that are suitable for the determined room type are provided).

It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson and Chaturvedi before them before the effective filing date of the claimed invention, to modify the generating of a recommendation for one or more digital content items aligned with the input, the filtering of the recommended digital content items, and the association of recommended digital objects with one or more objects currently presented within the virtual environment taught by Thompson to include the generating of image embeddings based on an image representation of the virtual environment, the analyzing of the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings, the determining of a context of the virtual environment based on the one or more digital content, and the generating of a recommendation for one or more digital content items aligned with the context of the virtual environment of Chaturvedi, in order to obtain generating image embeddings based on an image representation of the virtual environment; analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings; determining a context of the virtual environment based on the one or more digital content; and generating a recommendation for one or more digital content items aligned with the input based on the context of the virtual environment. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

In regard to claim 2, Thompson discloses wherein the input is a text query describing a desired digital content item to be added to the virtual environment (Column 11 lines 32-40: search or text input element that receives text input).

In regard to claim 4, Chaturvedi further discloses identifying contextual information of the one or more digital content included in a virtual environment, wherein the context of the virtual environment is determined based on the contextual information (Column 3 lines 26-40, Column 5 lines 4-21, and Column 26 line 59 - Column 27 line 7: the determined types of the objects are used to determine the room type).
Accordingly, the combination further teaches identifying contextual information of the one or more digital content included in the virtual environment, wherein the context of the virtual environment is determined based on the contextual information. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

In regard to claim 5, Chaturvedi further discloses wherein determining a context of a virtual environment includes: individually evaluating each of the one or more digital content in the virtual environment for context; and collectively evaluating contextual information of the virtual environment based on individual evaluations of the one or more digital content to infer the context of the virtual environment (Column 3 lines 26-40, Column 5 lines 4-21, and Column 26 line 59 - Column 27 line 7: the determined type of each object is collectively used to determine the room type). Accordingly, the combination further teaches wherein determining a context of the virtual environment includes: individually evaluating each of the one or more digital content in the virtual environment for context; and collectively evaluating contextual information of the virtual environment based on individual evaluations of the one or more digital content to infer the context of the virtual environment. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

In regard to claim 6, Chaturvedi further discloses categorizing each of the one or more digital content based on predefined image classes, descriptions, and text-image pairings (Column 6 lines 35-54, Column 9 lines 7-13, Column 13 lines 14-21, Column 13 lines 62-66, and Column 26 lines 59-62: a database of items is used to train a model to identify the types of objects and room types, including stored relationship tags for each type of item). Accordingly, the combination further teaches categorizing each of the one or more digital content based on predefined image classes, descriptions, and text-image pairings. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

In regard to claim 7, Thompson discloses wherein the recommendation includes a content library recommendation (Column 11 lines 36-40: catalog of objects).

5. Claims 3, 10-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Thompson et al. (US 11126320 B1) and further in view of Chaturvedi (US 11126845 B1) and Petill et al. (US 2021/0012113 A1).

In regard to claim 3, the combination of Thompson and Chaturvedi teaches determining a context of the virtual environment based on the one or more digital content (the rejection of claim 1 is incorporated herein in its entirety; this limitation repeats a limitation already recited in claim 1 and is rejected for similar reasons).
While Thompson and Chaturvedi teach analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings, and further teach relationship tags for each type of item (Column 13 lines 62-66), they fail to explicitly show captioning each of the one or more digital content based on the analyzing, as recited in the claims. Petill teaches analysis similar to that of Chaturvedi. In addition, Petill further teaches captioning each of one or more content based on analyzing (Paragraph 0053 and Paragraph 0054: a semantic tag is associated with an object based on its type as identified through the analyzing).

It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, and Petill before them before the effective filing date of the claimed invention, to modify the analyzing of the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings, and the relationship tags for each type of item, taught by Thompson and Chaturvedi to include the captioning of each of one or more content based on analyzing of Petill, in order to obtain captioning each of the one or more digital content based on the analyzing. It would have been advantageous for one to utilize such a combination as storing a reference in a database so that the object can be re-found and the database can be continuously updated in real time, as suggested by Petill (Paragraph 0085 lines 1-4 and Paragraph 0087).

In regard to claim 10, Thompson discloses a system comprising: one or more processors; and a memory storing instructions which, when executed by the one or more processors, cause the system to (Fig. 12): receive an input, associated with a virtual environment, at a client device (Column 11 lines 32-40: search or text input element that receives text input); generate a recommendation for one or more digital content items aligned with the input (Column 11 lines 32-40 and Column 46 lines 1-8: the input text is used to search a catalog of objects, where objects are identified based on the text input); and display, at the client device, the recommendation of the one or more digital content items (Column 46 lines 1-8: the identified objects are displayed in a user interface).

While Thompson teaches generating a recommendation for one or more digital content items aligned with the input, and further teaches filtering the recommended digital content items (Column 2 lines 42-46 and Column 17 lines 44-58) and that recommended digital objects can be associated with one or more objects currently presented within the virtual environment (Column 46 lines 9-10), Thompson fails to show generating image embeddings based on an image representation of the virtual environment; analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings; generating captions associated with the one or more digital content included in the virtual environment; determining a context of the virtual environment based on the one or more digital content and the captions; and generating a recommendation for one or more digital content items aligned with the input based on the context of the virtual environment, as recited in the claims. Chaturvedi teaches a virtual environment (Chaturvedi Column 26 lines 59-61) similar to that of Thompson.
In addition, Chaturvedi further teaches generating image embeddings based on an image representation of a virtual environment (Column 9 lines 1-11, Column 12 line 65 - Column 13 line 2, and Column 26 line 59 - Column 27 line 7: objects in the image are identified by coordinates); analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings (Column 9 lines 1-11, Column 13 lines 3-21, and Column 26 line 59 - Column 27 line 7: objects corresponding to the coordinates are identified, such as a couch); determining a context of the virtual environment based on the one or more digital content (Column 5 lines 4-21, Column 9 lines 1-11, Column 13 lines 3-21, and Column 26 line 59 - Column 27 line 7: the identified objects are used to determine a room type); and generating a recommendation for one or more digital content items aligned with the context of the virtual environment (Column 2 lines 49-57, Column 3 lines 27-55, and Column 26 line 59 - Column 27 line 7: items that are suitable for the determined room type are provided).

It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson and Chaturvedi before them before the effective filing date of the claimed invention, to modify the recommendation generation, recommendation filtering, and association of recommended digital objects with objects currently presented within the virtual environment taught by Thompson to include the generating of image embeddings based on an image representation of the virtual environment, the analyzing of the virtual environment to identify one or more digital content based on the image embeddings, the determining of a context of the virtual environment based on the one or more digital content, and the generating of a recommendation for one or more digital content items aligned with the context of the virtual environment of Chaturvedi, in order to obtain the claimed generating of image embeddings, analyzing of the virtual environment, determining of a context, and generating of a recommendation aligned with the input based on the context of the virtual environment. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

While Thompson and Chaturvedi teach analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings and determining a context of the virtual environment based on the one or more digital content, and further teach relationship tags for each type of item for identifying (Column 13 lines 62-66), they fail to explicitly show generating captions associated with the one or more digital content included in the virtual environment and determining a context of the virtual environment based on the one or more digital content and the captions, as recited in the claims. Petill teaches analysis similar to that of Chaturvedi.
In addition, Petill further teaches captioning each of one or more content based on analyzing (Paragraph 0053 and Paragraph 0054: a semantic tag is associated with an object based on its type as identified through the analyzing). It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, and Petill before them before the effective filing date of the claimed invention, to modify the analyzing of the virtual environment to identify one or more digital content based on the image embeddings, the determining of a context of the virtual environment based on the one or more digital content, and the relationship tags for each type of item taught by Thompson and Chaturvedi to include the captioning of each of one or more content based on analyzing of Petill, in order to obtain generating captions associated with the one or more digital content included in the virtual environment and determining a context of the virtual environment based on the one or more digital content and the captions. It would have been advantageous for one to utilize such a combination as storing a reference in a database so that the object can be re-found and the database can be continuously updated in real time, as suggested by Petill (Paragraph 0085 lines 1-4 and Paragraph 0087).

In regard to claim 11, Thompson discloses wherein the input is a text query describing a desired digital content item to be added to the virtual environment (Column 11 lines 32-40: search or text input element that receives text input).

In regard to claim 12, Chaturvedi further discloses identifying contextual information of the one or more digital content included in a virtual environment, wherein the context of the virtual environment is determined based on the contextual information (Column 3 lines 26-40, Column 5 lines 4-21, and Column 26 line 59 - Column 27 line 7: the determined types of the objects are used to determine the room type). Accordingly, the combination further teaches wherein the one or more processors further execute instructions to identify contextual information of the one or more digital content included in the virtual environment, wherein the context of the virtual environment is determined based on the contextual information. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

In regard to claim 13, Chaturvedi further discloses individually evaluating each of the one or more digital content in the virtual environment for context, and collectively evaluating contextual information of the virtual environment based on individual evaluations of the one or more digital content to infer the context of the virtual environment (Column 3 lines 26-40, Column 5 lines 4-21, and Column 26 line 59 - Column 27 line 7: the determined type of each object is collectively used to determine the room type). Accordingly, the combination further teaches wherein the one or more processors further execute instructions to: individually evaluate each of the one or more digital content in the virtual environment for context; and collectively evaluate contextual information of the virtual environment based on individual evaluations of the one or more digital content to infer the context of the virtual environment.
It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

In regard to claim 14, Chaturvedi further discloses categorizing each of the one or more digital content based on predefined image classes, descriptions, and text-image pairings (Column 6 lines 35-54, Column 9 lines 7-13, Column 13 lines 14-21, Column 13 lines 62-66, and Column 26 lines 59-62: a database of items is used to train a model to identify the types of objects and room types, including stored relationship tags for each type of item). Accordingly, the combination further teaches wherein the one or more processors further execute instructions to categorize each of the one or more digital content based on predefined image classes, descriptions, and text-image pairings. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

In regard to claim 15, Thompson discloses wherein the recommendation includes a content library recommendation (Column 11 lines 36-40: catalog of objects).

In regard to claim 18, Thompson discloses a non-transient computer-readable storage medium having instructions embodied thereon, the instructions being executable by one or more processors to perform a method and cause the one or more processors to (Column 56 lines 22-34): receive an input, associated with a virtual environment, at a client device (Column 11 lines 32-40: search or text input element that receives text input); generate a recommendation for one or more digital content items aligned with the input (Column 11 lines 32-40 and Column 46 lines 1-8: the input text is used to search a catalog of objects, where objects are identified based on the text input); and display, at the client device, the recommendation of the one or more digital content items (Column 46 lines 1-8: the identified objects are displayed in a user interface), wherein at least one digital content item is added to the virtual environment based on a user selection (Column 18 line 55 - Column 19 line 4: an object is selected and added to the virtual environment).

While Thompson teaches generating a recommendation for one or more digital content items aligned with the input, and further teaches filtering the recommended digital content items (Column 2 lines 42-46 and Column 17 lines 44-58) and that recommended digital objects can be associated with one or more objects currently presented within the virtual environment (Column 46 lines 9-10), Thompson fails to show generating image embeddings based on an image representation of the virtual environment; analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings; generating captions associated with the one or more digital content included in the virtual environment; determining a context of the virtual environment based on the one or more digital content and the captions; and generating a recommendation for one or more digital content items aligned with the input based on the context of the virtual environment, as recited in the claims. Chaturvedi teaches a virtual environment (Chaturvedi Column 26 lines 59-61) similar to that of Thompson.
In addition, Chaturvedi further teaches generating image embeddings based on an image representation of a virtual environment (Column 9 lines 1-11, Column 12 line 65 - Column 13 line 2, and Column 26 line 59 - Column 27 line 7: objects in the image are identified by coordinates); analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings (Column 9 lines 1-11, Column 13 lines 3-21, and Column 26 line 59 - Column 27 line 7: objects corresponding to the coordinates are identified, such as a couch); determining a context of the virtual environment based on the one or more digital content (Column 5 lines 4-21, Column 9 lines 1-11, Column 13 lines 3-21, and Column 26 line 59 - Column 27 line 7: the identified objects are used to determine a room type); and generating a recommendation for one or more digital content items aligned with the context of the virtual environment (Column 2 lines 49-57, Column 3 lines 27-55, and Column 26 line 59 - Column 27 line 7: items that are suitable for the determined room type are provided).

It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson and Chaturvedi before them before the effective filing date of the claimed invention, to modify the recommendation generation, recommendation filtering, and association of recommended digital objects with objects currently presented within the virtual environment taught by Thompson to include the generating of image embeddings based on an image representation of the virtual environment, the analyzing of the virtual environment to identify one or more digital content based on the image embeddings, the determining of a context of the virtual environment based on the one or more digital content, and the generating of a recommendation for one or more digital content items aligned with the context of the virtual environment of Chaturvedi, in order to obtain the claimed generating of image embeddings, analyzing of the virtual environment, determining of a context, and generating of a recommendation aligned with the input based on the context of the virtual environment. It would have been advantageous for one to utilize such a combination as providing objects suitable for an intended application, as suggested by Chaturvedi (Column 1 lines 12-18, Column 2 lines 26-31, Column 2 lines 51-43, and Column 26 lines 59-62).

While Thompson and Chaturvedi teach analyzing the virtual environment to identify one or more digital content included in the virtual environment based on the image embeddings and determining a context of the virtual environment based on the one or more digital content, and further teach relationship tags for each type of item for identifying (Column 13 lines 62-66), they fail to explicitly show generating captions associated with the one or more digital content included in the virtual environment and determining a context of the virtual environment based on the one or more digital content and the captions, as recited in the claims. Petill teaches analysis similar to that of Chaturvedi.
In addition, Petill further teaches captioning each of one or more content based on analyzing (Paragraph 0053 and Paragraph 0054: a semantic tag is associated with an object based on its type as identified through the analyzing). It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, and Petill before them before the effective filing date of the claimed invention, to modify the analyzing of the virtual environment to identify one or more digital content based on the image embeddings, the determining of a context of the virtual environment based on the one or more digital content, and the relationship tags for each type of item taught by Thompson and Chaturvedi to include the captioning of each of one or more content based on analyzing of Petill, in order to obtain generating captions associated with the one or more digital content included in the virtual environment and determining a context of the virtual environment based on the one or more digital content and the captions. It would have been advantageous for one to utilize such a combination as storing a reference in a database so that the object can be re-found and the database can be continuously updated in real time, as suggested by Petill (Paragraph 0085 lines 1-4 and Paragraph 0087).

6. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Thompson et al. (US 11126320 B1), Chaturvedi (US 11126845 B1), and further in view of Dascola et al. (US 2019/0228589 A1).

In regard to claim 8, Thompson discloses retrieving two-dimensional images corresponding to the one or more digital content items, and displaying, in a selectable format, the two-dimensional images as the recommendation (Column 13 lines 31-35, Column 13 lines 60-63, Column 46 lines 1-8, and Column 18 lines 55-67: two-dimensional images of the objects are displayed and are selectable). While Thompson teaches two-dimensional images, Thompson fails to explicitly show thumbnails, as recited in the claims. Dascola teaches digital content items similar to those of Thompson. In addition, Dascola further teaches two-dimensional images as thumbnails (Paragraph 0361: a thumbnail as an example of a two-dimensional representation). It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, and Dascola before them before the effective filing date of the claimed invention, to modify the two-dimensional images taught by Thompson to include the two-dimensional images as thumbnails of Dascola, in order to obtain retrieving thumbnails corresponding to the one or more digital content items and displaying, in a selectable format, the thumbnails as the recommendation. It would have been advantageous for one to utilize such a combination as a simple substitution: simply substituting Thompson's two-dimensional images with two-dimensional thumbnails would predictably result in displaying selectable representations corresponding to one or more digital content items.

7. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Thompson et al. (US 11126320 B1), Chaturvedi (US 11126845 B1), and further in view of Ma et al. (US 11113893 B1).
In regard to claim 9, while Thompson and Chaturvedi teach generating a recommendation for one or more digital content items aligned with the input based on the context of the virtual environment and displaying, at the client device, the recommendation of the one or more digital content items, they fail to show ranking the one or more digital content items based on a relevance score associated with the context of the virtual environment, and displaying, in a selectable format, the recommendation of the one or more digital content items in an order based on the ranking, as recited in the claims. Ma teaches searching content items similar to that of Thompson and Chaturvedi. In addition, Ma further teaches ranking one or more digital content items based on a relevance score associated with search criteria, and providing the one or more digital content items in an order based on the ranking (Column 17 lines 7-21: content items matching a search criterion are given a similarity score based on the criterion and sorted based on the similarity score). It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, and Ma before them before the effective filing date of the claimed invention, to modify the recommendation generation and display taught by Thompson and Chaturvedi to include the ranking of one or more digital content items based on a relevance score associated with search criteria and the providing of the one or more digital content items in an order based on the ranking of Ma, in order to obtain ranking the one or more digital content items based on a relevance score associated with the context of the virtual environment and displaying, in a selectable format, the recommendation of the one or more digital content items in an order based on the ranking. It would have been advantageous for one to utilize such a combination as providing the most relevant content to the user, as suggested by Ma (Column 12 lines 25-28).

8. Claims 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Thompson et al. (US 11126320 B1), Chaturvedi (US 11126845 B1), Petill et al. (US 2021/0012113 A1), and further in view of Dascola et al. (US 2019/0228589 A1).

In regard to claim 16, Thompson discloses retrieving two-dimensional images corresponding to the one or more digital content items, and displaying, in a selectable format, the two-dimensional images as the recommendation (Column 13 lines 31-35, Column 13 lines 60-63, Column 46 lines 1-8, and Column 18 lines 55-67: two-dimensional images of the objects are displayed and are selectable). While Thompson teaches two-dimensional images, Thompson fails to explicitly show thumbnails, as recited in the claims. Dascola teaches digital content items similar to those of Thompson. In addition, Dascola further teaches two-dimensional images as thumbnails (Paragraph 0361: a thumbnail as an example of a two-dimensional representation).
It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, Petill, and Dascola before them before the effective filing date of the claimed invention, to modify the two-dimensional images taught by Thompson to include the two-dimensional images as thumbnails of Dascola, in order to obtain wherein the one or more processors further execute instructions to retrieve thumbnails corresponding to the one or more digital content items and display, in a selectable format, the thumbnails as the recommendation. It would have been advantageous for one to utilize such a combination as a simple substitution: simply substituting Thompson's two-dimensional images with two-dimensional thumbnails would predictably result in displaying selectable representations corresponding to one or more digital content items.

In regard to claim 19, Thompson discloses retrieving two-dimensional images corresponding to the one or more digital content items, and displaying, in a selectable format, the two-dimensional images as the recommendation (Column 13 lines 31-35, Column 13 lines 60-63, Column 46 lines 1-8, and Column 18 lines 55-67: two-dimensional images of the objects are displayed and are selectable). While Thompson teaches two-dimensional images, Thompson fails to explicitly show thumbnails, as recited in the claims. Dascola teaches digital content items similar to those of Thompson. In addition, Dascola further teaches two-dimensional images as thumbnails (Paragraph 0361: a thumbnail as an example of a two-dimensional representation). It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, Petill, and Dascola before them before the effective filing date of the claimed invention, to modify the two-dimensional images taught by Thompson to include the two-dimensional images as thumbnails of Dascola, in order to obtain wherein the instructions cause the one or more processors to retrieve thumbnails corresponding to the one or more digital content items and display, in a selectable format, the thumbnails as the recommendation. It would have been advantageous for one to utilize such a combination as a simple substitution: simply substituting Thompson's two-dimensional images with two-dimensional thumbnails would predictably result in displaying selectable representations corresponding to one or more digital content items.

9. Claims 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Thompson et al. (US 11126320 B1), Chaturvedi (US 11126845 B1), Petill et al. (US 2021/0012113 A1), and further in view of Ma et al. (US 11113893 B1).

In regard to claim 17, while Thompson and Chaturvedi teach generating a recommendation for one or more digital content items aligned with the input based on the context of the virtual environment and displaying, at the client device, the recommendation of the one or more digital content items, they fail to show ranking the one or more digital content items based on a relevance score associated with the context of the virtual environment, and displaying, in a selectable format, the recommendation of the one or more digital content items in an order based on the ranking, as recited in the claims. Ma teaches searching content items similar to that of Thompson and Chaturvedi.
In addition, Ma further teaches ranking one or more digital content items based on a relevance score associated with search criteria, and providing the one or more digital content items in an order based on the ranking (Column 17 lines 7-21: content items matching a search criterion are given a similarity score based on the criterion and sorted based on the similarity score). It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, Petill, and Ma before them before the effective filing date of the claimed invention, to modify the recommendation generation and display taught by Thompson and Chaturvedi to include the ranking of one or more digital content items based on a relevance score associated with search criteria and the providing of the one or more digital content items in an order based on the ranking of Ma, in order to obtain ranking the one or more digital content items based on a relevance score associated with the context of the virtual environment and displaying, in a selectable format, the recommendation of the one or more digital content items in an order based on the ranking. It would have been advantageous for one to utilize such a combination as providing the most relevant content to the user, as suggested by Ma (Column 12 lines 25-28).

In regard to claim 20, while Thompson and Chaturvedi teach generating a recommendation for one or more digital content items aligned with the input based on the context of the virtual environment and displaying, at the client device, the recommendation of the one or more digital content items, they fail to show ranking the one or more digital content items based on a relevance score associated with the context of the virtual environment, and displaying, in a selectable format, the recommendation of the one or more digital content items in an order based on the ranking, as recited in the claims. Ma teaches searching content items similar to that of Thompson and Chaturvedi. In addition, Ma further teaches ranking one or more digital content items based on a relevance score associated with search criteria, and providing the one or more digital content items in an order based on the ranking (Column 17 lines 7-21: content items matching a search criterion are given a similarity score based on the criterion and sorted based on the similarity score). It would have been obvious to one of ordinary skill in the art, having the teachings of Thompson, Chaturvedi, Petill, and Ma before them before the effective filing date of the claimed invention, to modify the recommendation generation and display taught by Thompson and Chaturvedi to include the ranking of one or more digital content items based on a relevance score associated with search criteria and the providing of the one or more digital content items in an order based on the ranking of Ma, in order to obtain ranking the one or more digital content items based on a relevance score associated with the context of the virtual environment and displaying, in a selectable format, the recommendation of the one or more digital content items in an order based on the ranking.
It would have been advantageous for one to utilize such a combination as providing the most relevant content to the user, as suggested by Ma (Column 12 lines 25-28).

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S ULRICH, whose telephone number is (571) 270-1397. The examiner can normally be reached M-F 8-4. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

11. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Nicholas Ulrich/
Primary Examiner, Art Unit 2179
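For orientation, the independent claims as characterized in the rejections describe a pipeline: embed an image of the virtual environment, identify the digital content it contains, caption that content (the step mapped to Petill), infer a context such as a room type (the step mapped to Chaturvedi), then generate and rank recommendations against the user's input (the step mapped to Ma). Below is a minimal sketch of that flow assuming nothing beyond the claim language quoted above; every function, model, and data value is hypothetical and is not taken from the application or the cited references:

    # Hedged, toy illustration of the claimed flow (claims 1, 3, and 9).
    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Stand-in embedding: a deterministic pseudo-random unit vector per
        string. A real system would use a learned image/text encoder here."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=dim)
        return v / np.linalg.norm(v)

    # Steps 1-2: "image embeddings" for content detected in the environment.
    # Detection is faked with a fixed list; a real system would run a detector
    # over an image representation of the virtual environment.
    detected_content = ["couch", "coffee table", "floor lamp"]
    content_embeddings = {n: embed(n) for n in detected_content}

    # Step 3: caption each detected item (a semantic-tag analog).
    captions = {n: f"a {n} in the scene" for n in detected_content}

    # Step 4: infer a coarse context from the detected content, e.g. a room
    # type (the couch -> living room example from the rejection), plus a
    # context vector pooled from the content embeddings.
    room_type = "living room" if "couch" in detected_content else "unknown"
    context_vec = np.stack(list(content_embeddings.values())).mean(axis=0)
    context_vec /= np.linalg.norm(context_vec)

    # Steps 5-6: score catalog items against the user input and the context,
    # then display them in ranked order (claim 9's relevance-score ranking).
    catalog = ["area rug", "bookshelf", "bath towel", "throw pillow"]
    query_vec = embed("something cozy for the seating area")

    def relevance(item: str) -> float:
        v = embed(item)
        return 0.6 * float(v @ query_vec) + 0.4 * float(v @ context_vec)

    print(f"inferred context: {room_type}")
    for item in sorted(catalog, key=relevance, reverse=True):
        print(f"{relevance(item):+.3f}  {item}")

The weighted blend of query and context similarity stands in for the claimed "relevance score associated with the context of the virtual environment"; the 0.6/0.4 weights and the stub embeddings are arbitrary placeholders, not anyone's actual scoring function.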

Prosecution Timeline

Nov 03, 2023 — Application Filed
Oct 31, 2025 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602239 — SYSTEMS AND METHODS FOR RESOLVING AN ERROR CONDITION OF A MEDICAL DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12578844 — DATA PROCESSING METHOD FOR INTERACTION MODE SWITCHING, ELECTRONIC DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572264 — GRAPHICAL USER INTERFACE FOR ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML) COGNITIVE SIGNALS ANALYSIS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572255 — NOTIFICATION MESSAGE DISPLAY METHOD AND APPARATUS, DEVICE, READABLE STORAGE MEDIUM, AND CHIP
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561509 — Page Layout Method and Apparatus
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 77% (+7.6%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month