Prosecution Insights
Last updated: April 19, 2026
Application No. 18/586,750

SYSTEMS AND METHODS FOR GENERATING CONTEXT-AWARE CONTENT ACROSS APPLICATIONS

Status: Non-Final OA (§102)
Filed: Feb 26, 2024
Examiner: HOPE, DARRIN
Art Unit: 2178
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
Grant Probability with Interview: 79%

Examiner Intelligence

Career Allow Rate: 60% (270 granted / 449 resolved; +5.1% vs Tech Center average)
Interview Lift: +19.3% (strong; allowance rate among resolved cases with an interview vs. without)
Avg Prosecution: 4y 2m (typical timeline); 34 applications currently pending
Total Applications: 483 (career history, across all art units)

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 449 resolved cases.
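The headline figures above can be cross-checked with simple arithmetic. A minimal sketch, assuming the interview lift is a percentage-point difference and using the reported 79% with-interview rate as the comparison point (the dashboard's +19.3% likely uses a true without-interview baseline rather than the overall career rate):

```python
# Sanity-check of the derived examiner statistics reported above.
# Assumption: "interview lift" = with-interview allowance rate minus a
# without-interview baseline; here we approximate the baseline with the
# overall career allow rate, which is why the result is close to, but not
# exactly, the reported +19.3%.
granted, resolved = 270, 449
career_allow_rate = granted / resolved             # ~0.601, reported as 60%
with_interview_rate = 0.79                         # reported with-interview rate
lift = with_interview_rate - career_allow_rate     # percentage-point difference

print(round(career_allow_rate * 100))   # 60
print(round(lift * 100, 1))             # 18.9 (vs. reported +19.3%)
```

The small gap between 18.9 and 19.3 points suggests the tool computes the lift against resolved cases that had no interview, not against the blended career rate.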

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is responsive to the communications filed on 26 February 2024. Claims 1-20 are pending.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Saleh (US 2025/0181215 A1).

Per claim 1, Saleh discloses a computing system for generating context-aware content (e.g., computing device 110 as shown in Fig. 1; Abstract; paragraph [0064]), the computing system comprising: one or more processors (e.g., processing system 902 as shown in Fig. 9; paragraph [0101], "Referring still to FIG. 9, processing system 902 may comprise a micro-processor and other circuitry that retrieves and executes software 905 from storage system 903 ..."); and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors (e.g., storage system 903 as shown in Fig. 9; paragraph [0102], "Storage system 903 may comprise any computer readable storage media readable by processing system 902 and capable of storing software 905 ..."), cause the computing system to perform operations, the operations comprising:

providing a user interface (e.g., step 201 as shown in Fig. 2; paragraph [0050], "The computing device displays a content canvas which is populated with content (step 201) ..."; user interface 121 as shown in Fig. 1; paragraph [0064], "... Computing device 110 runs application 120 including causing a local user experience to be displayed via user interface 121 ...") to a user computing system (e.g., computing device 110 as shown in Fig. 1; paragraph [0064]), the user interface including a first content generation environment (e.g., board 132 as shown in Fig. 1; paragraph [0065], "In operational environment 100, application 120 displays board 132 including content items 136 in user interface 121. A user interacts with application 120 to generate content for board 132 ...") and a model interaction environment (e.g., Suggest button 133 as shown in Fig. 1; paragraph [0065], "... Application 120 displays Suggest button 133 which invokes virtual assistant 122 to display suggestions generated by foundation model 150 based on the board state of board 132.") within a first item of a first application (e.g., application 120 as shown in Fig. 1; paragraph [0064]);

providing first content associated with the first application automatically to a generative model (e.g., steps 203-205 as shown in Fig. 2; paragraph [0017], "... In an implementation, the application prompts a foundation model to provide suggestions for enhancing or improving the content of a content board or canvas. Rather than waiting for a user to request assistance with his or her content, the application automatically and proactively captures information from the canvas and supplies it to the foundation model to generate suggestions ..."; paragraph [0052], "The computing device captures a state of the content canvas (step 203). In an implementation, the computing device captures the canvas or board state (i.e., contextual information from the canvas or board) for a prompt for the foundation model ..."; paragraph [0053], "Continuing with process 200, the computing device generates a prompt for a foundation model for follow-on prompts for generating enhancements to the content based on contextual information (step 205) ..."; Examiner's Note: Saleh discloses proactively and continually prompting a generative model to suggest content.);

receiving context-aware content generated by the generative model based at least in part on the first content (e.g., step 207 as shown in Fig. 2; paragraph [0058], "The computing device displays suggestion components corresponding to the follow-on prompts in the user interface (step 207). In an implementation, the computing device displays natural language suggestions for each of the follow-on prompts generated by the foundation model ..."); and

presenting the context-aware content using the model interaction environment of the user interface (e.g., steps 209-211 as shown in Fig. 2; paragraph [0060], "When the computing device receives user input selecting a suggestion component, the computing device sends the corresponding follow-on prompt to the foundation model (step 209) ..."; paragraph [0062], "When the computing device receives the reply from the foundation model, the computing device populates the whiteboard with the enhancement (step 211) ..."; Examiner's Note: Saleh discloses that Suggest button 133 invokes virtual assistant 122 to display suggestions generated by generative or foundation model 150 based on the context of board 132.).

Per claim 2, Saleh discloses the computing system of claim 1, wherein the first content comprises user-generated content within the first content generation environment (e.g., step 201 as shown in Fig. 2; paragraph [0027]; paragraph [0050]; paragraph [0086]; Examiner's Note: Saleh discloses an application allowing a user to add content items to a board, i.e., a first content generation environment.), and wherein the context-aware content comprises a summary of the first content (paragraph [0062], "... If the foundation model returns a summary of the contents of the canvas, the computing device may display the summary in a specially formatted and labeled textbox on the project canvas ...").

Per claim 3, Saleh discloses the computing system of claim 1, wherein the first content comprises at least one of user-generated content within the first content generation environment (e.g., content items 136 as shown in Fig. 1; paragraph [0046], "A brief operational scenario of operational environment 100 follows. A user of computing device 110 interacts with application 120 hosting board 132 displayed in user interface 121. User experiences 131(a), 131(b), and 131(c) include Suggest button 133 and content items 136 ...") or functionalities of the first application (e.g., Suggest button 133 as shown in Fig. 1; paragraph [0065], "... Application 120 displays Suggest button 133 which invokes virtual assistant 122 to display suggestions generated by foundation model 150 based on the board state of board 132."), and wherein the context-aware content comprises a suggested prompt submittable to the generative model upon request by a user (paragraph [0066], "Virtual assistant 122 prompts foundation model 150 to generate follow-on prompts which will task foundation model 150 with generating content for enhancing or improving the content of board 132 ...").

Per claim 4, Saleh discloses the computing system of claim 3, the operations further comprising: receiving a request from the user via the user interface to submit the suggested prompt to the generative model (e.g., step 201 as shown in Fig. 2; paragraph [0050], "The computing device displays a content canvas which is populated with content (step 201) ..."; user interface 121 as shown in Fig. 1; paragraph [0064], "... Computing device 110 runs application 120 including causing a local user experience to be displayed via user interface 121 ..."; Examiner's Note: Saleh proactively and continually captures contextual information and sends a prompt to a generative model as the user interacts with content in the user interface.); providing the suggested prompt to the generative model (e.g., steps 203-205 as shown in Fig. 2; paragraph [0017], "... In an implementation, the application prompts a foundation model to provide suggestions for enhancing or improving the content of a content board or canvas. Rather than waiting for a user to request assistance with his or her content, the application automatically and proactively captures information from the canvas and supplies it to the foundation model to generate suggestions ..."; paragraph [0052], "The computing device captures a state of the content canvas (step 203). In an implementation, the computing device captures the canvas or board state (i.e., contextual information from the canvas or board) for a prompt for the foundation model ..."; paragraph [0053], "Continuing with process 200, the computing device generates a prompt for a foundation model for follow-on prompts for generating enhancements to the content based on contextual information (step 205) ..."; Examiner's Note: Saleh discloses proactively and continually prompting a generative model to suggest content.); receiving responsive content generated by the generative model in response to providing the suggested prompt to the generative model (e.g., step 207 as shown in Fig. 2; paragraph [0058], "The computing device displays suggestion components corresponding to the follow-on prompts in the user interface (step 207). In an implementation, the computing device displays natural language suggestions for each of the follow-on prompts generated by the foundation model ..."); and presenting the responsive content via the user interface (e.g., steps 209-211 as shown in Fig. 2; paragraph [0060], "When the computing device receives user input selecting a suggestion component, the computing device sends the corresponding follow-on prompt to the foundation model (step 209) ..."; paragraph [0062], "When the computing device receives the reply from the foundation model, the computing device populates the whiteboard with the enhancement (step 211) ..."; Examiner's Note: Saleh discloses that Suggest button 133 invokes virtual assistant 122 to display suggestions generated by generative or foundation model 150 based on the context of board 132.).

Per claim 5, Saleh discloses the computing system of claim 1, the operations further comprising: receiving a user-generated prompt via the model interaction environment of the first application (e.g., the user clicks the Suggest button 521 as shown in Fig. 5A; paragraph [0080], "In user experience 510(a) of FIG. 5A, a user clicks the Suggest button 521 to display titles of the follow-on prompts in the form of natural language suggestions for enhancing the content of a project displayed on board 522 ..."); and submitting the user-generated prompt to the generative model (paragraph [0081], "The foundation model is also prompted to generate titles for each follow-on prompt which will be displayed to the user. In various implementations, the follow-on prompts are not displayed to the user, so the user's selection is based on the titles. The foundation model may be tasked with generating multiple different follow-on prompts (e.g., at least three but no more than five) and to limit the size (e.g., character or word length) of the titles for display."; e.g., user experience 510(b) as shown in FIG. 5B; paragraph [0083]), wherein providing the first content associated with the first application automatically to the generative model comprises providing functionalities of the first application associated with the first application automatically to the generative model (e.g., menu 134 as shown in Fig. 1; paragraph [0046], "... When the user clicks or selects Suggest button 133, virtual assistant 122 causes menu 134 to be displayed which includes suggestion components labeled with titles that were generated by foundation model 150 based on contextual information of board 132, such as the text content of content items 136. In menu 134, the user can select a suggestion which will cause virtual assistant 122 to prompt foundation model 150 to create content in accordance with the suggestion. The user can also delete suggestion components which the user deems unsuitable.").

Per claim 6, Saleh discloses the computing system of claim 5, wherein providing the first content associated with the first application automatically to the generative model comprises providing the first content associated with the first application automatically to the generative model when submitting the user-generated prompt to the generative model (paragraph [0079], "FIGS. 5A-5E illustrate user experiences of operational scenario 500 for a proactive prompting for content generation via a foundation model integration of an application, such as a project planning application, in an implementation. In user experience 510(a) of FIG. 5A, the application generates a prompt for submission to a foundation model which tasks the foundation model with suggesting follow-on prompts which task the model with generating suggestions or ideas for enhancing the content of board 522.").
Per claim 7, Saleh discloses the computing system of claim 5, wherein presenting the context-aware content comprises presenting the context-aware content with a reference to a source used by the generative model to create the context-aware content (paragraph [0020]; paragraph [0088], "Board state 611 can include information and/or metadata for notes (e.g., virtual sticky notes), posts, or other content items on the board, such as the text content of the content items, the authors of the various content items, a revision history of the content items, and reactions of other users (e.g., emojis) to the content items ..."; paragraph [0052]; Examiner's Note: Saleh discloses presenting metadata.).

Per claim 8, Saleh discloses the computing system of claim 7, wherein the user-generated prompt specifies the source (paragraph [0020], "The board state includes information and/or metadata for notes (e.g., virtual sticky notes), posts, or other content items on the board, such as the text content of the content items, the authors of the various content items, a record of modifications to the content items, and reactions of other users (e.g., emojis) to the content items."; paragraph [0088]).

Per claim 9, Saleh discloses the computing system of claim 7, wherein the source is another item within the first application or an item of another application, the first application and the other application being part of a productivity suite (paragraph [0032], "In various implementations, technology for proactive prompting as disclosed herein may be implemented in project-planning or collaboration applications, but also productivity applications, such as word-processing applications, presentation applications, note-taking applications, or other applications which support environments for content generation and ideation.").
Per claim 10, Saleh discloses a computer-implemented method for generating context-aware content, the computer-implemented method comprising: providing, by a computing system, a user interface (e.g., step 201 as shown in Fig. 2; paragraph [0050], "The computing device displays a content canvas which is populated with content (step 201) ..."; user interface 121 as shown in Fig. 1; paragraph [0064], "... Computing device 110 runs application 120 including causing a local user experience to be displayed via user interface 121 ...") to a user computing system (e.g., computing device 110 as shown in Fig. 1; paragraph [0064]), the user interface including a first content generation environment (e.g., board 132 as shown in Fig. 1; paragraph [0065], "In operational environment 100, application 120 displays board 132 including content items 136 in user interface 121. A user interacts with application 120 to generate content for board 132 ...") and a model interaction environment (e.g., Suggest button 133 as shown in Fig. 1; paragraph [0065], "... Application 120 displays Suggest button 133 which invokes virtual assistant 122 to display suggestions generated by foundation model 150 based on the board state of board 132.") within a first item of a first application (e.g., application 120 as shown in Fig. 1; paragraph [0064]); providing first content associated with the first application automatically to a generative model (e.g., steps 203-205 as shown in Fig. 2; paragraph [0017], "... In an implementation, the application prompts a foundation model to provide suggestions for enhancing or improving the content of a content board or canvas. Rather than waiting for a user to request assistance with his or her content, the application automatically and proactively captures information from the canvas and supplies it to the foundation model to generate suggestions ..."; paragraph [0052], "The computing device captures a state of the content canvas (step 203). In an implementation, the computing device captures the canvas or board state (i.e., contextual information from the canvas or board) for a prompt for the foundation model ..."; paragraph [0053], "Continuing with process 200, the computing device generates a prompt for a foundation model for follow-on prompts for generating enhancements to the content based on contextual information (step 205) ..."; Examiner's Note: Saleh discloses proactively and continually prompting a generative model to suggest content.); receiving, by a computing system, context-aware content generated by the generative model based at least in part on the first content (e.g., step 207 as shown in Fig. 2; paragraph [0058], "The computing device displays suggestion components corresponding to the follow-on prompts in the user interface (step 207). In an implementation, the computing device displays natural language suggestions for each of the follow-on prompts generated by the foundation model ..."); and presenting, by a computing system, the context-aware content using the model interaction environment of the user interface (e.g., steps 209-211 as shown in Fig. 2; paragraph [0060], "When the computing device receives user input selecting a suggestion component, the computing device sends the corresponding follow-on prompt to the foundation model (step 209) ..."; paragraph [0062], "When the computing device receives the reply from the foundation model, the computing device populates the whiteboard with the enhancement (step 211) ..."; Examiner's Note: Saleh discloses that Suggest button 133 invokes virtual assistant 122 to display suggestions generated by generative or foundation model 150 based on the context of board 132.).

Per claim 11, Saleh discloses the computer-implemented method of claim 10, wherein providing the first content comprises providing user-generated content within the first content generation environment (e.g., step 201 as shown in Fig. 2; paragraph [0027]; paragraph [0050]; paragraph [0086]; Examiner's Note: Saleh discloses an application allowing a user to add content items to a board, i.e., a first content generation environment.), and wherein receiving the context-aware content comprises receiving a summary of the first content (paragraph [0062], "... If the foundation model returns a summary of the contents of the canvas, the computing device may display the summary in a specially formatted and labeled textbox on the project canvas ...").

Per claim 12, Saleh discloses the computer-implemented method of claim 10, wherein providing the first content comprises providing at least one of user-generated content within the first content generation environment (e.g., content items 136 as shown in Fig. 1; paragraph [0046], "A brief operational scenario of operational environment 100 follows. A user of computing device 110 interacts with application 120 hosting board 132 displayed in user interface 121. User experiences 131(a), 131(b), and 131(c) include Suggest button 133 and content items 136 ...") or functionalities of the first application, and wherein receiving the context-aware content comprises receiving a suggested prompt submittable to the generative model upon request by a user (paragraph [0066], "Virtual assistant 122 prompts foundation model 150 to generate follow-on prompts which will task foundation model 150 with generating content for enhancing or improving the content of board 132 ...").

Per claim 13, Saleh discloses the computer-implemented method of claim 12, further comprising: receiving, by the computing system, a request from the user via the user interface to submit the suggested prompt to the generative model (e.g., step 201 as shown in Fig. 2; paragraph [0050], "The computing device displays a content canvas which is populated with content (step 201) ..."; user interface 121 as shown in Fig. 1; paragraph [0064], "... Computing device 110 runs application 120 including causing a local user experience to be displayed via user interface 121 ..."; Examiner's Note: Saleh proactively and continually captures contextual information and sends a prompt to a generative model as the user interacts with content in the user interface.); providing, by the computing system, the suggested prompt to the generative model (e.g., steps 203-205 as shown in Fig. 2; paragraph [0017], "... In an implementation, the application prompts a foundation model to provide suggestions for enhancing or improving the content of a content board or canvas. Rather than waiting for a user to request assistance with his or her content, the application automatically and proactively captures information from the canvas and supplies it to the foundation model to generate suggestions ..."; paragraph [0052], "The computing device captures a state of the content canvas (step 203). In an implementation, the computing device captures the canvas or board state (i.e., contextual information from the canvas or board) for a prompt for the foundation model ..."; paragraph [0053], "Continuing with process 200, the computing device generates a prompt for a foundation model for follow-on prompts for generating enhancements to the content based on contextual information (step 205) ..."; Examiner's Note: Saleh discloses proactively and continually prompting a generative model to suggest content.); receiving, by the computing system, responsive content generated by the generative model in response to providing the suggested prompt to the generative model (e.g., step 207 as shown in Fig. 2; paragraph [0058], "The computing device displays suggestion components corresponding to the follow-on prompts in the user interface (step 207). In an implementation, the computing device displays natural language suggestions for each of the follow-on prompts generated by the foundation model ..."); and presenting, by the computing system, the responsive content via the user interface (e.g., steps 209-211 as shown in Fig. 2; paragraph [0060], "When the computing device receives user input selecting a suggestion component, the computing device sends the corresponding follow-on prompt to the foundation model (step 209) ..."; paragraph [0062], "When the computing device receives the reply from the foundation model, the computing device populates the whiteboard with the enhancement (step 211) ..."; Examiner's Note: Saleh discloses that Suggest button 133 invokes virtual assistant 122 to display suggestions generated by generative or foundation model 150 based on the context of board 132.).

Per claim 14, Saleh discloses the computer-implemented method of claim 10, further comprising: receiving, by the computing system, a user-generated prompt via the model interaction environment of the first application (e.g., the user clicks the Suggest button 521 as shown in Fig. 5A; paragraph [0080], "In user experience 510(a) of FIG. 5A, a user clicks the Suggest button 521 to display titles of the follow-on prompts in the form of natural language suggestions for enhancing the content of a project displayed on board 522 ..."); and submitting, by the computing system, the user-generated prompt to the generative model (paragraph [0081], "The foundation model is also prompted to generate titles for each follow-on prompt which will be displayed to the user. In various implementations, the follow-on prompts are not displayed to the user, so the user's selection is based on the titles. The foundation model may be tasked with generating multiple different follow-on prompts (e.g., at least three but no more than five) and to limit the size (e.g., character or word length) of the titles for display."; e.g., user experience 510(b) as shown in FIG. 5B; paragraph [0083]), wherein providing the first content associated with the first application automatically to the generative model comprises providing functionalities of the first application associated with the first application automatically to the generative model (e.g., menu 134 as shown in Fig. 1; paragraph [0046], "... When the user clicks or selects Suggest button 133, virtual assistant 122 causes menu 134 to be displayed which includes suggestion components labeled with titles that were generated by foundation model 150 based on contextual information of board 132, such as the text content of content items 136. In menu 134, the user can select a suggestion which will cause virtual assistant 122 to prompt foundation model 150 to create content in accordance with the suggestion. The user can also delete suggestion components which the user deems unsuitable.").

Per claim 15, Saleh discloses the computer-implemented method of claim 14, wherein presenting the context-aware content comprises presenting the context-aware content with a reference to a source used by the generative model to create the context-aware content (paragraph [0020]; paragraph [0088], "Board state 611 can include information and/or metadata for notes (e.g., virtual sticky notes), posts, or other content items on the board, such as the text content of the content items, the authors of the various content items, a revision history of the content items, and reactions of other users (e.g., emojis) to the content items ..."; paragraph [0052]; Examiner's Note: Saleh discloses presenting metadata.).
Per claim 16, Saleh discloses one or more non-transitory computer-readable media that collectively store instructions that, when executed by one or more computing devices (e.g., storage system 903 as shown in Fig. 9; paragraph [0102], "Storage system 903 may comprise any computer readable storage media readable by processing system 902 and capable of storing software 905 ..."), cause the one or more computing devices to perform operations, the operations comprising: providing a user interface (e.g., step 201 as shown in Fig. 2; paragraph [0050], "The computing device displays a content canvas which is populated with content (step 201) ..."; user interface 121 as shown in Fig. 1; paragraph [0064], "... Computing device 110 runs application 120 including causing a local user experience to be displayed via user interface 121 ...") to a user computing system (e.g., computing device 110 as shown in Fig. 1; paragraph [0064]), the user interface including a first content generation environment (e.g., board 132 as shown in Fig. 1; paragraph [0065], "In operational environment 100, application 120 displays board 132 including content items 136 in user interface 121. A user interacts with application 120 to generate content for board 132 ...") and a model interaction environment (e.g., Suggest button 133 as shown in Fig. 1; paragraph [0065], "... Application 120 displays Suggest button 133 which invokes virtual assistant 122 to display suggestions generated by foundation model 150 based on the board state of board 132.") within a first item of a first application (e.g., application 120 as shown in Fig. 1; paragraph [0064]); providing first content associated with the first application automatically to a generative model (e.g., steps 203-205 as shown in Fig. 2; paragraph [0017], "... In an implementation, the application prompts a foundation model to provide suggestions for enhancing or improving the content of a content board or canvas. Rather than waiting for a user to request assistance with his or her content, the application automatically and proactively captures information from the canvas and supplies it to the foundation model to generate suggestions ..."; paragraph [0052], "The computing device captures a state of the content canvas (step 203). In an implementation, the computing device captures the canvas or board state (i.e., contextual information from the canvas or board) for a prompt for the foundation model ..."; paragraph [0053], "Continuing with process 200, the computing device generates a prompt for a foundation model for follow-on prompts for generating enhancements to the content based on contextual information (step 205) ..."; Examiner's Note: Saleh discloses proactively and continually prompting a generative model to suggest content.); receiving context-aware content generated by the generative model based at least in part on the first content (e.g., step 207 as shown in Fig. 2; paragraph [0058], "The computing device displays suggestion components corresponding to the follow-on prompts in the user interface (step 207). In an implementation, the computing device displays natural language suggestions for each of the follow-on prompts generated by the foundation model ..."); and presenting the context-aware content using the model interaction environment of the user interface (e.g., steps 209-211 as shown in Fig. 2; paragraph [0060], "When the computing device receives user input selecting a suggestion component, the computing device sends the corresponding follow-on prompt to the foundation model (step 209) ..."; paragraph [0062], "When the computing device receives the reply from the foundation model, the computing device populates the whiteboard with the enhancement (step 211) ..."; Examiner's Note: Saleh discloses that Suggest button 133 invokes virtual assistant 122 to display suggestions generated by generative or foundation model 150 based on the context of board 132.).

Per claim 17, Saleh discloses the one or more non-transitory computer-readable media of claim 16, wherein the first content comprises user-generated content within the first content generation environment (e.g., step 201 as shown in Fig. 2; paragraph [0027]; paragraph [0050]; paragraph [0086]; Examiner's Note: Saleh discloses an application allowing a user to add content items to a board, i.e., a first content generation environment.), and wherein the context-aware content comprises a summary of the first content (paragraph [0062], "... If the foundation model returns a summary of the contents of the canvas, the computing device may display the summary in a specially formatted and labeled textbox on the project canvas ...").

Per claim 18, Saleh discloses the one or more non-transitory computer-readable media of claim 16, wherein the first content comprises at least one of user-generated content within the first content generation environment (e.g., content items 136 as shown in Fig. 1; paragraph [0046], "A brief operational scenario of operational environment 100 follows. A user of computing device 110 interacts with application 120 hosting board 132 displayed in user interface 121. User experiences 131(a), 131(b), and 131(c) include Suggest button 133 and content items 136 ...") or functionalities of the first application (e.g., Suggest button 133 as shown in Fig. 1; paragraph [0065], "... Application 120 displays Suggest button 133 which invokes virtual assistant 122 to display suggestions generated by foundation model 150 based on the board state of board 132."), and wherein the context-aware content comprises a suggested prompt submittable to the generative model upon request by a user (paragraph [0066], "Virtual assistant 122 prompts foundation model 150 to generate follow-on prompts which will task foundation model 150 with generating content for enhancing or improving the content of board 132 ...").

Per claim 19, Saleh discloses the one or more non-transitory computer-readable media of claim 16, the operations further comprising: receiving a user-generated prompt via the model interaction environment of the first application (e.g., the user clicks the Suggest button 521 as shown in Fig. 5A; paragraph [0080], "In user experience 510(a) of FIG. 5A, a user clicks the Suggest button 521 to display titles of the follow-on prompts in the form of natural language suggestions for enhancing the content of a project displayed on board 522 ..."); and submitting the user-generated prompt to the generative model (paragraph [0081], "The foundation model is also prompted to generate titles for each follow-on prompt which will be displayed to the user. In various implementations, the follow-on prompts are not displayed to the user, so the user's selection is based on the titles. The foundation model may be tasked with generating multiple different follow-on prompts (e.g., at least three but no more than five) and to limit the size (e.g., character or word length) of the titles for display."; e.g., user experience 510(b) as shown in FIG. 5B; paragraph [0083]), wherein providing the first content associated with the first application automatically to the generative model comprises providing functionalities of the first application associated with the first application automatically to the generative model (e.g., menu 134 as shown in Fig. 1; paragraph [0046], "... When the user clicks or selects Suggest button 133, virtual assistant 122 causes menu 134 to be displayed which includes suggestion components labeled with titles that were generated by foundation model 150 based on contextual information of board 132, such as the text content of content items 136. In menu 134, the user can select a suggestion which will cause virtual assistant 122 to prompt foundation model 150 to create content in accordance with the suggestion. The user can also delete suggestion components which the user deems unsuitable.").

Per claim 20, Saleh discloses the one or more non-transitory computer-readable media of claim 19, wherein presenting the context-aware content comprises presenting the context-aware content with a reference to a source used by the generative model to create the context-aware content (paragraph [0020]; paragraph [0088], "Board state 611 can include information and/or metadata for notes (e.g., virtual sticky notes), posts, or other content items on the board, such as the text content of the content items, the authors of the various content items, a revision history of the content items, and reactions of other users (e.g., emojis) to the content items ..."; paragraph [0052]; Examiner's Note: Saleh discloses presenting metadata.).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. West et al. (US 2024/0320714 A1) describes a method, non-transitory computer readable medium, apparatus, and system for contextual content generation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARRIN HOPE, whose telephone number is (571) 270-5079. The examiner can normally be reached Mon-Thu 6:45-4:15, Fri 6:45-3:15, alternate Fridays off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen S Hong can be reached at (571)272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. DARRIN HOPE Examiner Art Unit 2178 /STEPHEN S HONG/Supervisory Patent Examiner, Art Unit 2178
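The rejection repeatedly maps the claims onto the Fig. 2 process of Saleh (steps 201-211): capture the board state, prompt the foundation model for titled follow-on prompts, display the titles as suggestions, and, on user selection, submit the chosen follow-on prompt and populate the board with the reply. As a reading aid only, that cited flow can be sketched as below; every function and class name here is a hypothetical illustration, not code from Saleh or from the application under examination.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of the suggestion pipeline the rejection cites
# (Saleh, Fig. 2, steps 201-211). All names are illustrative.

@dataclass
class FollowOn:
    title: str    # short title shown to the user (step 207)
    prompt: str   # full follow-on prompt, hidden from the user

def suggestion_workflow(capture_state: Callable[[], str],
                        generate_follow_ons: Callable[[str], List[FollowOn]],
                        pick: Callable[[List[str]], int],
                        run_prompt: Callable[[str], str]) -> str:
    board_state = capture_state()                  # step 203: capture canvas context
    follow_ons = generate_follow_ons(board_state)  # step 205: model proposes follow-on prompts
    titles = [f.title for f in follow_ons]         # step 207: display titles as suggestions
    chosen = follow_ons[pick(titles)]              # step 209: user selects a suggestion
    return run_prompt(chosen.prompt)               # step 211: reply populates the board

# Toy run with stubbed model and UI:
result = suggestion_workflow(
    capture_state=lambda: "3 sticky notes about Q3 goals",
    generate_follow_ons=lambda state: [
        FollowOn("Summarize board", f"Summarize: {state}"),
        FollowOn("Suggest next steps", f"Next steps for: {state}"),
    ],
    pick=lambda titles: 0,                         # user clicks the first suggestion
    run_prompt=lambda p: f"[model reply to '{p}']",
)
print(result)  # [model reply to 'Summarize: 3 sticky notes about Q3 goals']
```

The sketch makes the claim-mapping concrete: the context (board state) flows to the model automatically, and the user only ever sees and selects titles, not the underlying follow-on prompts, matching the paragraph [0081] passage quoted for claim 19.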

Prosecution Timeline

Feb 26, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582498
PROCESSING OF VIDEO STREAMS RELATED TO SURGICAL OPERATIONS
2y 5m to grant · Granted Mar 24, 2026
Patent 12578757
CONTINUITY OF APPLICATIONS ACROSS DEVICES
2y 5m to grant · Granted Mar 17, 2026
Patent 12547431
DATA STORAGE AND RETRIEVAL SYSTEM FOR SUBDIVIDING UNSTRUCTURED PLATFORM-AGNOSTIC USER INPUT INTO PLATFORM-SPECIFIC DATA OBJECTS AND DATA ENTITIES
2y 5m to grant · Granted Feb 10, 2026
Patent 12547300
USER INTERFACES RELATED TO TIME
2y 5m to grant · Granted Feb 10, 2026
Patent 12541563
INSTRUMENTATION OF SOFT NAVIGATION ELEMENTS OF WEB PAGE APPLICATIONS
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
79%
With Interview (+19.3%)
4y 2m
Median Time to Grant
Low
PTA Risk
Based on 449 resolved cases by this examiner. Grant probability derived from career allow rate.
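The projection figures above appear to follow directly from the examiner statistics reported earlier on the page: 270 grants out of 449 resolved cases gives the 60% grant probability, and adding the reported +19.3% interview lift gives the 79% with-interview figure. The additive-lift combination is an assumption about how the tool derives the number, but the arithmetic checks out:

```python
# Checking the projection arithmetic from the page's own figures.
# The additive-lift model (allow rate + interview lift) is an assumption
# about how the tool computes the "with interview" probability.
career_allow_rate = 270 / 449          # granted / resolved, about 0.601
interview_lift = 0.193                 # reported lift for cases with an interview
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))  # 60
print(round(with_interview * 100))     # 79
```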
