Prosecution Insights
Last updated: April 19, 2026
Application No. 17/961,223

DISCOVERING DIGITAL ASSISTANT TASKS
Final Rejection §103

Filed: Oct 06, 2022
Examiner: STANLEY, JEREMY L
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 2 (Final)

Grant Probability: 48% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 2m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 48% (grants 48% of resolved cases; 131 granted / 276 resolved; -7.5% vs TC avg)
Interview Lift: strong, +44.7% on resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 28 currently pending
Career History: 304 total applications across all art units
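The interview-lift figure above is a simple difference of allowance rates. A minimal sketch of that arithmetic, assuming a hypothetical split of the 276 resolved cases into with- and without-interview groups (the split below is illustrative only, chosen so the difference matches the displayed +44.7%; the tool's actual grouping is not shown in this report):

```python
# Career allow rate from the displayed counts.
granted, resolved = 131, 276
allow_rate = granted / resolved                      # ~47.5%, shown rounded as 48%

# Hypothetical with/without-interview split (illustrative numbers only).
with_granted, with_resolved = 60, 75                 # assumed: 60 of 75 interviewed cases allowed
wo_granted, wo_resolved = granted - with_granted, resolved - with_resolved

# Interview lift = allowance rate with interview minus allowance rate without.
lift = with_granted / with_resolved - wo_granted / wo_resolved

print(f"career allow rate: {allow_rate:.1%}")        # 47.5%
print(f"interview lift:    {lift:+.1%}")             # +44.7%
```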

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 276 resolved cases.
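The per-statute deltas are internally consistent: back-solving the Tech Center average from each displayed rate and delta yields the same baseline in every row. A small check (rates and deltas are taken from the figures above; the implied TC average is derived here, not independently sourced):

```python
# Examiner's per-statute rates and their displayed deltas vs the TC average.
rate = {"101": 0.102, "103": 0.491, "102": 0.135, "112": 0.171}
delta = {"101": -0.298, "103": 0.091, "102": -0.265, "112": -0.229}

for statute in rate:
    tc_avg = rate[statute] - delta[statute]   # implied Tech Center baseline
    print(f"§{statute}: {rate[statute]:.1%} vs TC avg {tc_avg:.1%}")
# Every row implies the same ~40.0% TC baseline, so the deltas check out.
```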

Office Action

§103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

This action is responsive to the Amendment filed on October 30, 2025. Claims 1-23 are pending in the case. Claims 1, 2, 5, 6, 8, 11, 13, 14, 17, 19, and 21-23 are amended. Claims 1, 22, and 23 are the independent claims. This action is final.

Applicant’s Response

In the amendment filed on October 30, 2025, Applicant amended the claims and provided arguments in response to the rejections of the claims under 35 USC 102, 103, and 112 in the previous office action.

Response to Argument/Amendment

Applicant’s arguments in response to the rejection of the claims under 35 USC 112 in the previous office action are acknowledged. Because Applicant’s amendments clarify the claims so that they are no longer indefinite, the rejection is withdrawn.

Applicant’s arguments in response to the rejections of the claims under 35 USC 102 and 103 are acknowledged and have been fully considered. Applicant argues that Kannan, Novitchenko, and Gray are silent with regard to the newly recited limitation “wherein the context of the electronic device includes a content displayed on a display of the electronic device that is referenced by the request” in the amended independent claims. Applicant’s argument is persuasive, and the rejection under 35 USC 102 is withdrawn. However, new grounds of rejection are provided below.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1, 7, 11-14, 18, and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Kannan et al. (US 20160189717 A1) in view of Trufinescu et al. (US 20180131643 A1).
With respect to claims 1, 22, and 23, Kannan teaches an electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing a method; a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device, the one or more programs including instructions for performing a method; and the method, comprising: at an electronic device with one or more processors and memory (e.g. paragraph 0078, computing system including processing units and memory; paragraphs 0098-0099, computer-executable instructions stored on computer-readable storage media and executed on computing device; memory/storage): receiving a user input including a request for potential tasks executable by a digital assistant and a context of the electronic device (e.g. paragraph 0028, extensible digital personal assistant not limited to operating system features/services and is extended to support third party voice enabled applications; paragraph 0041, range of tasks that can be performed on behalf of user by digital assistant extended to include commands for performing available tasks of third party voice enabled applications; paragraph 0049, user is trying to discover which tasks a particular application can perform; paragraph 0051, user providing voice input as shown at 325 of Fig. 3; user voice request in this example is “Which tasks can <application> perform?” as depicted at 330; paragraph 0065, Fig. 6, at 610 a digital voice input is received by a voice-controlled digital personal assistant; paragraph 0066, at 620, determining user voice request for available tasks capable of being performed by voice-enabled application; i.e.
receiving a request for potential tasks which the assistant can perform, including tasks which the assistant can perform via another application, where the request additionally includes contextual information identifying a particular application which the user is requesting commands for); in response to receiving the request: determining, based on the context included in the request, a textual representation of an utterance for performing a task (e.g. paragraph 0040, voice command data structure identifying voice enabled applications and supported commands along with associated voice command variations and examples; paragraph 0052, Fig. 3, assistant determining which tasks the particular application is capable of performing, including example voice commands along with voice examples illustrating how the user can use a given voice command as shown at 340; controlling which voice commands are displayed, which variations etc.; paragraph 0067, at 630, available tasks capable of being performed by voice-enabled application are identified; paragraph 0067, associated voice command variations and voice command examples; i.e. determining which voice commands and variations are available for the particular application (as provided with the request as contextual information) and how the commands/variations should be presented in textual form on a user interface); and providing the textual representation of the utterance for performing the task in an affordance displayed over a user interface on the display of the electronic device (e.g. paragraph 0052, Fig. 3, third graphical user interface displayed listing tasks at 340 including example voice commands/variations; paragraph 0068, at 640, response provided to the user via GUI identifying available tasks, such as displayed list of tasks along with associated voice command variations and voice command examples). 
Kannan does not explicitly disclose wherein the context of the electronic device includes a content displayed on a display of the electronic device that is referenced by the request. However, Trufinescu teaches wherein the context of the electronic device includes a content displayed on a display of the electronic device that is referenced by the request (e.g. paragraph 0024, Fig. 3, user clicking on graphical selector for MOVIE1, and session state of application user interface changed to present specific content for MOVIE1; paragraph 0030, Fig. 3, query 48 from user includes word or phrase and the bot client program is unable to resolve the meaning of the word or phrase within threshold confidence level, and in response, sending context request to application program; example query 48A includes the phrase “When was this movie made?”; paragraph 0032, determining whether query is directed to content related to the session state of the application program; paragraph 0033, sending context request to application program; application program sending current context data, which may include current content 38 presented via the application user interface of the application program; paragraph 0034, Fig. 3, query is “When was this movie made?” and context data includes content for the Movie 1 that is currently being presented by the application user interface). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan and Trufinescu in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources), to incorporate the teachings of Trufinescu (directed to application context aware chatbots) to include the capability to include, in the context, content displayed on a display of the electronic device that is referenced by the request (as taught by Trufinescu). 
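For readers tracing the mapping, the combined flow the rejection describes can be sketched as follows. This is purely illustrative code of the claimed method as the examiner frames it; every name, data structure, and value is hypothetical, and nothing here is taken from the cited references' actual implementations:

```python
from dataclasses import dataclass

@dataclass
class DeviceContext:
    """Context accompanying the discovery request (per the amended claims,
    it includes content currently displayed on the device's screen)."""
    displayed_content: str
    active_app: str

# Hypothetical registry of per-app voice commands (cf. Kannan's voice
# command data structure of supported commands and example variations).
COMMANDS = {
    "movies_app": ["Play {content}", "Add {content} to my queue"],
}

def discover_tasks(request: str, ctx: DeviceContext) -> list[str]:
    """Return textual representations of utterances for tasks the assistant
    can perform, resolved against the on-screen content the request references
    (cf. Trufinescu's context request / current-content exchange)."""
    templates = COMMANDS.get(ctx.active_app, [])
    return [t.format(content=ctx.displayed_content) for t in templates]

ctx = DeviceContext(displayed_content="Movie 1", active_app="movies_app")
print(discover_tasks("What can I do with this?", ctx))
# ['Play Movie 1', 'Add Movie 1 to my queue'] -> displayed in an overlay affordance
```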
One of ordinary skill would have been motivated to perform such a modification in order to enable contextual information about application programs to be communicated to and leveraged by chatbots to process user queries with greater effectiveness and the functionality of the application program and chatbot are extended as described in Trufinescu (paragraph 0010). With respect to claim 7, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed, and Kannan further teaches wherein the task is a first task, and wherein the one or more programs further include instructions for: determining, based on the context included in the request, a second textual representation of a second utterance for performing a second task, wherein the second task is different from the first task (e.g. paragraph 0040, voice command data structure identifying voice enabled applications and supported commands along with associated voice command variations and examples; paragraph 0052, Fig. 3, assistant determining which tasks the particular application is capable of performing, including example voice commands along with voice examples illustrating how the user can use a given voice command as shown at 340; controlling which voice commands are displayed, which variations etc.; paragraph 0067, at 630, available tasks capable of being performed by voice-enabled application are identified; paragraph 0067, associated voice command variations and voice command examples; as shown in Fig. 3 at 324/340, the determined list of tasks for the particular application includes at least first and second different tasks along with corresponding commands/utterances for performing the respective tasks, such as a first task to add a movie to a queue in the application along with a corresponding command and a second task to play a movie in the application along with a corresponding command). 
With respect to claim 11, Kannan in view of Trufinescu teaches all of the limitations of claim 7 as previously discussed, and Kannan further teaches wherein the one or more programs further include instructions for: determining a third textual representation of a third utterance for performing a third task, wherein the third task is not performed with the content displayed on the display of the electronic device (e.g. paragraph 0040, voice command data structure identifying voice enabled applications and supported commands along with associated voice command variations and examples; paragraph 0052, Fig. 3, assistant determining which tasks the particular application is capable of performing, including example voice commands along with voice examples illustrating how the user can use a given voice command as shown at 340; controlling which voice commands are displayed, which variations etc.; paragraph 0067, at 630, available tasks capable of being performed by voice-enabled application are identified; paragraph 0067, associated voice command variations and voice command examples; as shown in Fig. 3 at 324/340, the determined list of tasks for the particular application includes at least three different tasks along with three different corresponding commands/utterances for performing the respective tasks, such as a first task to add a movie to a queue in the application along with a corresponding command, a second task to play a movie in the application along with a corresponding command, and a third task to search for a movie in the application, where these tasks are performed using content of the corresponding application, which is not currently displayed on the screen).
With respect to claim 12, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed, and Kannan further teaches wherein determining, based on the context included in the request, the textual representation of an utterance for performing the task further comprises: determining an application for performing the task by identifying an application included in the request (e.g. paragraph 0049, user is trying to discover which tasks a particular application can perform; paragraph 0051, user providing voice input as shown at 325 of Fig. 3; user voice request in this example is “Which tasks can <application> perform?” as depicted at 330; paragraph 0066, at 620, determining user voice request for available tasks capable of being performed by specific voice-enabled application; paragraph 0067, identifying tasks capable of being performed by the specific application). With respect to claim 13, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed, and Kannan further teaches wherein the one or more programs further include instructions for: providing a plurality of textual representations of utterances in the affordance displayed over the user interface on the display of the electronic device (e.g. as shown in Fig. 3 at 324/340, the determined list of tasks for the particular application, displayed in the user interface, includes a plurality of textual representations of utterances/voice commands for the application). With respect to claim 14, Kannan in view of Trufinescu teaches all of the limitations of claim 13 as previously discussed, and Kannan further teaches wherein an order of the plurality of textual representations of utterances is based on a relation of the plurality of textual representations of utterances to content displayed on the display of the electronic device (e.g. as shown in Fig. 
3 at 324/340, the determined list of tasks for the particular application, displayed in the user interface, includes a plurality of textual representations of utterances/voice commands for the application (i.e. “try [utterance for triggering the assistant to perform the task]”), where these textual representations of utterances/voice commands are displayed grouped with corresponding descriptions of the tasks to be performed (i.e. such as add a movie to my queue, play a movie, and search for a movie, each of which are content displayed on the screen), such that the utterances are displayed in an order on the screen based on the display positions for the corresponding descriptions of the corresponding tasks). With respect to claim 18, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed, and Kannan further teaches wherein the one or more programs further include instructions for: receiving a user input including the utterance; and in response to receiving the user input including the utterance, performing the task (e.g. paragraph 0056, Fig. 4, as depicted at 440, user provides voice input for voice command; in response the assistant determines the voice command, compares to available commands, and then presents results of the command to the user as shown at 445). With respect to claim 19, Kannan in view of Trufinescu teaches all of the limitations of claim 18 as previously discussed, and Kannan further teaches wherein the one or more programs further include instructions for: receiving a second user input including the request for potential tasks executable by the digital assistant in the context of the electronic device (e.g. 
paragraph 0028, extensible digital personal assistant not limited to operating system features/services and is extended to support third party voice enabled applications; paragraph 0041, range of tasks that can be performed on behalf of user by digital assistant extended to include commands for performing available tasks of third party voice enabled applications; paragraph 0049, user is trying to discover which tasks a particular application can perform; paragraph 0051, user providing voice input as shown at 325 of Fig. 3; user voice request in this example is “Which tasks can <application> perform?” as depicted at 330; paragraph 0065, Fig. 6, at 610 a digital voice input is received by a voice-controlled digital personal assistant; paragraph 0066, at 620, determining user voice request for available tasks capable of being performed by voice-enabled application; i.e. receiving a request for potential tasks which the assistant can perform, including tasks which the assistant can perform via another application, where the request additionally includes contextual information identifying a particular application which the user is requesting commands for; i.e. as shown in Fig. 2 at 240, multiple different applications are included on the device (i.e. Netflix, Hulu Plus, Amazon Instant Video), such that the user can provide a different/second request as shown in Fig. 3 regarding which tasks are performable in a different application from that specified in the first request, or can provide another request regarding the same application at a subsequent time; in addition, the context may be a context in which a user is asking about available functionalities in third party applications in general); in response to receiving the request for potential tasks executable by the digital assistant in the context of the electronic device: determining, based on the context included in the request, a textual representation of an utterance other than the received utterance (e.g. 
paragraph 0040, voice command data structure identifying voice enabled applications and supported commands along with associated voice command variations and examples; paragraph 0052, Fig. 3, assistant determining which tasks the particular application is capable of performing, including example voice commands along with voice examples illustrating how the user can use a given voice command as shown at 340; controlling which voice commands are displayed, which variations etc.; paragraph 0067, at 630, available tasks capable of being performed by voice-enabled application are identified; paragraph 0067, associated voice command variations and voice command examples; i.e. determining which voice commands and variations are available for the particular application (as provided with the request as contextual information) and how the commands/variations should be presented in textual form on a user interface; where determined voice commands/variations are those which apply to the second/different application and are therefore different/other than the received utterance corresponding to a command for the first application); and providing the textual representation of the utterance other than the received utterance (e.g. paragraph 0052, Fig. 3, third graphical user interface displayed listing tasks at 340 including example voice commands/variations; paragraph 0068, at 640, response provided to the user via GUI identifying available tasks, such as displayed list of tasks along with associated voice command variations and voice command examples; where determined and displayed voice commands/variations are those which apply to the second/different application and are therefore different/other than the received utterance corresponding to a command for the first application). 
With respect to claim 20, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed, and Kannan further teaches wherein the one or more programs further include instructions for: detecting a user input selecting the utterance; and in response to detecting the user input selecting the utterance performing the task (e.g. paragraph 0056, Fig. 4, as depicted at 440, user provides voice input for voice command; in response the assistant determines the voice command, compares to available commands, and then presents results of the command to the user as shown at 445). With respect to claim 21, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed, and Kannan further teaches wherein the one or more programs further include instructions for: after performing the task, displaying a result of the task on the display of the electronic device (e.g. paragraph 0056, Fig. 4, as depicted at 440, user provides voice input for voice command; in response the assistant determines the voice command, compares to available commands, and then presents results of the command to the user as shown at 445). Claims 1, 2-6, 8-10, 16, 17, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kannan in view of Trufinescu, further in view of Novitchenko et al. (US 20200380973 A1). With respect to claims 1, 22, and 23, Kannan in view of Trufinescu teaches the limitations of the claims as previously discussed. Assuming arguendo that Kannan does not explicitly disclose that the textual representation of the utterance is provided in an affordance displayed over a user interface on a screen of the electronic device, Novitchenko teaches that the textual representation of the utterance is provided in an affordance displayed over a user interface on a screen of the electronic device (e.g. paragraph 0287, Fig. 9A, suggestion affordance 912 overlaying the interface). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to display the textual representations of the utterances in an affordance which is displayed over a user interface (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 2, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed. Kannan does not explicitly disclose wherein determining, based on the context included in the request, the textual representation of the utterance for performing the task further comprises: selecting the utterance based on a usage history associated with a user providing the user input and the digital assistant.
However, Novitchenko teaches wherein determining, based on the context included in the request, the textual representation of the utterance for performing the task further comprises: selecting the utterance based on a usage history associated with a user providing the user input and the digital assistant (e.g. paragraph 0243, determining based on context data a task that may be performed by a digital assistant; determining whether suggestion criteria associated with the determined task are satisfied; if suggestion criteria satisfied, device providing suggestion indicating that the determined task may be performed using the digital assistant; paragraph 0248, context data including previous state of the device, such as user inputs that the device previously received, previous tasks performed on the device, whether the tasks previously performed were performed using a digital assistant of the electronic device, suggestions previously provided, user inputs previously received in response to suggestions, etc.; paragraph 0254, determining based on the context data a task that may be performed by digital assistant in response to natural language expression; paragraph 0269, determining whether suggestion criteria satisfied based on context data associated with user of the electronic device; personalizing suggestions based on user’s current and past usage of the electronic device; paragraph 0272, providing suggestion indicating that determined task may be performed using the digital assistant; paragraph 0274, suggestion includes indication as to how user can use the digital assistant to perform the digital task, such as a natural-language expression including trigger that, when provided to the device, causes the electronic device to initiate dialog with the digital assistant; i.e. where selection of the suggested task includes selection of the expression associated with the suggested task, and this selection is based on user’s usage history).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to select the task and associated utterance for display to the user based on usage history associated with the user and the digital assistant (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 3, Kannan in view of Trufinescu, further in view of Novitchenko teaches all of the limitations of claim 2 as previously discussed, and Novitchenko further teaches wherein the utterance is selected based on a number of times the utterance is included in the usage history associated with the user and the digital assistant (e.g.
paragraphs 0249-0251, context data including number of times the user has previously performed particular task using the digital assistant; number of times the device has provided suggestion indicating particular task may be performed by digital assistant to the user; user inputs previously received in response to suggestion (including via speech input/utterance); paragraph 0264, suggestion criteria include requirement that the determined task has not been performed on the electronic device using digital assistant; task never performed using digital assistant; i.e. where the task is performed using the digital assistant in response to a corresponding speech/utterance input by the user, the context data indicating a number of times the task has been invoked via the assistant by the user includes/indicates a number of times which the corresponding speech/utterance has been provided in the usage history of the device, and this number of task invocations/utterances is utilized as context data for selecting the task as a suggestion, and as a suggestion criterion for determining whether to present the task suggestion to the user).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to select the task and associated utterance for display to the user based on usage history, including context/suggestion criteria information indicating a number of times the task was invoked by the user using the associated speech/utterance, associated with the user and the digital assistant (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 4, Kannan in view of Trufinescu, further in view of Novitchenko teaches all of the limitations of claim 3 as previously discussed, and Novitchenko further teaches wherein the utterance is selected when the number of times the utterance is included in the usage history is zero (e.g.
paragraphs 0249-0251, context data including number of times the user has previously performed particular task using the digital assistant; number of times the device has provided suggestion indicating particular task may be performed by digital assistant to the user; user inputs previously received in response to suggestion (including via speech input/utterance); paragraph 0264, suggestion criteria include requirement that the determined task has not been performed on the electronic device using digital assistant; task never performed using digital assistant; i.e. where the task is performed using the digital assistant in response to a corresponding speech/utterance input by the user, the context data indicating a number of times the task has been invoked via the assistant by the user includes/indicates a number of times which the corresponding speech/utterance has been provided in the usage history of the device, and this number of task invocations/utterances is utilized as context data for selecting the task as a suggestion, and as a suggestion criterion for determining whether to present the task suggestion to the user; it is noted that context/suggestion criteria indicating that the task has never been invoked (and the corresponding utterance therefore has never been used) indicates that the utterance has been included in the usage history zero times).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to select the task and associated utterance for display to the user based on usage history, including context/suggestion criteria information indicating a number of times the task was invoked by the user using the associated speech/utterance, including never/zero times, associated with the user and the digital assistant (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 5, Kannan in view of Trufinescu teaches all of the limitations of claim 1 as previously discussed. 
Kannan does not explicitly disclose wherein determining, based on the context included in the request, the textual representation of an utterance for performing the task further comprises: determining content displayed on the screen of the electronic device; and determining the textual representation of the utterance for performing the task based on the content displayed on the screen of the electronic device. However, Novitchenko teaches wherein determining, based on the context included in the request, the textual representation of the utterance for performing the task further comprises: determining the content displayed on the display of the electronic device; and determining the textual representation of the utterance for performing the task based on the content displayed on the display of the electronic device (e.g. paragraph 0243, determining based on context data a task that may be performed by a digital assistant; determining whether suggestion criteria associated with the determined task are satisfied; if suggestion criteria satisfied, device providing suggestion indicating that the determined task may be performed using the digital assistant; paragraph 0245, context data indicating application currently open on the device; paragraph 0254, determining task based on context data; context data indicating that a clock application is open on the electronic device, determining a task based on the clock application; paragraph 0258, determining search screen interface is open on the electronic device and receives text input “clock” in search field; determining user is attempting to search for and open clock application; determining task of setting an alarm or timer in the clock application; paragraph 0272, providing suggestion indicating that determined task may be performed using the digital assistant; paragraph 0274, suggestion includes indication as to how user can use the digital assistant to perform the digital task, such as a natural-language expression 
including a trigger that, when provided to the device, causes the electronic device to initiate dialog with the digital assistant; i.e. where the determined context data includes content displayed on the screen such as of a currently opened application or currently entered search input text, and the task is determined based on this context data, including an associated natural language expression/utterance, and this information is displayed to the user as a suggestion). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to select the task and associated utterance for display to the user based on content currently displayed on the screen (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). As previously discussed, Trufinescu teaches that the content is content that is referenced by the request (e.g. paragraph 0024, Fig. 
3, user clicking on graphical selector for MOVIE1, and session state of application user interface changed to present specific content for MOVIE1; paragraph 0030, Fig. 3, query 48 from user includes word or phrase and the bot client program is unable to resolve the meaning of the word or phrase within threshold confidence level, and in response, sending context request to application program; example query 48A includes the phrase “When was this movie made?”; paragraph 0032, determining whether query is directed to content related to the session state of the application program; paragraph 0033, sending context request to application program; application program sending current context data, which may include current content 38 presented via the application user interface of the application program; paragraph 0034, Fig. 3, query is “When was this movie made?” and context data includes content for the Movie 1 that is currently being presented by the application user interface). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Novitchenko, and Trufinescu in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization), to incorporate the teachings of Trufinescu (directed to application context aware chatbots) to include the capability to include, in the context, content displayed on a display of the electronic device that is referenced by the request (as taught by Trufinescu). 
One of ordinary skill would have been motivated to perform such a modification in order to enable contextual information about application programs to be communicated to and leveraged by chatbots to process user queries with greater effectiveness, and to extend the functionality of the application program and chatbot, as described in Trufinescu (paragraph 0010). With respect to claim 6, Kannan in view of Trufinescu, further in view of Novitchenko teaches all of the limitations of claim 1 as previously discussed, and Novitchenko further teaches wherein the content displayed on the display of the electronic device includes an application (e.g. paragraph 0245, context data indicating application currently open on the device; paragraph 0254, determining task based on context data; context data indicating that a clock application is open on the electronic device, determining a task based on the clock application; paragraph 0258, determining search screen interface is open on the electronic device and receives text input “clock” in search field; determining user is attempting to search for and open clock application; determining task of setting an alarm or timer in the clock application). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to select the task and associated utterance for display to the user based on content, including an application, currently displayed on the screen (as taught by Novitchenko). 
One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 8, Kannan in view of Trufinescu teaches all of the limitations of claim 7 as previously discussed. Kannan does not explicitly disclose wherein the first task and the second task are both performed with the content displayed on the display of the electronic device. However, Novitchenko teaches wherein the first task and the second task are both performed with the content displayed on the display of the electronic device (e.g. paragraphs 0254 and 0258, providing multiple different suggested tasks (with associated natural language phrases/utterances) which may be performed with the currently open/displayed application, such as a task to set an alarm and another task to set a timer within a currently open/displayed clock application). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to provide, as the first and second tasks (i.e. 
of Kannan, such as first and second tasks and associated utterances for performing the tasks, which are performable within an application using a digital assistant) first and second tasks which are performed using content, such as the application, which is currently open and displayed on the device (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 9, Kannan in view of Trufinescu teaches all of the limitations of claim 7 as previously discussed. Kannan does not explicitly disclose wherein the second textual representation of the second utterance for performing the second task is further determined based on other contextual data. However, Novitchenko teaches wherein the second textual representation of the second utterance for performing the second task is further determined based on other contextual data (e.g. 
paragraphs 0245-0253, context data including context data based on current state of device and context data based on previous state of device; context data indicating software application currently open; context data indicating current value or setting of electronic device such as wireless communication protocol/setting, device mode of the device, etc.; context data including physical state of the device such as current speed, acceleration, directional movement, location, orientation, temperature, or signal strength of the device; context data indicating previous user inputs, previous tasks performed, whether tasks performed with assistant, previous suggestions, previous responses to suggestions; user specific context data; context data indicating previous tasks, numbers of times performed, whether digital assistant used, etc.; number of times suggestion of task provided, responses, user inputs, etc. in response to suggestions, etc.; context data including frequency of user behaviors, context data associated with additional devices associated with the user; i.e. the device may provide multiple different suggested tasks (including corresponding utterances), and the tasks (and corresponding utterances) may be determined based on various different contextual data). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to provide the first and second tasks based on various different contextual data (as taught by Novitchenko). 
One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 10, Kannan in view of Trufinescu, further in view of Novitchenko teaches all of the limitations of claim 9 as previously discussed, and Novitchenko further teaches wherein the other contextual data includes data recently accessed on the electronic device (e.g. paragraph 0207, contextual information including software and hardware states of device at the time user request is received; paragraph 0245, currently open application; paragraphs 0249-0250, context data indicating number of times user has performed task within time period/amount of time elapsed since the user last used the assistant to perform the task, such as within the previous week, month, etc.; paragraph 0253, context data indicating previous interactions with digital assistant of additional devices within predetermined period such as a week, month, etc.; paragraph 0254, context data indicating clock application currently open; paragraph 0258, context data indicating search screen interface is open, text input “clock” is in search field; i.e. where the user’s interaction with the device, such as via an application or digital assistant to perform a task, indicates data accessed on the device (i.e. 
such as the application, assistant, or other data associated with the task) within a particular time period, including a time period which may be considered “recent” (such as within the past week, month, etc.)). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability to utilize, as context data, indications of data (associated with/including a program, task, assistant function, etc.) recently (within past week, month, etc.) accessed on the device (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 16, Kannan in view of Trufinescu teaches all of the limitations of claim 13 as previously discussed. 
Kannan does not explicitly disclose wherein the one or more programs further include instructions for: detecting selection of a user interface object; and in response to detecting selection of the user interface object, displaying a second plurality of textual representations of utterances in the affordance. However, Novitchenko teaches wherein the one or more programs further include instructions for: detecting selection of a user interface object; and in response to detecting selection of the user interface object, displaying a second plurality of textual representations of utterances in the affordance (e.g. paragraph 0289, Fig. 9A, suggestion affordance 912 displayed in user interface is a selectable affordance; selecting suggestion affordance 912 may cause suggestion/tips software application to be opened to display an interface in which suggestions that have been previously provided by the device can be collected, categorized, and displayed). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability for the user to select an affordance (including an affordance in which the suggestion/utterance is presented) and, in response, display a plurality of previously provided suggestions (where these suggestions include corresponding textual representations of utterances) (as taught by Novitchenko). 
One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). With respect to claim 17, Kannan in view of Trufinescu, further in view of Novitchenko teaches all of the limitations of claim 16 as previously discussed, and Novitchenko further teaches wherein the second plurality of textual representations of utterances are unrelated to the content displayed on the display of the electronic device (e.g. paragraph 0289, Fig. 9A, suggestion affordance 912 displayed in user interface is a selectable affordance; selecting suggestion affordance 912 may cause suggestion/tips software application to be opened to display an interface in which suggestions that have been previously provided by the device can be collected, categorized, and displayed; i.e. where at least some of the displayed previously provided suggestions/tips (including their associated utterances) may be related to various different applications (as shown in at least Figs. 9A-D), and therefore unrelated to at least some other content displayed on the screen). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Novitchenko in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Novitchenko (directed to voice assistant discoverability through on-device targeting and personalization) to include the capability for the user to select an affordance (including an affordance in which the suggestion/utterance is presented) and, in response, display a plurality of previously provided suggestions (where these suggestions include corresponding textual representations of utterances), where these suggestions/utterances may be related to various different applications and therefore may be unrelated to at least some content which is currently displayed on the screen (such as content of a current application) (as taught by Novitchenko). One of ordinary skill would have been motivated to perform such a modification in order to improve an electronic device’s ability to provide a suggestion indicating a task that may be performed by a digital assistant of the electronic device, and to increase the relevancy and usefulness of the suggestion and thus increase the likelihood that the user will engage with and learn from the suggestion, making the device more efficient, reducing power usage and improving battery life by enabling the user to use the device to perform tasks more quickly and efficiently as described in Novitchenko (paragraph 0010). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kannan in view of Trufinescu, further in view of Gray (US 9990176 B1). With respect to claim 15, Kannan in view of Trufinescu teaches all of the limitations of claim 13 as previously discussed. 
Kannan does not explicitly disclose wherein an order of the plurality of textual representations of utterances is based on a number of times each of the plurality of textual representations of utterances was previously received by the digital assistant. However, Gray teaches wherein an order of the plurality of textual representations of utterances is based on a number of times each of the plurality of textual representations of utterances was previously received by the digital assistant (e.g. col. 36 lines 49-64, Fig. 5, frequent utterances module storing data corresponding to each utterance and arranging by which utterances were made most often; i.e. as shown in Fig. 5, the set of utterances may be ordered (such as in a numerical list) by frequency/number of times the utterance is received). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kannan, Trufinescu, and Gray in front of him to have modified the teachings of Kannan (directed to discovering capabilities of third-party voice-enabled resources) and Trufinescu (directed to application context aware chatbots), to incorporate the teachings of Gray (directed to latency reduction for content playback, such as by an assistant in response to an utterance) to include the capability to order the textual representations of the utterances based on the number of times they are historically received by the assistant (as taught by Gray). One of ordinary skill would have been motivated to perform such a modification in order to reduce the amount of bandwidth consumed and provide responses to spoken utterances in a substantially instantaneous manner as described in Gray (abstract). It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. 
“The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY whose telephone number is (469)295-9105. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. /JEREMY L STANLEY/ Primary Examiner, Art Unit 2127

Prosecution Timeline

Oct 06, 2022
Application Filed
Mar 11, 2024
Response after Non-Final Action
Sep 22, 2025
Non-Final Rejection — §103
Oct 22, 2025
Applicant Interview (Telephonic)
Oct 22, 2025
Examiner Interview Summary
Oct 30, 2025
Response Filed
Jan 09, 2026
Final Rejection — §103
Apr 06, 2026
Examiner Interview Summary
Apr 06, 2026
Applicant Interview (Telephonic)
Apr 10, 2026
Request for Continued Examination
Apr 16, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591827
ETHICAL CONFIDENCE FABRICS: MEASURING ETHICAL ALGORITHM DEVELOPMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12580783
CONFIGURING 360-DEGREE VIDEO WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572266
ACCESSING AND DISPLAYING INFORMATION CORRESPONDING TO PAST TIMES AND FUTURE TIMES
2y 5m to grant Granted Mar 10, 2026
Patent 12561041
Systems, Methods, and Graphical User Interfaces for Interacting with Virtual Reality Environments
2y 5m to grant Granted Feb 24, 2026
Patent 12555684
ASSESSING A TREATMENT SERVICE BASED ON A MEASURE OF TRUST DYNAMICS
2y 5m to grant Granted Feb 17, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
92%
With Interview (+44.7%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
