Prosecution Insights
Last updated: April 19, 2026
Application No. 18/476,380

SYSTEMS AND METHODS FOR ACTION SUGGESTIONS

Status: Final Rejection (§103)
Filed: Sep 28, 2023
Examiner: HASSAN, ALI MOHAMAD
Art Unit: 2653
Tech Center: 2600 (Communications)
Assignee: Yahoo Assets LLC
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70%, above average (7 granted / 10 resolved; +8.0% vs TC avg)
Interview Lift: +33.3% (allowance on resolved cases with vs. without interview)
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 19
Total Applications: 29 (across all art units)

Statute-Specific Performance

§101: 30.8% (-9.2% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 10 resolved cases.
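The headline metrics above are simple ratios over the examiner's resolved cases. As a rough sketch of how a dashboard could derive them (the TC-average baseline of 0.62 is inferred here from the +8.0% delta shown above and is an assumption, not an official USPTO figure):

```python
# Sketch: deriving the examiner metrics shown above from raw counts.
# The TC-average baseline is inferred from the displayed delta, not official data.

granted, resolved = 7, 10
career_allow_rate = granted / resolved                 # 0.70 -> "70%"

tc_avg_allow_rate = 0.62                               # assumed; implied by +8.0%
delta_vs_tc = career_allow_rate - tc_avg_allow_rate    # -> "+8.0% vs TC avg"

# Interview lift: allowance-rate difference between resolved cases
# that had an examiner interview and those that did not.
def interview_lift(rate_with, rate_without):
    return rate_with - rate_without

print(f"{career_allow_rate:.0%} career allow rate, {delta_vs_tc:+.1%} vs TC avg")
```

With only 10 resolved cases, these percentages carry wide error bars; the per-statute figures above rest on the same small sample.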

Office Action (§103)
DETAILED ACTION

Response to Amendment and Arguments

Applicant’s arguments, see page 13, filed 10/15/2025, with respect to the rejection of claims 1-20 have been fully considered and are persuasive. The §101 rejection of claims 1-20 has been withdrawn.

Applicant’s arguments with respect to claims 1, 9, and 17 have been considered but are not persuasive. Applicant states that “At no point does Hattangady teach using an LLM to analyze messages to identify potential actions that should be taken on behalf of the user, nor does it teach generating descriptions of such actions for user approval.” However, Hattangady does teach analyzing messages for a potential action using an LLM (col 15 lines 9-13, col 20 lines 55-60, col 20 lines 22-26, col 2 lines 39-52, col 10 lines 36-41, and/or col 26 lines 57-60), which shows that Hattangady analyzes messages to generate a reply using an LLM. Specifically, col 2 lines 39-52: “Examples described in this disclosure relate to systems and methods for generating a suggested message through the use of a generative artificial intelligence (AI) model, such as a large language model (LLM). In an example implementation, an electronic-communications productivity application is used to help a user to generate an electronic communication, such as an email, text message, chat message, or the like. Such electronic communications are hereinafter referred to generally as messages and the electronic communications productivity application is hereinafter referred to generally as a messaging application. According to an example, a message generator is provided that generates complex messages from LLMs, such as a suggested draft reply to a selected message.”; col 6 lines 15-27: “In some examples, the message selection causes the message generator 110 to perform a multi-turn process with the generative AI model 108 to generate a suggested draft reply 233 to the selected message 222. For instance, data communication 210 corresponds to communications between the messaging application 112 and the preprocessor 202 of the message generator 110 in a first turn of the multi-turn process. In the first turn, the preprocessor 202 receives an indication of the message selection and extracts data from the selected message 222. According to an example implementation, the extracted data includes at least a portion of the body of the message 222.”; and col 20 lines 55-60: “For instance, the generative AI model 108 analyzes the query and uses information included in the context object to understand the context of the prompt. The generative AI model 108 further generates text output in response to the query and provides the response to the message generator 110.” This shows that the LLM analyzes the messages and then generates a response for the user, where the claimed potential action is interpreted as the potential reply and the description as the content of the reply.

Further, Applicant argues that Hattangady does not teach “executing actions on behalf of a user based on analyzing incoming messages”; however, Hattangady does teach executing on behalf of the user (col 17 lines 7-17, col 2 lines 39-52): the user receives the suggested reply, confirms it, and/or can send it to the recipient. Hence, Applicant’s arguments for claims 1, 9, and 17 are not persuasive and the rejection is maintained. Likewise, Applicant’s argument for claim 19 is not persuasive and the rejection is maintained.
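The Hattangady flow the examiner relies on above (an LLM analyzes a selected message, a draft reply is surfaced as the "potential action," and nothing is executed until the user approves) can be sketched as follows. All function and variable names here are hypothetical illustrations, not code from any cited reference:

```python
# Illustrative sketch of the suggested-reply flow the examiner maps onto the
# claims: an LLM analyzes a selected message and drafts a reply, and the reply
# is sent only after user confirmation. All names here are hypothetical.

def suggest_reply(message_body, llm):
    # "First turn": extract data from the selected message into a context object.
    context = {"body": message_body}
    # "Second turn": prepend a request phrase to the context object as the prompt.
    prompt = "Write a suggested draft reply to the following message: " + context["body"]
    return llm(prompt)  # the draft reply surfaced to the user for review

def send_if_approved(draft, user_approves, outbox):
    # Execution "on behalf of the user" occurs only after confirmation.
    if user_approves:
        outbox.append(draft)

# Stand-in for a real generative AI model call (hypothetical).
fake_llm = lambda prompt: "Sure, Tuesday at 3pm works for me."

outbox = []
draft = suggest_reply("Can we meet Tuesday at 3pm?", fake_llm)
send_if_approved(draft, user_approves=True, outbox=outbox)
```

The dispute above turns on whether this confirm-then-send pattern is "executing actions on behalf of a user," which the sketch makes concrete: the send step is gated on explicit approval.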
Applicant’s arguments with respect to claims 1, 9, and 17 regarding the limitations “receiving, by the processor, user input related to the potential action via a chat interface, wherein the user input comprises a modification of the potential action; performing, by the processor, on behalf of the user, a subsequent action conforming to the user input wherein performing the subsequent action comprises performing the new potential action via a large language model large language model agent that interoperates with the large language model.” have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 16, 17, 18, 19, and 20 are rejected under 35 U.S.C. 103 as obvious over U.S. Patent Application Publication US 20170193349 A1 (Jothilingam, Raghu) in view of U.S. Patent US 11962546 B1 (Hattangady, Poonam Ganesh), in further view of U.S. Patent Application Publication US 20220385703 A1 (Joshi, Sachindra).

Claims 1, 9, and 17

Regarding claims 1, 9, and 17, Raghu teaches: A method comprising: identifying, by a processor, a plurality of electronic messages addressed to a user; (Paragraph 29 "FIG. 1 illustrates an example environment 100 in which example processes involving task extraction, operations, and management as described herein can operate. In some examples, the various devices and/or components of environment 100 include a variety of computing devices 102. By way of example and not limitation, computing devices 102 may include devices 102a-102e. Although illustrated as a diverse variety of device types, computing devices 102 can be other device types and are not limited to the illustrated device types. Computing devices 102 can comprise any type of device with one or multiple processors 104 operably connected to an input/output interface 106 and computer-readable media 108, e.g., via a bus 110. Computing devices 102 can include personal computers such as, for example, desktop computers 102a, laptop computers 102b, tablet computers 102c, telecommunication devices 102d, personal digital assistants (PDAs) 102e, electronic book readers, wearable computers (e.g., smart watches, personal health tracking accessories, etc.), automotive computers, gaming devices, etc. Computing devices 102 can also include, for example, server computers, thin clients, terminals, and/or work stations.
In some examples, computing devices 102 can include components for integration in a computing device, appliances, or other sorts of devices." Paragraph 43 “In some examples, techniques for extraction may involve a hierarchy of analysis, including using a sentence-centric approach, consideration of multiple sentences in a message, and global analyses of relatively long communication threads. In some implementations, such relatively long communication threads may include sets of messages over a period of time, and sets of threads and longer-term communications (e.g., spanning days, weeks, months, or years). Multiple sources of content associated with particular communications may be considered. Such sources may include histories and/or relationships of/among people associated with the particular communications, locations of the people during a period of time, calendar information of the people, and multiple aspects of organizations and details of organizational structure associated with the people.” Paragraph 49 " FIG. 3 is a block diagram illustrating an electronic communication 302 that includes an example text thread and a task extraction process 304 of a task. For example, communication 302, which may be a text message to a user received on a computing device of the user from another user, includes text 306 from the other user. Task extraction process 304 includes analyzing content (e.g., text 306) of communication 302 and determining a task. In the example illustrated in FIG. 3, text 306 by the other user includes a task 308 that the user writes a presentation for a meeting on May 9.sup.th. Task extraction process 304 may determine the task by any of a number of techniques involving analyzing text 306. In some implementations, if the text is insufficient for determining a task (e.g., “missing” information or highly uncertain information), then task extraction process 304 may query any of a number of data sources. 
For example, if text 306 did not include the date of the meeting (e.g., the other user may assume that the user remembers the date), then task extraction process 304 may query a calendar of the user or the other user for the meeting date.") extracting, from the plurality of electronic messages, at least one of an action, a subject, and a keyword associated with a potential action; (Fig. 2 shows extracting a task (208) from a communication, with further task parameter extraction for an action, subject, and keyword. Fig. 3 shows an example of extracting a task, action, subject, and keyword. Paragraph 17 "… For example, the system may examine other messages exchanged by one or both of the authors of the email exchange or by other people. The system may also examine larger corpora of email and other messages….") performing, by the processor, on behalf of the user, a subsequent action conforming to the user input wherein performing the subsequent action (Paragraph 79 "At block 1008, task operations module 402 may provide a list of the task-oriented actions to the user for inspection or review. For example, a task-oriented action may be to find or locate digital artefacts (e.g., documents) related to a particular task to support completion of, or user comprehension of, a task activity. At diamond 1010, the user may select among choices of different possible actions to be performed by task operations module 402, refine possible actions, delete actions, manually add actions, and so on. If there are any such changes, then process 1000 may return to block 1004 where task operations module 402 may re-generate task-oriented processes in view of the user's edits of the task-oriented process list. On the other hand, if the user approves the list, then process 1000 may proceed to block 1012 where task operations module 402 performs the task-oriented processes.
At block 1014, the task operations module may generate and display a visual cue and productivity report, for example.") Raghu does not explicitly teach all of: analyzing, by a large language model executed by the processor, the plurality of electronic messages and the extracted at least one of the action, subject, and keyword to identify potential actions; suggesting, by the processor, to the user, the generated description of a potential action identified by the large language model by displaying the description in a user interface; receiving, by the processor, user input related to the potential action via a chat interface, wherein the user input comprises a modification of the potential action; generating, using the large language model, a new potential action based on the modification; and performing, by the processor, on behalf of the user, a subsequent action conforming to the user input wherein performing the subsequent action comprises performing the new potential action via a large language model large language model agent that interoperates with the large language model. However, Hattangady teaches analyzing, by a large language model executed by the processor, the plurality of electronic messages and the extracted at least one of the action, subject, and keyword to identify potential actions; (Col 15 lines 9-13 "At operation 410, a first output from the generative AI model 108 is received. For instance, the generative AI model 108 analyzes the first text query and uses information included in the first context object to understand the context of the first prompt." Col 20 lines 55-60 "For instance, the generative AI model 108 analyzes the query and uses information included in the context object to understand the context of the prompt. The generative AI model 108 further generates text output in response to the query and provides the response to the message generator 110."
Col 20 lines 22-26 "The second text prompt is represented as data communication 240 in FIG. 2 as a communication between the query interface 204 and the generative AI model 108. For instance, the generative AI model 108 analyzes the second text prompt to generate a relevant response." col 2 line 39- 52 "Examples described in this disclosure relate to systems and methods for generating a suggested message through the use of a generative artificial intelligence (AI) model, such as a large language model (LLM). In an example implementation, an electronic-communications productivity application is used to help a user to generate an electronic communication, such as an email, text message, chat message, or the like. Such electronic communications are hereinafter referred to generally as messages and the electronic communications productivity application is hereinafter referred to generally as a messaging application. According to an example, a message generator is provided that generates complex messages from LLMs, such as a suggested draft reply to a selected message." Col3 lines 11-27 "The example system 100 generates a suggested message using a generative AI model 108, which may be an LLM. According to an aspect, the system 100 includes a computing device 102 that may take a variety of forms, including, for example, desktop computers, laptops, tablets, smart phones, wearable devices, gaming devices/platforms, virtualized reality devices/platforms (e.g., virtual reality (VR), augmented reality (AR), mixed reality (MR)), etc. The computing device 102 has an operating system that provides a graphical user interface (GUI) that allows users to interact with the computing device 102 via graphical elements, such as application windows (e.g., display areas), buttons, icons, and the like. 
For example, the graphical elements are displayed on a display screen 104 of the computing device 102 and can be selected and manipulated via user inputs received via a variety of input device types (e.g., keyboard, mouse, stylus, touch, spoken commands, gesture)." Col9 lines 60-67 and col10 lines 1-21 "According to examples, the preprocessor 202 further generates a request phrase for the second prompt and combines the generated request phrase with the second context object. In some examples, the generated request phrase includes a phrase or action to generate a reply. In further examples, the generated request phrase includes a reference to or a description about the sender of the message 222 and/or recipient(s) of the message 222. In additional examples, the generated request phrase includes a length limitation for the suggested draft reply 233 (e.g., no more than 5 sentences, at least 3 paragraphs). In still further examples, the generated request phrase includes additional instructions, where the additional instructions include context inferred by the extracted message data and/or additional context. For instance, inferred content can include how verbose, polite, respectful, the user typically is when replying to communications. An example generated request includes: “I am emailing a close friend. Write a verbose email in more than 10 sentences covering the following outline. Be cheeky and charming.” In an example implementation, the preprocessor 202 further combines the second context object with the second prompt to generate a second text prompt as input for the generative AI model 108. In some examples, the second prompt is prepended to the second context object. For instance, the resultant second text prompt may be in the form of ““I am emailing a close friend. Write a verbose email in more than 10 sentences covering the following outline. 
Be cheeky and charming.”+second_context_Object.”" potential action is being interpreted as potential reply) suggesting, by the processor, to the user, the generated description of a potential action identified by the large language model by displaying the description in a user interface; (Col 10 lines 36-41 " For instance, in the second turn of the multi-turn process with the generative AI model 108, the message generator 110 generates a suggested draft reply 233 to the selected message 222 based on a user-selection of a shortened summary 224 generated in the first turn of the process." Col 26 Lines 57-60 "from the generative AI model, an output including the draft reply; and surfacing the draft reply in a user interface." col 2 line 39- 52 "Examples described in this disclosure relate to systems and methods for generating a suggested message through the use of a generative artificial intelligence (AI) model, such as a large language model (LLM). In an example implementation, an electronic-communications productivity application is used to help a user to generate an electronic communication, such as an email, text message, chat message, or the like. Such electronic communications are hereinafter referred to generally as messages and the electronic communications productivity application is hereinafter referred to generally as a messaging application. According to an example, a message generator is provided that generates complex messages from LLMs, such as a suggested draft reply to a selected message." 
potential action is being interpreted as potential reply and the description is the content of the reply) receiving, by the processor, user input related to the potential action via a chat interface, wherein the user input comprises a modification of the potential action; (col 2 line 39- 52 "Examples described in this disclosure relate to systems and methods for generating a suggested message through the use of a generative artificial intelligence (AI) model, such as a large language model (LLM). In an example implementation, an electronic-communications productivity application is used to help a user to generate an electronic communication, such as an email, text message, chat message, or the like. Such electronic communications are hereinafter referred to generally as messages and the electronic communications productivity application is hereinafter referred to generally as a messaging application. According to an example, a message generator is provided that generates complex messages from LLMs, such as a suggested draft reply to a selected message." col 11 -12 lines 59-3 " In some examples, the user may provide a prompt input via a selected option. In some examples, a prompt UI field is provided in the application UI 106 via which the user can provide the prompt input. For instance, the user may type, speak, or otherwise input a phrase or individual keywords in association with a statement, question, instructions, or other request for editing the suggested draft reply 233. As an example, the user may type or utter a phrase such as, “Make this sound like a child wrote it”, “Add a story”, or “Make this funnier”, which is received as the prompt input and included in the subsequent prompt and query." 
The user interface is interpreted as a chat interface since it has a section for messaging with the machine.) generating, using the large language model, a new potential action based on the modification; and (Col 13 lines 21-43 "According to examples, one or more customization options 316 are provided in the application UI 106 that allow the user to select between various options to reframe the prompt provided to the generative AI model 108, so that a next-generated suggested draft reply 233 will better match the user's intent, sentiment, etc. In some examples, the customization options 316 include various tone of voice options. Some non-limiting example tone of voice customization options 316 are depicted in FIG. 3E. For instance, example voice customization options 316 include a “serious” tone, an “excited” tone, a “cheeky” tone, a “congratulatory” tone, a “celebratory” tone, and other options. In further examples, the customization options 316 include various length options. Some non-limiting example length customization options 316 are depicted in FIG. 3F. For instance, example length customization options 316 include “short”, “medium”, and “long”. In still further examples, the customization options 316 include a user input option. For instance, selection of the user input option allows the user to provide a customized sentiment input. For instance, the user may type, speak, or otherwise input a phrase or individual keywords in association with a desired sentiment or intent for the reply." Col 16-17 lines 61-6 "At decision operation 430, a determination is made as to whether to perform a subsequent query with the generative AI model 108. For instance, when one or more customization options 316 are selected, another shortened summary 224 is selected, or a custom summary is received, the message generator 110 generates a subsequent prompt for the generative AI model 108 including the selected editing option(s) 324.
The method 400 returns to operation 422, where the subsequent prompt is included in a subsequent query provided to the generative AI model 108. For instance, results from the subsequent query are included in a next suggested draft reply 233 that is presented to the user in the application UI 106." ) performing, by the processor, on behalf of the user, a subsequent action conforming to the user input wherein performing the subsequent action comprises performing the new potential action via a large language model (Col 17 lines 7-17 "In some examples, when a selection is made by the user to continue with the displayed suggested draft reply 233, the suggested draft reply 233 is included in a reply message 244 at operation 434. For instance, the content included in the suggested draft reply 233 is inserted into the body 302 of the reply message 244. The user view the reply message 244 or edit the reply message 244 until it correctly matches the user's intent and sentiment. At operation 436, an indication of a selection to send the reply message 244 is received. The reply message 244 is sent to the recipient(s) at operation 438" col 2 line 39- 52 "Examples described in this disclosure relate to systems and methods for generating a suggested message through the use of a generative artificial intelligence (AI) model, such as a large language model (LLM). In an example implementation, an electronic-communications productivity application is used to help a user to generate an electronic communication, such as an email, text message, chat message, or the like. Such electronic communications are hereinafter referred to generally as messages and the electronic communications productivity application is hereinafter referred to generally as a messaging application. According to an example, a message generator is provided that generates complex messages from LLMs, such as a suggested draft reply to a selected message." 
potential action is being interpreted as the potential reply and the description as the content of the reply) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raghu to incorporate the teachings of Hattangady to provide “analyzing, by a large language model executed by the processor, the plurality of electronic messages and the extracted at least one of the action, subject, and keyword to identify potential actions; suggesting, by the processor, to the user, the generated description of a potential action identified by the large language model by displaying the description in a user interface; receiving, by the processor, user input related to the potential action via a chat interface, wherein the user input comprises a modification of the potential action; generating, using the large language model, a new potential action based on the modification; and performing, by the processor, on behalf of the user, a subsequent action conforming to the user input wherein performing the subsequent action comprises performing the new potential action via a large language model”. Doing so would give the system a strong understanding of the structure and meaning of the text, making it more effective on specific tasks, as recognized by Hattangady (col 4 lines 40-56). However, Raghu in view of Hattangady does not explicitly teach a large language model agent that interoperates with the large language model. Joshi teaches a large language model agent that interoperates with the large language model. (See FIGS. 4 and 5. Paragraph 4 "SCS technologies can be implemented as a customer-service tool in online ecommerce settings where it is desirable to allow online customers visiting a merchant's website to immediately initiate an online conversation/chat with a merchant.
In a conventional customer-service SCS session, the customer is one user, and a representative of the merchant can be another user. In some implementations of a customer-service SCS, the merchant representative can be a person, which is also known as a “live-agent.” In some implementations of a customer-service SCS, the merchant representative can be a computer-implemented agent, which is also known as a “conversational agent” (CA) or a “chatbot.” In general, a CA can be defined as a computer system configured to communicate with a human using a coherent structure. CA systems can employ a variety of communication mechanisms, including, for example, text, speech, graphics, haptics, gestures, and the like for communication on both the input and output channels. CA systems also employ various forms of natural language processing (NLP), which is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and humans using language." paragraph 58 "If the answer to the inquiry at decision block 306 is yes, the methodology 300 moves to block 312 where the SCS 100 (and particularly the SGC-CA 130) generates and executes a set of scripts for simulating live-agent actions that can be taken in response to inquiry messages (e.g., user message 412 shown in FIG. 4) in the SCS session. An example of how the script-generating functionality described at block 312 can be implemented using a language model 502 is shown in FIG. 5A. The language model 502 is trained to perform multiple tasks, including advancing the SCS session 410 to the level shown in FIG. 5A by ingesting user message 416 and generating agent message 414, as well as generating scripts 532 that are based on (or conditional to) the information provided in user messages 412, 416. 
In embodiments of the invention, the language model 502 ingests the user message 412 and evaluates whether or not the user message 412 provides the user inquiry and the information, if any, needed in order to search for and uncover a response to the user inquiry. For example, if the user message 412 reads “I want to know the download speed for my zip code 10954,” this example of the user message 412 includes both the user's inquiry, along with the information needed by the language model 502 to begin a process of responding to the user inquiry. In the example depicted in FIG. 5A, the user message 412 reads “I want to know the download speed in my area” without providing the user's zip code, the language model 502 generates the agent message 414, which solicits the information (user message 416) needed by the language model 502." Paragraph 62 "Returning to the methodology 300, blocks 316, 318 use the messages in the SCS session so far, the scripts, and/or the inquiry response information, in any combination, to generate an inquiry response and incorporate the inquiry response into the SCS session. FIG. 5C depicts two diagrams showing examples of how blocks 316, 318 can be implemented using a language model 560. In the leftmost diagram, the inputs/outputs to the language model 560 are shown in their coded form. In the rightmost diagram, the inputs/outputs to the language model 560 are shown in their non-coded, NL form. As shown in the leftmost diagram, the language model 560 is trained to receive as input sequence(s) (e.g., input sequence(s) 202A shown in FIG. 2), in any combination, the context word sequence 504, the instruction word sequence 506, and the image representation sequence 508. In response to the input sequence(s), the language model 560 generates as output sequence(s) (e.g., output sequence(s) 206A shown in FIG. 
2) a response word sequence 510 that represents an agent message 418 (shown in the rightmost diagram) generated by the language model 560 of the SGC-CA 130 (shown in FIG. 1). As shown in the rightmost diagram, the language model is trained to receive as input sequence(s) (e.g., input sequence(s) 202A shown in FIG. 2), in any combination, some or all of the messages in the SCS session 410, the live-agent activity descriptions 540, and the inquiry results information 556A. In response to the input sequence(s), the language model 560 generates as output sequence(s) (e.g., output sequence(s) 206A shown in FIG. 2) the agent message 418 generated by the language model 560 of the SGC-CA 130 (shown in FIG. 1).") It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raghu in view of Hattangady to incorporate the teachings of Joshi to provide a “large language model agent that interoperates with the large language model.” Doing so would make conversational agents more responsive and faster than live agents, as recognized by Joshi (Paragraph 5).

Claim 9

Regarding claim 9, Raghu teaches: A non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a computer processor, the computer program instructions defining steps of: (Paragraph 31 “In some examples, as shown regarding device 102d, computer-readable media 108 can store instructions executable by the processor(s) 104 including an operating system (OS) 112, a machine learning module 114, an extraction module 116, a task operations module 118, a graphics generator 120, and programs or applications 122 that are loadable and executable by processor(s) 104. The one or more processors 104 may include one or more central processing units (CPUs), graphics processing units (GPUs), video buffer processors, and so on.
In some implementations, machine learning module 114 comprises executable code stored in computer-readable media 108 and is executable by processor(s) 104 to collect information, locally or remotely by computing device 102, via input/output 106. The information may be associated with one or more of applications 122. Machine learning module 114 may selectively apply any of a number of machine learning decision models stored in computer-readable media 108 (or, more particularly, stored in machine learning module 114) to apply to input data.") Similar analysis for the remaining claim limitations is shown in claim 1. Claim 17 Regarding Claim 17, Raghu teaches a storage medium for tangibly storing thereon logic for execution by the processor, the logic comprising instructions for: (Paragraph 37 "In contrast, communication media embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media. In various examples, memory 108 is an example of computer storage media storing computer-executable instructions. When executed by processor(s) 104, the computer-executable instructions configure the processor(s) to, among other things, receive a task; extract at least one of an action, a subject, and a keyword from the task; search a history of execution of tasks (e.g., task types) that are similar to the task in a database; and categorize the task based, at least in part, on the history of execution of the similar tasks.") Similar analysis for the remaining claim limitations is shown in claim 1. Claims 2, 10, and 18 Regarding Claims 2, 10, and 18, Raghu in view of Hattangady, in further view of Sachindra: Raghu furthermore teaches the method of claim 1, wherein performing the subsequent action comprises performing the subsequent action. 
(Paragraph 79 "At block 1008, task operations module 402 may provide a list of the task-oriented actions to the user for inspection or review. For example, a task-oriented action may be to find or locate digital artefacts (e.g., documents) related to a particular task to support completion of, or user comprehension of, a task activity. At diamond 1010, the user may select among choices of different possible actions to be performed by task operations module 402, refine possible actions, delete actions, manually add actions, and so on. If there are any such changes, then process 1000 may return to block 1004 where task operations module 402 may re-generate task-oriented processes in view of the user's edits of the task-oriented process list. On the other hand, if the user approves the list, then process 1000 may proceed to block 1012 where task operations module 402 performs the task-oriented processes. At block 1014, the task operations module may generate and display a visual cue and productivity report, for example.") Raghu does not explicitly teach performing the subsequent action via a large language model. However, Hattangady teaches the method of claim 1, wherein performing the subsequent action comprises performing the subsequent action via a large language model. (col 2, lines 39-52 "Examples described in this disclosure relate to systems and methods for generating a suggested message through the use of a generative artificial intelligence (AI) model, such as a large language model (LLM). In an example implementation, an electronic-communications productivity application is used to help a user to generate an electronic communication, such as an email, text message, chat message, or the like. Such electronic communications are hereinafter referred to generally as messages and the electronic communications productivity application is hereinafter referred to generally as a messaging application. 
According to an example, a message generator is provided that generates complex messages from LLMs, such as a suggested draft reply to a selected message."). See claim 1 for rationale. Raghu and Hattangady do not explicitly teach a large language model agent that interoperates with the large language model. However, Sachindra teaches a large language model agent that interoperates with the large language model. (See FIGS. 4 and 5; Paragraph 4 "SCS technologies can be implemented as a customer-service tool in online ecommerce settings where it is desirable to allow online customers visiting a merchant's website to immediately initiate an online conversation/chat with a merchant. In a conventional customer-service SCS session, the customer is one user, and a representative of the merchant can be another user. In some implementations of a customer-service SCS, the merchant representative can be a person, which is also known as a "live-agent." In some implementations of a customer-service SCS, the merchant representative can be a computer-implemented agent, which is also known as a "conversational agent" (CA) or a "chatbot." In general, a CA can be defined as a computer system configured to communicate with a human using a coherent structure. CA systems can employ a variety of communication mechanisms, including, for example, text, speech, graphics, haptics, gestures, and the like for communication on both the input and output channels. CA systems also employ various forms of natural language processing (NLP), which is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and humans using language." 
paragraph 58 "If the answer to the inquiry at decision block 306 is yes, the methodology 300 moves to block 312 where the SCS 100 (and particularly the SGC-CA 130) generates and executes a set of scripts for simulating live-agent actions that can be taken in response to inquiry messages (e.g., user message 412 shown in FIG. 4) in the SCS session. An example of how the script-generating functionality described at block 312 can be implemented using a language model 502 is shown in FIG. 5A. The language model 502 is trained to perform multiple tasks, including advancing the SCS session 410 to the level shown in FIG. 5A by ingesting user message 416 and generating agent message 414, as well as generating scripts 532 that are based on (or conditional to) the information provided in user messages 412, 416. In embodiments of the invention, the language model 502 ingests the user message 412 and evaluates whether or not the user message 412 provides the user inquiry and the information, if any, needed in order to search for and uncover a response to the user inquiry. For example, if the user message 412 reads “I want to know the download speed for my zip code 10954,” this example of the user message 412 includes both the user's inquiry, along with the information needed by the language model 502 to begin a process of responding to the user inquiry. In the example depicted in FIG. 5A, the user message 412 reads “I want to know the download speed in my area” without providing the user's zip code, the language model 502 generates the agent message 414, which solicits the information (user message 416) needed by the language model 502." Paragraph 62 "Returning to the methodology 300, blocks 316, 318 use the messages in the SCS session so far, the scripts, and/or the inquiry response information, in any combination, to generate an inquiry response and incorporate the inquiry response into the SCS session. FIG. 
5C depicts two diagrams showing examples of how blocks 316, 318 can be implemented using a language model 560. In the leftmost diagram, the inputs/outputs to the language model 560 are shown in their coded form. In the rightmost diagram, the inputs/outputs to the language model 560 are shown in their non-coded, NL form. As shown in the leftmost diagram, the language model 560 is trained to receive as input sequence(s) (e.g., input sequence(s) 202A shown in FIG. 2), in any combination, the context word sequence 504, the instruction word sequence 506, and the image representation sequence 508. In response to the input sequence(s), the language model 560 generates as output sequence(s) (e.g., output sequence(s) 206A shown in FIG. 2) a response word sequence 510 that represents an agent message 418 (shown in the rightmost diagram) generated by the language model 560 of the SGC-CA 130 (shown in FIG. 1). As shown in the rightmost diagram, the language model is trained to receive as input sequence(s) (e.g., input sequence(s) 202A shown in FIG. 2), in any combination, some or all of the messages in the SCS session 410, the live-agent activity descriptions 540, and the inquiry results information 556A. In response to the input sequence(s), the language model 560 generates as output sequence(s) (e.g., output sequence(s) 206A shown in FIG. 2) the agent message 418 generated by the language model 560 of the SGC-CA 130 (shown in FIG. 1).") It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raghu in view of Hattangady to incorporate the teachings of Sachindra to provide a "large language model agent that interoperates with the large language model." Doing so would make conversational agents more responsive and faster than live agents, as recognized by Sachindra (Paragraph 5). 
Claims 3, 11, and 19 Regarding Claims 3, 11, and 19, Raghu in view of Hattangady, in further view of Sachindra: Raghu furthermore teaches the method of claim 1, wherein identifying the plurality of electronic messages comprises identifying a thread of messages about a same subject and analyzing messages within the thread in context with other messages within the thread. (Paragraph 17 "Various examples describe techniques and architectures for a system that performs, among other things, collection or extraction of tasks from databases, user accounts, and electronic communications, such as messages between or among one or more users (e.g., a single user may send a message to oneself or to one or more other users). For example, a system may extract a set of tasks from a calendar application associated with one or more users. In another example, an email exchange between two people may include text from a first person sending a request to a second person to perform a task. The email exchange may convey enough information for the system to automatically determine the presence of the request to perform the task. In some implementations, the email exchange does not convey enough information to determine the presence of a task. Whether or not this is the case, the system may query other sources of information that may be related to one or more portions of the email exchange. For example, the system may examine other messages exchanged by one or both of the authors of the email exchange or by other people. The system may also examine larger corpora of email and other messages. Beyond other messages, the system may query a calendar or database of one or both of the authors of the email exchange for additional information. 
In some implementations, the system may, among other things, query traffic or weather conditions at respective locations of one or both of the authors.") Claims 4, 12, and 20 Regarding Claims 4, 12, and 20, Raghu in view of Hattangady, in further view of Sachindra: Raghu furthermore teaches the method of claim 1, wherein receiving user input related to the potential action comprises receiving a modification of the potential action provided by the user, wherein performing the subsequent action comprises performing the new potential action. (Paragraph 79 "At block 1008, task operations module 402 may provide a list of the task-oriented actions to the user for inspection or review. For example, a task-oriented action may be to find or locate digital artefacts (e.g., documents) related to a particular task to support completion of, or user comprehension of, a task activity. At diamond 1010, the user may select among choices of different possible actions to be performed by task operations module 402, refine possible actions, delete actions, manually add actions, and so on. If there are any such changes, then process 1000 may return to block 1004 where task operations module 402 may re-generate task-oriented processes in view of the user's edits of the task-oriented process list. On the other hand, if the user approves the list, then process 1000 may proceed to block 1012 where task operations module 402 performs the task-oriented processes. At block 1014, the task operations module may generate and display a visual cue and productivity report, for example.") Raghu does not explicitly teach receiving the modification via a chat interface and generating a new potential action using the large language model. However, Hattangady teaches receiving the modification via a chat interface and generating a new potential action using the large language model (Col lines 11-27 "The example system 100 generates a suggested message using a generative AI model 108, which may be an LLM. 
According to an aspect, the system 100 includes a computing device 102 that may take a variety of forms, including, for example, desktop computers, laptops, tablets, smart phones, wearable devices, gaming devices/platforms, virtualized reality devices/platforms (e.g., virtual reality (VR), augmented reality (AR), mixed reality (MR)), etc. The computing device 102 has an operating system that provides a graphical user interface (GUI) that allows users to interact with the computing device 102 via graphical elements, such as application windows (e.g., display areas), buttons, icons, and the like. For example, the graphical elements are displayed on a display screen 104 of the computing device 102 and can be selected and manipulated via user inputs received via a variety of input device types (e.g., keyboard, mouse, stylus, touch, spoken commands, gesture).") See claim 1 for rationale. Claims 5 and 13 Regarding Claims 5 and 13, Raghu in view of Hattangady, in further view of Sachindra: Raghu furthermore teaches the method of claim 1, wherein performing the subsequent action comprises creating a digital calendar event on behalf of the user. 
(Paragraph 22 "Once identified and extracted by a computing system, a task (e.g., the proposal or affirmation of a commitment or request) of a communication may be further processed or analyzed to identify or infer semantics of the commitment or request including: identifying the primary owners of the request or commitment (e.g., if not the parties in the communication); the nature (e.g., type) of the task and its properties (e.g., its description or summarization); specified or inferred pertinent dates (e.g., deadlines for completing the commitment or request); relevant responses such as initial replies or follow-up messages and their expected timing (e.g., per expectations of courtesy or around efficient communications for task completion among people or per an organization); and information resources to be used to satisfy the request. Such information resources, for example, may provide information about time, people, locations, and so on. The identified task and inferences about the task may be used to drive automatic (e.g., computer generated) services such as reminders, revisions (e.g., and displays) of to-do lists, prioritization of tasks, appointments, meeting requests, and other time management activities. In some examples, such automatic services may be applied during the composition of a message (e.g., typing an email or text), reading the message, or at other times, such as during offline processing of email on a server or client device. 
The initial extraction and inferences about a task may also invoke automated services that work with one or more participants to confirm or refine current understandings or inferences about the task and the status of the task based, at least in part, on the identification of missing information or of uncertainties about one or more properties detected or inferred from the communication.") Claims 8 and 16 Regarding Claims 8 and 16, Raghu in view of Hattangady, in further view of Sachindra: Raghu furthermore teaches the method of claim 1, wherein suggesting the potential action to the user comprises generating a description of the potential action and displaying the description in a user interface. (Paragraph 79 "At block 1008, task operations module 402 may provide a list of the task-oriented actions to the user for inspection or review. For example, a task-oriented action may be to find or locate digital artefacts (e.g., documents) related to a particular task to support completion of, or user comprehension of, a task activity. At diamond 1010, the user may select among choices of different possible actions to be performed by task operations module 402, refine possible actions, delete actions, manually add actions, and so on. If there are any such changes, then process 1000 may return to block 1004 where task operations module 402 may re-generate task-oriented processes in view of the user's edits of the task-oriented process list. On the other hand, if the user approves the list, then process 1000 may proceed to block 1012 where task operations module 402 performs the task-oriented processes. At block 1014, the task operations module may generate and display a visual cue and productivity report, for example.") Raghu does not explicitly teach generating the description by the large language model. However, Hattangady teaches generating the description by the large language model (Col lines 11-27 "The example system 100 generates a suggested message using a generative AI model 108, which may be an LLM. 
According to an aspect, the system 100 includes a computing device 102 that may take a variety of forms, including, for example, desktop computers, laptops, tablets, smart phones, wearable devices, gaming devices/platforms, virtualized reality devices/platforms (e.g., virtual reality (VR), augmented reality (AR), mixed reality (MR)), etc. The computing device 102 has an operating system that provides a graphical user interface (GUI) that allows users to interact with the computing device 102 via graphical elements, such as application windows (e.g., display areas), buttons, icons, and the like. For example, the graphical elements are displayed on a display screen 104 of the computing device 102 and can be selected and manipulated via user inputs received via a variety of input device types (e.g., keyboard, mouse, stylus, touch, spoken commands, gesture).") See claim 1 for rationale. Claims 6, 7, 14, and 15 are rejected under 35 U.S.C. 103 as obvious over US Patent Application Publication 20170193349 A1 (Jothilingam; Raghu) in view of US Patent 11962546 B1 (Hattangady; Poonam Ganesh), in view of US Patent Application Publication 20220385703 A1 (Joshi; Sachindra), and in further view of US Patent Application Publication 20220215351 A1 (Chow; Adam). Claims 6 and 14 Regarding Claims 6 and 14, Raghu in view of Hattangady, in further view of Sachindra do not explicitly teach the method of claim 1, wherein performing the subsequent action comprises sending an electronic message on behalf of the user. However, Adam teaches the method of claim 1, wherein performing the subsequent action comprises sending an electronic message on behalf of the user. (Paragraph 27 "The email client 104 can send and receive emails 115 by utilizing an email server 110. The email server 110 can be a remote server that stores information about a user's email account, such as the emails 115 currently in the user's inbox, and provides that information to the user device 100. 
The email server 110 can also send outgoing emails on behalf of the user device 100, to be received by a separate server or device on the other end of the communication chain. In some examples, a secure email gateway ("SEG") 120 can route email 115 to the email server 110. The SEG 120 can be either incorporated into the email server 110 or provided as a standalone gateway server. The SEG 120 can communicate with the management server 130 and implement rules for allowing or disallowing access to the email server 110. In one example, the SEG 120 can route email 115 to the mail server 110, but also send a copy of the email 115 to the management server 130 for task scheduling purposes." Paragraph 52 "At stage 315, the SEG 120 or mail server 110 can filter for task-related emails. This can include parsing the email by applying an ML model 135 to determine intents and slots, or simply looking for keywords. A task can be identified based on recognition of a relationship between sender and recipient, reference to a backend system 140, reference to a project, or reference to a document, among other ways. If a task is identified, the task-related email can be processed for task scheduling. In one example, this can include sending a copy of that email to the management server 130 for task processing.") It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raghu in view of Hattangady, in further view of Sachindra to incorporate the teachings of Adam to provide "the method of claim 1, wherein performing the subsequent action comprises sending an electronic message on behalf of the user." Doing so would enable the user to automate messages and other actions, as recognized by Adam (Paragraph 27). 
Claims 7 and 15 Regarding Claims 7 and 15, Raghu in view of Hattangady, in further view of Sachindra do not explicitly teach the method of claim 1, wherein performing the subsequent action comprises sending input to an application programming interface. However, Adam teaches the method of claim 1, wherein performing the subsequent action comprises sending input to an application programming interface. (Paragraph 8 "The task recognized by the service can then be used as an input to the retrieved ML model, along with the available time slots. The ML model can then output a time within the available timeslots that the user is most likely to perform the task. The service can then schedule the task within the time slot based on the result from the machine learning model. To schedule the task, the service can make an application programming interface ("API") call to the calendar application or backend database, in an example. This can cause the task to show up on the user device within a calendar or to-do list that displays on the user device. A notification can also display on the user device as the time draws near for performing the task.") It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raghu in view of Hattangady, in further view of Sachindra to incorporate the teachings of Adam to provide "the method of claim 1, wherein performing the subsequent action comprises sending input to an application programming interface." Doing so would remind the user to complete a task, as recognized by Adam (Paragraph 8). Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALI M HASSAN whose telephone number is (571)272-5331. The examiner can normally be reached Monday - Friday 8:00am - 4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras Shah can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALI M HASSAN/ Examiner, Art Unit 2653 /Paras D Shah/ Supervisory Patent Examiner, Art Unit 2653 01/13/2026

Prosecution Timeline

Sep 28, 2023
Application Filed
Jul 14, 2025
Non-Final Rejection — §103
Oct 15, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103
Apr 10, 2026
Request for Continued Examination
Apr 13, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598014
CONTENT DRIVEN INTEGRATED BROADCAST SYSTEM WITH ONE OR MORE SELECTABLE AUTOMATED BROADCAST PERSONALITY AND METHOD FOR ITS USE
2y 5m to grant Granted Apr 07, 2026
Patent 12572852
LEXICAL DROPOUT FOR NATURAL LANGUAGE PROCESSING
2y 5m to grant Granted Mar 10, 2026
Patent 12541540
INFORMATION PROCESSING DEVICE, TERMINAL DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+33.3%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
