Prosecution Insights
Last updated: April 19, 2026
Application No. 18/544,849

SYSTEMS AND METHODS TO PRESENT VIEWS OF RECORDS IN CHAT SESSIONS BETWEEN USERS OF A COLLABORATION ENVIRONMENT

Final Rejection (§103 and nonstatutory double patenting)
Filed: Dec 19, 2023
Examiner: HOANG, AMY P
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Asana, Inc.
OA Round: 2 (Final)
Grant Probability: 70% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average; 163 granted / 232 resolved; +15.3% vs TC avg)
Interview Lift: +64.2% (strong; allowance with vs. without an interview, among resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 31 currently pending
Career History: 263 total applications across all art units
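The headline figures above are internally consistent and can be reproduced directly. A quick sanity check in Python, using only the numbers quoted above and assuming the "+15.3% vs TC avg" delta is stated in percentage points:

```python
# Reproduce the career allow rate from the raw counts quoted above.
granted, resolved = 163, 232

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 70.3%, displayed as 70%

# Treating "+15.3% vs TC avg" as a percentage-point delta implies
# a Tech Center baseline of roughly 55%.
tc_baseline = allow_rate - 0.153
print(f"Implied TC average: {tc_baseline:.1%}")
```

The 70% shown on the dashboard is simply the rounded 163/232 ratio.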

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§103: 46.0% (+6.0% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
TC averages are estimates; based on career data from 232 resolved cases.
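As a cross-check, the per-statute deltas can be inverted to recover the Tech Center baseline they were measured against. A small Python sketch, assuming the deltas are percentage points:

```python
# Examiner's statute-specific rates (%) and their stated deltas vs the
# Tech Center average, as quoted in the table above.
examiner_rate = {"101": 15.9, "103": 46.0, "102": 17.0, "112": 13.4}
delta_vs_tc = {"101": -24.1, "103": 6.0, "102": -23.0, "112": -26.6}

# Back out the implied TC baseline for each statute.
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every row implies the same 40.0% baseline, which suggests the dashboard compares each statute against a single Tech Center-wide estimate rather than per-statute averages.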

Office Action

Grounds: §103; nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement submitted on 10/17/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The Amendment filed on 11/21/2025 has been entered. No claims are amended, added, or canceled. Claims 1-20 remain pending in the application.

Response to Arguments

Applicant's arguments filed 11/21/2025 have been fully considered. Each of applicant's remarks is set forth, followed by the examiner's response.

(1) In the Remarks, with respect to claim 1, Applicant argues that the Office Action interprets the "name of the remote user" in paragraph 40 of Jakobson as "content of the chat session" that allegedly provides a basis to "automatically identify a work unit record previously created and previously assigned within a collaboration environment." [Office Action, pp. 11-12]. Jakobson describes "the name of the remote user" as a basis for "displaying the tasks 508 the remote user 502 had delegated to the local user...." [Jakobson, 40]. The name of the remote user is not content extracted from a message, nor is it content entered by the local user and/or content entered by the remote user.

As to point (1), Examiner respectfully disagrees. Examiner notes that the claims place no limitations on what "content" should be displayed or extracted from a chat session, or how, where the chat session includes a series of real-time communications between a first user and a second user of the collaboration environment. Under MPEP 2111, the examiner is obliged to give the terms or phrases their broadest reasonable interpretation as would be understood by one of ordinary skill in the art unless applicant has provided some indication of the definition of the claimed terms or phrases.
Content of a chat session may comprise text, images, icons, links, etc. Jakobson illustrates in Fig. 5A the name of the remote user as part of the content of the chat session 500. Using the identification of the remote user, Jakobson identifies tasks the remote user had delegated to the local user ([0040]). Thus, Jakobson is considered to teach "automatically identify a work unit record previously created and previously assigned within a collaboration environment based on content of a chat session including a series of real-time communications between a first user and a second user of the collaboration environment" as recited in claim 1.

(2) Applicant alleges Prakash describes a step of "determine whether any messages include content indicative of an upcoming task for the user to complete and/or an upcoming event for the user to attend." [Prakash, 16]. Prakash does not distinguish between message content that was entered by the user-of-interest and message content entered by other participants in the user's messages, let alone that content entered by both the user and another participant provides the basis for identifying "content indicative of an upcoming task for the user to complete and/or an upcoming event for the user to attend."

As to point (2), Examiner respectfully disagrees. The claim recites "the content of the chat session … includes first content entered by the first user during the chat session and second content entered by the second user during the chat session in accordance with the series of real-time communications between the first user and the second user". Examiner notes that one of ordinary skill in the art would know that device messaging activities (e.g., text messages, chats, etc.) include content entered by multiple participants.
Prakash teaches the mobile computing device may monitor communication messages corresponding to the user's e-mail messaging activities, device messaging activities (e.g., text messages, chats, etc.), social networking activities (e.g., comments, chats, posts, messages, etc.), device voice command activities, and/or any other type of communication activity by the user of the mobile computing device. In doing so, the mobile computing device may analyze the communication messages to determine whether any messages include content indicative of an upcoming task for the user to complete and/or an upcoming event for the user to attend. Thus, Prakash is considered to teach "wherein the content of the chat session which provides a basis for automatically identifying the work unit record includes first content entered by the first user during the chat session and second content entered by the second user during the chat session in accordance with the series of real-time communications between the first user and the second user".

(3) Applicant further alleges the Office Action fails to establish 1) a person of ordinary skill in the art would modify Jakobson as proposed, and/or 2) the combination of Jakobson and Prakash would arrive at the claimed invention. Instead, even if combined, the combination of Jakobson and Prakash would not arrive at the claimed invention of at least claim 1. A person of ordinary skill in the art would not modify Jakobson to use the content of a message indicative of an upcoming task, as allegedly taught by Prakash, as the basis for identifying Jakobson's "delegated tasks" and/or any other tasks (see Remarks, pages 11-13).
As to point (3), in response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).

In this case, Jakobson teaches identifying previously created and previously assigned tasks from the identifier "Sydney_123" of the user displayed in the chat session ([0040] FIGS. 5A & 5B). Jakobson does not explicitly disclose "the content of the chat session … includes first content entered by the first user during the chat session and second content entered by the second user during the chat session in accordance with the series of real-time communications between the first user and the second user". Prakash, in the same field of endeavor, teaches generating tasks from communication messages, including a mobile computing device that monitors communication messages corresponding to the user's e-mail messaging activities, device messaging activities (e.g., text messages, chats, etc.), and social networking activities (e.g., comments, chats, posts, messages, etc.), and analyzes the communication messages to determine whether any messages include content indicative of an upcoming task for the user to complete and/or an upcoming event for the user to attend (Fig. 1; [0016]).
It would have been obvious to one of ordinary skill in the art to modify the system of Jakobson to add identifying tasks from communication messages in the chat, as suggested in Prakash, to automatically present upcoming tasks without intensive interaction by the user (Prakash, [0002]). Claims 1-20 remain rejected (see below).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159.
See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 11902344B2. In the comparison below, the entries labeled 18/544,849 reproduce parts of claims 1-10 of the current application, and the entries labeled US 11902344B2 reproduce the conflicting claims 1-9 of the patent.
18/544,849, claim 1:
A system configured to present views of work unit records in chat sessions between users of a collaboration environment, the system comprising: one or more physical processors configured to execute machine-readable instructions to: automatically identify a work unit record previously created and previously assigned within a collaboration environment based on content of a chat session including a series of real-time communications between a first user and a second user of the collaboration environment, the collaboration environment managing work unit records describing units of work previously created and previously assigned within the collaboration environment to users who are expected to accomplish one or more actions to complete the units of work, wherein the chat session facilitates synchronous communication between the first user and the second user through graphical chat interfaces, and wherein the content of the chat session which provides a basis for automatically identifying the work unit record includes first content entered by the first user during the chat session and second content entered by the second user during the chat session in accordance with the series of real-time communications between the first user and the second user; and in response to identifying the work unit record, generate and present views of a work unit page concurrently with the graphical chat interfaces of the chat session so that the views of the work unit page appear with the graphical chat interfaces, wherein the work unit page corresponds to the work unit record, and wherein the work unit page displays editable values of one or more parameters of the work unit record.

18/544,849, claim 10:
The system of claim 1, wherein the work unit page corresponds to the work unit record by presenting information stored in the work unit record, and the views of the work unit page provide access to the work unit record so that the work unit record is editable by the first user and/or the second user via the views of the work unit page presented concurrently with the graphical chat interfaces of the chat session.

US 11902344B2, claim 1:
A system configured to present views of work unit records in chat sessions between users of a collaboration environment, the system comprising: one or more physical processors configured to execute machine-readable instructions to: manage environment state information maintaining a collaboration environment, the collaboration environment being configured to facilitate interaction by users with the collaboration environment, the environment state information including work unit records, the work unit records describing units of work previously created within the collaboration environment, and previously assigned within the collaboration environment to the users who are expected to accomplish one or more actions to complete the units of work; automatically identify a first work unit record previously created and previously assigned within the collaboration environment based on content of a chat session including a series of real-time communications between a first user and a second user, wherein the chat session facilitates synchronous communication between the first user and the second user through graphical chat interfaces, and wherein the content of the chat session which provides a basis for automatically identifying the first work unit record includes first content entered by the first user during the chat session and second content entered by the second user during the chat session in accordance with the series of real-time communications between the first user and the second user; in response to identifying the first work unit record, generate and present views of a first work unit page concurrently with the graphical chat interfaces of the chat session so that the views of the first work unit page appear alongside the graphical chat interfaces, the first work unit page corresponding to the first work unit record; and wherein the first work unit page of the first work unit record displays editable values of one or more parameters of the first work unit record; the first work unit page presenting information stored in the first work unit record, the views of the first work unit page providing access to the first work unit record so that the first work unit record is editable by the first user and/or the second user via the views of the first work unit page presented concurrently with the graphical chat interfaces of the chat session.

18/544,849, claim 2:
The system of claim 1, wherein the graphical chat interfaces are presented within the collaboration environment, and the views of the work unit page appear alongside the graphical chat interfaces.

US 11902344B2, claim 2:
The system of claim 1, wherein the graphical chat interfaces are presented within the collaboration environment.

18/544,849, claim 3:
The system of claim 1, wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify potential content in the content of the chat session that leads to identification of the work unit record.

US 11902344B2, claim 3:
The system of claim 1, wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify potential content in the content of the chat session that leads to identification of the first work unit record.

18/544,849, claim 4:
The system of claim 3, wherein the potential content includes trigger phrases and/or words.

US 11902344B2, claim 4:
The system of claim 3, wherein the potential content includes trigger phrases and/or words.

18/544,849, claim 5:
The system of claim 4, wherein the trigger phrases and/or words include one or more of a name of a user, a title of a unit of work, a description of a unit of work, or a date associated with a unit of work.

US 11902344B2, claim 5:
The system of claim 4, wherein the trigger phrases and/or words include one or more of a name of a user, a title of a unit of work, a description of a unit of work, or a date associated with a unit of work.

18/544,849, claim 6:
The system of claim 1, wherein the graphical chat interfaces are presented outside of the collaboration environment.

US 11902344B2, claim 6:
The system of claim 1, wherein the graphical chat interfaces are presented outside of the collaboration environment.

18/544,849, claim 7:
The system of claim 1, wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify an objective record based on the content of the chat session; and generate and present views of an objective page concurrently with the graphical chat interfaces of the chat session that corresponds to the objective record.

US 11902344B2, claim 7:
The system of claim 1, wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify an objective record based on the content of the chat session; and generate and present views of a first objective page concurrently with the graphical chat interfaces of the chat session that corresponds to the objective record.

18/544,849, claim 8:
The system of claim 1, wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify a project record based on the content of the chat session; and generate and present views of a project page concurrently with the graphical chat interfaces of the chat session that corresponds to the project record.

US 11902344B2, claim 8:
The system of claim 1, wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify a project record based on the content of the chat session; and generate and present views of a first project page concurrently with the graphical chat interfaces of the chat session that corresponds to the project record.

18/544,849, claim 9:
The system of claim 1, wherein the work unit record is assigned to a third user.

US 11902344B2, claim 9:
The system of claim 1, wherein the first work unit record is assigned to a third user.

Although the claims at issue are not identical, they are not patentably distinct from each other. Claims 1-10 are rejected under obviousness double patenting because claims 1-10 are anticipated by claims 1-9 of U.S. Patent No. 11902344B2. Since claims 11-20 of the instant application correspond to claims 1-10, they are rejected for the same reasons as claims 1-10 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 9-10, 11-12 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jakobson, US 20080091782 A1, in view of Prakash et al. (hereinafter Prakash), US 20140189017 A1.
Regarding independent claim 1, Jakobson teaches a system configured to present views of work unit records in chat sessions between users of a collaboration environment ([0018] The present invention provides a method and system for creating, delegating, exchanging and managing tasks over an instant messenger (or "IM") infrastructure. Modules for handling tasks may be added to applications capable of IM over a network; [0019] Tasks enabled-modules may operate in two modes: "live conversation mode" and "tab dialog mode". A live conversation mode implies that two or more users are engaged in a live IM session (such as a chat session) and tasks are exchanged among them in real time. All tasks displayed and handled in live conversation mode may be in context of the users engaged in the live session), the system comprising: one or more physical processors configured to execute machine-readable instructions to: automatically identify a work unit record previously created and previously assigned within a collaboration environment based on content of a chat session including a series of real-time communications between a first user and a second user of the collaboration environment ([0040] FIGS. 5A & 5B are generalized block diagrams illustrating the interaction of a "live conversation mode" and a "content tab mode" tasks modules within an IM framework, according to one embodiment of the present invention. Referring to FIG. 5A, the user of the tasks module 500 of an IM application may be in a chat session with a remote user "Sydney_123" 502 over communications network 501.
The user of "live conversation" tasks module 500 may see two windows: A "My Tasks" window 506 displaying the tasks 508 the remote user 502 had delegated to the local user (the terms local user and remote user are used for discussion purposes to identify two or more users who have communicated task information, and do not require users to be physically separated); and, a "Tasks I Delegated" window 510 listing the tasks the local user had delegated to one or more remote users 502. A label 504 denoting the name of the remote user associated with the displayed tasks may also be displayed in the "Tasks I Delegated" window 510. The "Tasks I Delegated" window 510 may display all delegated tasks, or may filter delegated tasks according to one or more criteria, such as the user the tasks have been delegated to, the due date and/or due time of the tasks, the delegation date and/or delegation time of the tasks, the status of the task, etc. Examiner note: the name of the remote user "Sydney_123" is content of the chat session on the basis of which previously created and previously assigned tasks are identified), the collaboration environment managing work unit records describing units of work previously created and previously assigned within the collaboration environment to users who are expected to accomplish one or more actions to complete the units of work ([0020] FIG. 1. is a generalized block diagram illustrating an instant messenger system 100 with enhancement modules to handle tasks, according to one embodiment. Client devices 102a and 102b (e.g. personal computers, laptops, personal digital assistants, cellular phones, etc.) may allow the users of these devices to communicate with one another over a network 106a and 106b (e.g. the internet, an intranet, wireless network, peer-to-peer network, etc). Client device 102a may employ an application 104a capable of instant messaging. Client device 102b may also employ an application 104b capable of instant messaging.
Instant messages (e.g. chat text) between client instant messenger applications 104a and 104b may be transmitted over a network 106a/106b, and relayed by an instant messenger server infrastructure 108; [0021] Client Instant messaging application 104a may support a tasks module 110a (the tasks module may be a plug-in, a plug-in being a computer program which registers itself with, and provides functionality to a host computer application) which may use application programming interface (API) 112a to communicate with host IM application 104a. Similarly, client instant messaging application 104b may support a tasks module 110b which may use application programming interface (API) 112b to communicate with application 104b; [0022] A user using instant messenger application 104a on client device 102a may use the interface of tasks module 110a to assign tasks to the user of device 102b. The user of device 102b may see the tasks in tasks module 110b of client instant messenger application 104b. In the currently preferred embodiment of the present invention, tasks module 110a may contain a list of tasks assigned through tasks module 110b, in a manner analogous to instant messenger application 104a displaying text messages typed in instant messenger application 104b. Similarly, tasks module 110b may contain a list of tasks assigned through tasks module 110a), wherein the chat session facilitates synchronous communication between the first user and the second user through graphical chat interfaces ([0032] FIG. 3. is a generalized block diagram illustrating a tasks module incorporated into an IM framework, according to one embodiment of a "conversation model" of the present invention. Unless stated otherwise, "Conversation Model" refers to fact that a user of IM application 300 is using the tasks module 304 as part of a live session with another user; [0033] A session may include data 310 & 316 exchanged between IM application 300 and a remote IM application. 
Conventional IM applications use data 310 containing chat text, for example "How are you?", which is received by an IM application 300 on a client's machine, from a communications network 312 (such as the Internet or any other type of communications network capable of supporting a chat session). Data 310 may be processed by a chat engine 314 and presented to the user of the IM application 300 in a chat display window 302 as chat text 306. Data pertaining to tasks 316 may be transmitted similarly over the same communications network 312 as part of the same chat session, and may be received by IM application 300); and in response to identifying the work unit record, generate and present views of a work unit page concurrently with the graphical chat interfaces of the chat session so that the views of the work unit page appear with the graphical chat interfaces, wherein the work unit page corresponds to the work unit record (Fig. 5A, 506, 510; [0040] Referring to FIG. 5A, the user of the tasks module 500 of an IM application may be in a chat session with a remote user "Sydney_123" 502 over communications network 501. The user of "live conversation" tasks module 500 may see two windows: A "My Tasks" window 506 displaying the tasks 508 the remote user 502 had delegated to the local user (the terms local user and remote user are used for discussion purposes to identify two or more users who have communicated task information, and do not require users to be physically separated); and, a "Tasks I Delegated" window 510 listing the tasks the local user had delegated to one or more remote users 502; [0048] At step 706 the user interface of the tasks module may be rendered. This may be done in a window adjacent to the main IM application window), and wherein the work unit page displays editable values of one or more parameters of the work unit record ([0042] In the "Content Tab Mode" of the embodiment illustrated in FIG.
5B, the user may access a task 522 displayed in tasks module 518. The user may be able to select a category of tasks (for example, based on the user who had assigned the tasks 520) and retrieve 524 the tasks from a task record 514. Drop-down box 520 may be populated by all unique user names in the task records (or may include all unique user names in the contacts list of the IM 500, or may include only those unique user names which have a current task, etc.), such that the local user may choose to see tasks assigned by any particular remote user. The local user may change the attributes of a task, such as marking it as complete (for example, by pressing the "done" button 526.) A change in one or more attributes of a task 522 may be recorded in that task's record 514. (e.g. the completed flag 5169 may be changed from a "false" to a "true" to indicate completion)). Jakobson does not explicitly disclose wherein the content of the chat session which provides a basis for automatically identifying the work unit record includes first content entered by the first user during the chat session and second content entered by the second user during the chat session in accordance with the series of real-time communications between the first user and the second user. However, in the same field of endeavor, Prakash teaches wherein the content of the chat session which provides a basis for automatically identifying the work unit record includes first content entered by the first user during the chat session and second content entered by the second user during the chat session in accordance with the series of real-time communications between the first user and the second user ([0016] Referring now to FIG. 1, in the illustrative embodiment, a system 100 for comprehensive task management includes a mobile computing device 110, a cloud server 130, one or more online service providers 150, and a network 180. 
In use, the mobile computing device 110 may monitor the communication activities of a user on the mobile computing device 110 and generate or update a to-do list, managed on the mobile computing device 110, based on such communication activities. For example, in some embodiments, the mobile computing device 110 may monitor communication messages corresponding to the user's e-mail messaging activities, device messaging activities (e.g., text messages, chats, etc.), social networking activities (e.g., comments, chats, posts, messages, etc.), device voice command activities, and/or any other type of communication activity by the user of the mobile computing device 110. In doing so, the mobile computing device 110 may analyze the communication messages to determine whether any messages include content indicative of an upcoming task for the user to complete and/or an upcoming event for the user to attend. For each communication message determined to include such content, the mobile computing device 110 may generate a corresponding task. In some embodiments, each of the tasks generated by the mobile computing device 110 may be aggregated to generate a global to-do list (e.g., a global task list) for the user). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of monitoring communication messages and analyzing the communication messages to determine whether any messages include content indicative of an upcoming task for the user to complete as suggested in Prakash into Jakobson’s system because both of these systems are addressing identifying existing tasks based on content extracted from communication messages. This modification would have been motivated by the desire to automatically present upcoming tasks without intensive interaction by the user (Prakash, [0002]). 
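To make the disputed limitation concrete, the following is a minimal hypothetical sketch in Python. All names, records, and the naive substring matching are illustrative assumptions, not the applicant's or either reference's implementation; it only illustrates the combined teaching at issue, namely that content typed by both chat participants is scanned against previously created task records:

```python
# Hypothetical sketch of the disputed limitation: identify a previously
# created, previously assigned task record from content entered by BOTH
# chat participants. Names and matching logic are illustrative only.
from dataclasses import dataclass


@dataclass
class TaskRecord:
    title: str
    assignee: str


# Task records that existed before the chat session began.
records = [
    TaskRecord("Q3 budget review", "sydney"),
    TaskRecord("Deploy fix", "alex"),
]


def identify_record(chat_messages):
    """chat_messages: list of (sender, text) from a real-time session."""
    for _, text in chat_messages:  # content from either user counts
        for record in records:
            if record.title.lower() in text.lower():  # naive trigger match
                return record
    return None


session = [
    ("user_a", "Did you finish the Q3 budget review?"),
    ("user_b", "Almost, I'll update it tonight."),
]
match = identify_record(session)
print(match.title if match else "no match")  # Q3 budget review
```

The sketch deliberately treats messages from both senders uniformly, which is the point the examiner draws from Prakash's monitoring of all communication activity.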
Regarding dependent claim 2, the combination of Jakobson and Prakash teaches all the limitations as set forth in the rejection of claim 1, which is incorporated herein. Jakobson further teaches wherein the graphical chat interfaces are presented within the collaboration environment ([0062] FIGS. 9A and 9B are generalized block diagrams illustrating the utilization of an IM application and infrastructure to facilitate an exchange of tasks among information management applications, according to one embodiment of the present invention. An IM application 902 and an information management application 904 (e.g. Microsoft Outlook®, Lotus Notes®, Palm® and Blackberry® management applications) may run on client machine 900. Information management application 904 may be commercial software such as Microsoft Outlook®, Lotus Notes®, handheld-device interface software, or any application which allows a user to track tasks. Information management application 904 may include functionality to interface with e-mail 906a, a calendar 906b, a tasks manager/to-do-list 906c, and other functionality and tools), and the views of the work unit page appear alongside the graphical chat interfaces (Fig. 5A, 506, 510; [0040] Referring to FIG. 5A, the user of the tasks module 500 of an IM application may be in a chat session with a remote user "Sydney_123" 502 over communications network 501. The user of "live conversation" tasks module 500 may see two windows: A "My Tasks" window 506 displaying the tasks 508 the remote user 502 had delegated to the local user (the terms local user and remote user are used for discussion purposes to identify two or more users who have communicated task information, and do not require users to be physically separated); and, a "Tasks I Delegated" window 510 listing the tasks the local user had delegated to one or more remote users 502; [0048] At step 706 the user interface of the tasks module may be rendered.
This may be done in a window adjacent to the main IM application window). Regarding dependent claim 9, the combination of Jakobson and Prakash teaches all the limitations as set forth in the rejection of claim 1 that is incorporated. Jakobson further teaches wherein the work unit record is assigned to a third user ([0034] Tasks data 316 received by IM application 300 may be processed by a task engine 318 and received by tasks module 304. Tasks module 304 may process the task data 316 and display it for the user as itemized tasks 308. Task data 316 may be recorded on, and/or retrieved from, a storage device 320 accessible to IM application 300. A user may update, delete, or add to tasks 308 using IM application 300 or another application (for example, a user checking a checkmark to mark a task as complete). Task update information may be processed 318 and transmitted over network 312, as data packet 316, to the remote IM application. In the presently preferred embodiment the task update information is transmitted in live session with IM application 300. In this manner the task update information may be transmitted as part of the Conversation Model to other users who are assigned tasks, have assigned tasks to the user updating the task information, or to users designated to receive task update information (for example, users who are merely made aware that one user has assigned a task to another user)). Regarding dependent claim 10, the combination of Jakobson and Prakash teaches all the limitations as set forth in the rejection of claim 1 that is incorporated. 
Jakobson further teaches wherein the work unit page corresponds to the work unit record by presenting information stored in the work unit record, and the views of the work unit page provide access to the work unit record so that the work unit record is editable by the first user and/or the second user via the views of the work unit page presented concurrently with the graphical chat interfaces of the chat session ([0042] In the "Content Tab Mode" of the embodiment illustrated in FIG. 5B, the user may access a task 522 displayed in tasks module 518. The user may be able to select a category of tasks (for example, based on the user who had assigned the tasks 520) and retrieve 524 the tasks from a task record 514. Drop-down box 520 may be populated by all unique user names in the task records (or may include all unique user names in the contacts list of the IM 500, or may include only those unique user names which have a current task, etc.), such that the local user may choose to see tasks assigned by any particular remote user. The local user may change the attributes of a task, such as marking it as complete (for example, by pressing the "done" button 526.) A change in one or more attributes of a task 522 may be recorded in that task's record 514. (e.g. the completed flag 5169 may be changed from a "false" to a "true" to indicate completion)). Regarding independent claim 11, it is a method claim that corresponds to the system of claim 1. Therefore, it is rejected for the same reason as claim 1 above. Regarding dependent claim 12, it is a method claim that corresponds to the system of claim 2. Therefore, it is rejected for the same reason as claim 2 above. Regarding dependent claim 19, it is a method claim that corresponds to the system of claim 9. Therefore, it is rejected for the same reason as claim 9 above. Regarding dependent claim 20, it is a method claim that corresponds to the system of claim 10.
Therefore, it is rejected for the same reason as claim 10 above. Claims 3-6 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Jakobson, in view of Prakash as applied in claims 1 and 11, further in view of Khan et al. (hereinafter Khan), US 20180341928 A1. Regarding dependent claim 3, the combination of Jakobson and Prakash teaches all the limitations as set forth in the rejection of claim 1 that is incorporated. The combination of Jakobson and Prakash does not explicitly disclose wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify potential content in the content of the chat session that leads to identification of the first work unit record. However, in the same field of endeavor, Khan teaches wherein the one or more physical processors are further configured to execute the machine-readable instructions to (Fig. 6, 610; [0068]): identify potential content in the content of the chat session that leads to identification of the work unit record (Fig 2; Figs. 3A-3B; [0036] By way of illustration, FIG. 3A depicts an example scenario in which communication application 116 is executing on device 110. As shown in FIG. 3A, user 130A (here, ‘Bob’) can provide/input communication/message 330A (“Can you grab . . . ”); [0037] At operation 220, the communication (e.g., the communication received at operation 210) is processed. In doing so, a content element (or multiple content elements) can be identified within or otherwise extracted from the communication. In certain implementations, such a content element can include but it not limited to an intent, an entity, or an action (and/or parameters/values of such elements). For example, with respect to FIG. 
3A, communication/message 330A can be processed (e.g., by content processing engine 142) to identify or extract various content elements such as content element 350A (‘grab some food’ which corresponds to the intent ‘order food’); [0039] By way of illustration, as shown in FIG. 3A, user 130B (here, ‘Anne’) can provide/input communication/message 330B (“Sure . . . order pizza?”); [0046] At operation 260, a task is identified (e.g., by task management engine 144 and/or server 140). In certain implementations, such a task is identified based on an association between various content elements (e.g., the content elements associated at operation 250). As noted above, the referenced tasks can be action items or activities to be performed or completed (e.g., by a user). For example, as described above, during conversation 370A (as shown in FIG. 3A), various content elements (e.g., content elements 350A, 350B, and 350C) can be identified and associated with one another (e.g., within a knowledge base/conversational graph). Based on such association(s), a task (e.g., to order a ‘Pacific Veggie’ pizza) can be identified; [0054] In certain implementations, though a task may be initially assigned to one user, in certain scenarios such task can be reassigned, e.g., to another user. By way of illustration, FIG. 5A depicts an example scenario showing a communication session 570A between two users (Bob and Anne) as depicted at device 110A (here, the device being used by ‘Anne’), while FIG. 5B depicts the same communication session as depicted at device 110B (here, the device being used by ‘Bob’). As shown in FIG. 5A, based on the various communications 530A-530B, it can be determined that ‘Anne’ is to be assigned with the task of ordering pizza. However, based on subsequent communications (e.g., communications 530C-530D), it may be necessary to adjust such assignment, e.g., by assigning the referenced task to another user (e.g., to Bob). Accordingly, as shown in FIG. 
5B, though the referenced task (ordering pizza) was initially assigned to Anne, based on subsequent communications the task can be reassigned to another user (here, Bob)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of task identification and tracking using shared conversational context as suggested in Khan into Jakobson and Prakash’s system because both systems address identifying existing tasks based on content extracted from conversational context. This modification would have been motivated by the desire to facilitate and/or track the performance of identified tasks (Khan, [0027]). Regarding dependent claim 4, the combination of Jakobson, Prakash, and Khan teaches all the limitations as set forth in the rejection of claim 3 that is incorporated. Khan further teaches wherein the potential content includes trigger phrases and/or words ([0037] For example, with respect to FIG. 3A, communication/message 330A can be processed (e.g., by content processing engine 142) to identify or extract various content elements such as content element 350A (‘grab some food’ which corresponds to the intent ‘order food’)). Regarding dependent claim 5, the combination of Jakobson, Prakash, and Khan teaches all the limitations as set forth in the rejection of claim 4 that is incorporated. Khan further teaches wherein the trigger phrases and/or words include one or more of a name of a user, a title of a unit of work, a description of a unit of work, or a date associated with a unit of work ([0054] In certain implementations, though a task may be initially assigned to one user, in certain scenarios such task can be reassigned, e.g., to another user. By way of illustration, FIG. 5A depicts an example scenario showing a communication session 570A between two users (Bob and Anne) as depicted at device 110A (here, the device being used by ‘Anne’), while FIG.
5B depicts the same communication session as depicted at device 110B (here, the device being used by ‘Bob’). As shown in FIG. 5A, based on the various communications 530A-530B, it can be determined that ‘Anne’ is to be assigned with the task of ordering pizza. However, based on subsequent communications (e.g., communications 530C-530D), it may be necessary to adjust such assignment, e.g., by assigning the referenced task to another user (e.g., to Bob). Accordingly, as shown in FIG. 5B, though the referenced task (ordering pizza) was initially assigned to Anne, based on subsequent communications the task can be reassigned to another user (here, Bob). Examiner notes that “grab food” is a description of the task). Regarding dependent claim 6, the combination of Jakobson and Prakash teaches all the limitations as set forth in the rejection of claim 1 that is incorporated. The combination of Jakobson and Prakash does not explicitly disclose wherein the graphical chat interfaces are presented outside of the collaboration environment. However, in the same field of endeavor, Khan teaches wherein the graphical chat interfaces are presented outside of the collaboration environment (Figs. 3A-3B; [0036] By way of illustration, FIG. 3A depicts an example scenario in which communication application 116 is executing on device 110. As shown in FIG. 3A, user 130A (here, ‘Bob’) can provide/input communication/message 330A (“Can you grab . . . ”); [0037] At operation 220, the communication (e.g., the communication received at operation 210) is processed. In doing so, a content element (or multiple content elements) can be identified within or otherwise extracted from the communication. In certain implementations, such a content element can include but it not limited to an intent, an entity, or an action (and/or parameters/values of such elements). For example, with respect to FIG. 
3A, communication/message 330A can be processed (e.g., by content processing engine 142) to identify or extract various content elements such as content element 350A (‘grab some food’ which corresponds to the intent ‘order food’); [0039] By way of illustration, as shown in FIG. 3A, user 130B (here, ‘Anne’) can provide/input communication/message 330B (“Sure . . . order pizza?”); [0046] At operation 260, a task is identified (e.g., by task management engine 144 and/or server 140). In certain implementations, such a task is identified based on an association between various content elements (e.g., the content elements associated at operation 250). As noted above, the referenced tasks can be action items or activities to be performed or completed (e.g., by a user). For example, as described above, during conversation 370A (as shown in FIG. 3A), various content elements (e.g., content elements 350A, 350B, and 350C) can be identified and associated with one another (e.g., within a knowledge base/conversational graph). Based on such association(s), a task (e.g., to order a ‘Pacific Veggie’ pizza) can be identified). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of task identification and tracking using shared conversational context as suggested in Khan into Jakobson and Prakash’s system because both systems address identifying existing tasks based on content extracted from conversational context. This modification would have been motivated by the desire to facilitate and/or track the performance of identified tasks (Khan, [0027]). Regarding dependent claim 13, it is a method claim that corresponds to the system of claim 3. Therefore, it is rejected for the same reason as claim 3 above. Regarding dependent claim 14, it is a method claim that corresponds to the system of claim 4. Therefore, it is rejected for the same reason as claim 4 above.
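The Khan pipeline the rejection relies on — extract content elements (intents, entities) from each message, associate them across the conversation, then identify a task from the associated elements — can be sketched as follows. The keyword lexicons, function names, and data shapes are hypothetical stand-ins for Khan's content processing and task management engines, not the reference's actual implementation.

```python
# Toy lexicons standing in for Khan's content processing engine.
INTENT_KEYWORDS = {"grab some food": "order food"}
ENTITY_KEYWORDS = {"pizza", "pacific veggie"}

def extract_elements(message):
    """Identify content elements (intents/entities) within a message."""
    text = message.lower()
    elements = []
    for phrase, intent in INTENT_KEYWORDS.items():
        if phrase in text:
            elements.append(("intent", intent))
    for entity in ENTITY_KEYWORDS:
        if entity in text:
            elements.append(("entity", entity))
    return elements

def identify_task(conversation):
    """Associate elements across messages (a stand-in for the
    knowledge base / conversational graph), then identify a task."""
    graph = []
    for message in conversation:
        graph.extend(extract_elements(message))
    intents = [v for k, v in graph if k == "intent"]
    entities = [v for k, v in graph if k == "entity"]
    if intents:
        return {"action": intents[0], "parameters": entities}
    return None

task = identify_task([
    "Can you grab some food on the way?",
    "Sure. Should I order pizza?",
    "Yes, a Pacific Veggie please.",
])
```

The point of the sketch is the association step: no single message names the full task, but the elements accumulated across the conversation do.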
Regarding dependent claim 15, it is a method claim that corresponds to the system of claim 5. Therefore, it is rejected for the same reason as claim 5 above. Regarding dependent claim 16, it is a method claim that corresponds to the system of claim 6. Therefore, it is rejected for the same reason as claim 6 above. Claims 7-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jakobson, in view of Prakash as applied in claims 1 and 11, further in view of Wanderski et al. (hereinafter Wanderski), US 20180083792 A1. Regarding dependent claim 7, the combination of Jakobson and Prakash teaches all the limitations as set forth in the rejection of claim 1 that is incorporated. The combination of Jakobson and Prakash does not explicitly disclose wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify an objective record based on the content of the chat session; and generate and present views of an objective page concurrently with the graphical chat interfaces of the chat session that corresponds to the objective record. However, in the same field of endeavor, Wanderski teaches the one or more physical processors are further configured to execute the machine-readable instructions to (Figs. 6A-6B; [0117]): identify an objective record based on the content of the chat session ([0103] Referring to FIG. 5, at 500, the system may be configured to monitor one or more ongoing chat communication sessions between users or members of an organization. At 502, the system may identify a piece of information exchanged during the chat communication session.
For example, during chat communication sessions, the system may utilize text analytics to parse and interpret language or data exchanged during the communication session); and generate and present views of an objective page concurrently with the graphical chat interfaces of the chat session that corresponds to the objective record ([0104]-[0107] At 504, the system may compare the piece of information with profile information of a first user … at 506, a signal to an electronic device operated by the user to suggest assigning a tag to the piece of information. For example, the system may identify the piece of information as corresponding to or being similar to information that was previous assigned a particular tag by the user, or relevant to a skill set, project, working group, or interest of the user … the system may identify, based on a text analytics analysis, that the piece of information relates to a particular workgroup or project (or any other suitable category of tags)); [0108] At 518, the system may receive a request from the electronic device operated by the first user or the second user to view the piece of information associated with the tag. For example, according to some embodiments, the system may transmit a signal to the electronic device operated by the first user and/or the second user to display a user interface configured to display the piece of information in response to the request). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of identifying a tag based on the content of the chat session as suggested in Wanderski into Jakobson and Prakash’s system because both systems address identifying existing tasks based on content extracted from the chat session. This modification would have been motivated by the desire to provide a mechanism for members of an organization to communicate quickly and conveniently (Wanderski, [0003]).
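Wanderski's tag suggestion can be pictured with a short sketch (the profile schema, names, and matching heuristic are hypothetical illustrations, not Wanderski's implementation): a piece of information from the chat session is compared against a user's profile, and a matching tag is suggested.

```python
# Hypothetical user profiles: categories of tags relevant to each user.
USER_PROFILES = {
    "alice": {"projects": ["apollo"], "skills": ["kubernetes"]},
}

def suggest_tag(piece_of_information, user):
    """Compare a piece of information exchanged in the chat session
    with a user's profile and suggest a matching tag, if any."""
    profile = USER_PROFILES.get(user, {})
    text = piece_of_information.lower()
    for category, tags in profile.items():
        for tag in tags:
            if tag in text:
                return {"tag": tag, "category": category}
    return None  # nothing in the profile matched

suggestion = suggest_tag("The apollo rollout slipped to next sprint", "alice")
```

Here the message mentions a project in the user's profile, so the project name is suggested as the tag; Wanderski's text analytics would of course go beyond substring matching.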
Regarding dependent claim 8, the combination of Jakobson and Prakash teaches all the limitations as set forth in the rejection of claim 1 that is incorporated. The combination of Jakobson and Prakash does not explicitly disclose wherein the one or more physical processors are further configured to execute the machine-readable instructions to: identify a project record based on the content of the chat session; and generate and present views of a project page concurrently with the graphical chat interfaces of the chat session that corresponds to the project record. However, in the same field of endeavor, Wanderski teaches wherein the one or more physical processors are further configured to execute the machine-readable instructions to (Figs. 6A-6B; [0117]): identify a project record based on the content of the chat session ([0103] Referring to FIG. 5, at 500, the system may be configured to monitor one or more ongoing chat communication sessions between users or members of an organization. At 502, the system may identify a piece of information exchanged during the chat communication session. For example, during chat communication sessions, the system may utilize text analytics to parse and interpret language or data exchanged during the communication session); and generate and present views of a project page concurrently with the graphical chat interfaces of the chat session that corresponds to the project record ([0104]-[0107] At 504, the system may compare the piece of information with profile information of a first user … at 506, a signal to an electronic device operated by the user to suggest assigning a tag to the piece of information. 
For example, the system may identify the piece of information as corresponding to or being similar to information that was previous assigned a particular tag by the user, or relevant to a skill set, project, working group, or interest of the user … the system may identify, based on a text analytics analysis, that the piece of information relates to a particular workgroup or project (or any other suitable category of tags)); [0108] At 518, the system may receive a request from the electronic device operated by the first user or the second user to view the piece of information associated with the tag. For example, according to some embodiments, the system may transmit a signal to the electronic device operated by the first user and/or the second user to display a user interface configured to display the piece of information in response to the request). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of identifying a tag based on the content of the chat session as suggested in Wanderski into Jakobson and Prakash’s system because both systems address identifying existing tasks based on content extracted from the chat session. This modification would have been motivated by the desire to provide a mechanism for members of an organization to communicate quickly and conveniently (Wanderski, [0003]). Regarding dependent claim 17, it is a method claim that corresponds to the system of claim 7. Therefore, it is rejected for the same reason as claim 7 above. Regarding dependent claim 18, it is a method claim that corresponds to the system of claim 8. Therefore, it is rejected for the same reason as claim 8 above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. ENTREKIN et al.
(US 20200341882 A1) discloses collaboration systems that enable users to participate in collaboration sessions from multiple locations. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)). THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMY P HOANG whose telephone number is (469)295-9134. The examiner can normally be reached M-TH 8:30-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JENNIFER WELCH can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AMY P HOANG/Examiner, Art Unit 2143 /JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Dec 19, 2023
Application Filed
Aug 27, 2025
Non-Final Rejection — §103, §DP
Nov 21, 2025
Response Filed
Feb 05, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602596
APPARATUS AND METHOD FOR VALIDATING DATASET BASED ON FEATURE COVERAGE
2y 5m to grant Granted Apr 14, 2026
Patent 12572263
ACCESS CARD WITH CONFIGURABLE RULES
2y 5m to grant Granted Mar 10, 2026
Patent 12536432
PRE-TRAINING METHOD OF NEURAL NETWORK MODEL, ELECTRONIC DEVICE AND MEDIUM
2y 5m to grant Granted Jan 27, 2026
Patent 12475669
METHOD AND APPARATUS WITH NEURAL NETWORK OPERATION FOR DATA NORMALIZATION
2y 5m to grant Granted Nov 18, 2025
Patent 12461595
SYSTEM AND METHOD FOR EMBEDDED COGNITIVE STATE METRIC SYSTEM
2y 5m to grant Granted Nov 04, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+64.2%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
