DETAILED ACTION
This communication is a Final Office Action rejection on the merits. Claims 1-6, 11-13, 16-18, 21-23, and 25-27 are currently pending and have been addressed below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed on 01/26/2026 (related to the 103 Rejection) have been fully considered but are moot in view of the new grounds of rejection. Applicant's amendments necessitated the new ground(s) of rejection presented in this Office action. A rejection based on newly cited reference(s) follows.
Applicant's arguments filed on 01/26/2026 (related to the 101 Rejection) have been fully considered but they are not persuasive.
Applicant states, on pages 12-13, that amended claim 1 recites subject matter that goes beyond merely applying the alleged judicial exception on a computer and meaningfully limits the alleged judicial exception to improve the fields of data privacy and data security, and further that amended claim 1 meaningfully limits the alleged judicial exception to improve the field of machine learning.
Examiner respectfully disagrees with Applicant. Step 2A, Prong Two - Claim 1 includes additional elements such as: a machine learning model; a classifier; an interface of a computing device; and a secure communication channel. The machine learning model is merely used to: identify correlations between different elements of the set of data (Paragraph 0294); output a score indicating a priority of the task (Paragraph 0209); and receive modifications of a task summary (Paragraph 0169, task accepted by the member; Paragraph 0256, a representative adding or removing tasks; Paragraph 0295, feedback to improve the algorithm for generating correlations). The classifier is merely used to output a priority for the task based on the task data and the user model (Paragraph 0009). The interface is merely used to: receive a request to generate task summary data (Paragraphs 0046 & 0083); and receive interaction data (Paragraph 0065, additional member input is needed). The secure communication channel is merely used to obscure a portion of the user model that corresponds to user-sensitive data (Paragraph 0106, payment information). These elements of “machine learning model,” “classifier,” “interface,” and “secure communication channel” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer elements (MPEP 2106.05(f)). In this case, the machine learning model, such as a classifier, includes inputs (e.g., task parameters) and outputs (e.g., a priority score). However, the claim and specification do not include any specific details about how the trained machine learning model operates or classifies the data (see MPEP 2106.05(a) and 2024 AI Guidance, Example 47). The user interface is considered a “field of use” limitation because it is merely used to receive a request and provide a summary; the interface itself is not improved (MPEP 2106.05(h)).
Step 2B - The user interface is considered a conventional computer function of “receiving and transmitting data over a network” and “performing repetitive calculations” (MPEP 2106.05(d)). In this case, the user interface is merely used to arrange information (e.g., display a task summary in response to a request) in a manner that assists users in processing information more quickly, which is not sufficient to show an improvement in computer functionality (see MPEP 2106.05(a)). Also, the function of “obscuring a portion of the user model that corresponds to user-sensitive data” is a well-known function in the data security field (MPEP 2106.05(d)). Lastly, the claim fails to recite any improvement to another technology or technical field, improvement to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, unconventional steps that confine the claim to a particular useful application, or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim amounts to significantly more than the abstract idea itself. Thus, the claim is not patent eligible.
Independent claims 11 and 16 recite similar features and are therefore rejected for the same reasons as independent claim 1. Claims 2-6, 12-13, 17-18, 21-23, and 25-27 are rejected for having the same deficiencies as those set forth with respect to the independent claims from which they depend, claims 1, 11, and 16.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 11-13, 16-18, 21-23, and 25-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.
Independent Claim 1
Step One - First, pursuant to Step 1 of the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 1 is directed to a method, which is a statutory category.
Step 2A, Prong One - Claim 1 recites: A method comprising: determining task data that identifies tasks associated with a user, wherein the task data includes task parameters that represent one or more characteristics associated with the tasks; accessing a user model of the user, wherein the user model is updated based on historic user activity, and wherein the user model comprises attributes associated with the user; receiving a request to generate task summary data associated with the tasks; processing the task parameters and the user model through a model to generate a set of priority scores associated with the tasks, wherein the model is configured to determine a correlation between the task data and the set of priority scores and is trained based on data representing previous interactions with the user or with other users; generating the task summary data that includes a subset of the tasks, wherein the subset of the tasks includes priority scores that exceed a task threshold, and wherein generating the task summary data includes determining a priority for each task of the subset of tasks, and wherein the priority for the task is based on the task data and the user model; transmitting the task summary data, wherein when the task summary data is received, a response message is dynamically displayed, wherein the response message includes elements that represent the subset of the tasks; receiving interaction data associated with an element that corresponds to a task of the subset of the tasks; updating the model with the interaction data; establishing a communication between an automated agent and the user after receiving the interaction data; obscuring a portion of the user model that corresponds to user-sensitive data; and transmitting the obscured user model and progress-status data of the task, wherein as the progress-status data and the obscured user model are received, the communication dynamically generates status messages associated with the task in 
real-time. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity,” which include “managing personal behavior.” In this case, “presenting a subset of tasks prioritized based on the user model and the task data” is considered managing personal behavior because it amounts to “filtering content.” Also, the steps of “training” and “determining a correlation” are considered mathematical calculations. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or mathematical calculations, then it falls within the “certain methods of organizing human activity” or “mathematical concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A, Prong Two - The judicial exception is not integrated into a practical application. Claim 1 includes the following additional elements: a computer; a task-facilitation server; a message through an interface associated with a computing device; a user model; a machine learning model; a classifier; a chat-response message that includes interactive user-interface elements; and a secure communication channel.
The computer is merely used to: execute instructions (Paragraph 0013). The task-facilitation service is merely used to generate and provide task summary data to a member computing device to present a corresponding task summary (Paragraph 0027). The interface associated with a computing device is merely used to: receive a request to generate task summary data (Paragraphs 0046 & 0083); and receive interaction data (Paragraph 0065, additional member input is needed). The user model is merely used to include user data, such as information regarding the preferences, behavior, personality, and other similar characteristics of the member (Paragraph 0211). The machine learning model is merely used to: identify correlations between different elements of the set of data (Paragraph 0294); output a score indicating a priority of the task (Paragraph 0209); and receive modifications of a task summary (Paragraph 0169, task accepted by the member; Paragraph 0256, a representative adding or removing tasks; Paragraph 0295, feedback to improve the algorithm for generating correlations). The classifier is merely used to output a priority for the task based on the task data and the user model (Paragraph 0009). The interactive user-interface elements are merely used to modify task summary data (Paragraph 0186). The secure communication channel is merely used to obscure a portion of the user model that corresponds to user-sensitive data (Paragraph 0106, payment information). Merely stating that a step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). These elements of “computer,” “task-facilitation server,” “interface associated with a computing device,” “user model,” “machine learning model,” “classifier,” “interactive user-interface elements,” and “secure communication channel” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer elements.
Also, the user interface is considered a “field of use” limitation because it is merely used to receive a request and provide a summary; the interface itself is not improved (MPEP 2106.05(h)). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of generating task summary data using a user model of a user and task data for tasks associated with the user. The specification shows that the computer is merely used to: execute instructions (Paragraph 0013). The task-facilitation service is merely used to generate and provide task summary data to a member computing device to present a corresponding task summary (Paragraph 0027). The interface associated with a computing device is merely used to: receive a request to generate task summary data (Paragraphs 0046 & 0083); and receive interaction data (Paragraph 0065, additional member input is needed). The user model is merely used to include user data, such as information regarding the preferences, behavior, personality, and other similar characteristics of the member (Paragraph 0211). The machine learning model is merely used to: identify correlations between different elements of the set of data (Paragraph 0294); output a score indicating a priority of the task (Paragraph 0209); and receive modifications of a task summary (Paragraph 0169, task accepted by the member; Paragraph 0256, a representative adding or removing tasks; Paragraph 0295, feedback to improve the algorithm for generating correlations). The classifier is merely used to output a priority for the task based on the task data and the user model (Paragraph 0009). The interactive user-interface elements are merely used to modify task summary data (Paragraph 0186). The secure communication channel is merely used to obscure a portion of the user model that corresponds to user-sensitive data (Paragraph 0106, payment information).
In this case, the machine learning model includes inputs (e.g., task parameters) and outputs (e.g., a priority score). However, the claim and specification do not include any specific details about how the trained machine learning model operates or classifies the data (see MPEP 2106.05(a) and 2024 AI Guidance, Example 47). Also, the steps of “receiving updated messages and presenting updated task summary data” are considered well-understood, routine, and conventional functions because they amount to “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Further, instructions to display and/or arrange information in a graphical user interface may not be sufficient to show an improvement in computer functionality (MPEP 2106.05(a)). In this case, the user interface is merely used to arrange information (e.g., presenting a visualization such as filtering data) in a manner that assists users in processing information more quickly, which is not sufficient to show an improvement in computer functionality (see MPEP 2106.05(a)). Lastly, the function of “obscuring a portion of the user model that corresponds to user-sensitive data” is a well-known function in the data security field (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Independent claim 11 is directed to an apparatus at Step 1, which is a statutory category. Claim 11 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 11 further recites: one or more data processors; and a non-transitory computer-readable storage medium. The processor is merely used to execute instructions (Paragraph 0013). The non-transitory computer-readable storage medium is merely used to store instructions (Paragraph 0013). These elements of “processor” and “non-transitory computer-readable storage medium” are treated as merely an explicit “processor/computer” for executing the operations and are treated under MPEP 2106.05(f) in the same manner as in claim 1. Accordingly, these limitations are viewed as “apply it on a computer” at Step 2A, Prong Two and at Step 2B. Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Independent claim 16 is directed to an article of manufacture at Step 1, which is a statutory category. Claim 16 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 16 further recites: a non-transitory machine-readable storage medium; and a computing system. The non-transitory machine-readable storage medium is merely used to store instructions (Paragraph 0267). The computing system is merely used to execute instructions (Paragraph 0264). These elements of “non-transitory machine-readable storage medium” and “computing system” are treated as merely an explicit “processor/computer” for executing the operations and are treated under MPEP 2106.05(f) in the same manner as in claim 1. Accordingly, these limitations are viewed as “apply it on a computer” at Step 2A, Prong Two and at Step 2B. Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Dependent claims 2-6, 12-13, 17-18, 21-23, and 25-27 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above - such as wherein the computing device is further used to: receive an indication including an input from the user; receive a response to the reminder; identify a change to the user model resulting in an updated user model; modify the task summary; update the interface to present an updated task summary; and update a model used in generating subsequent task summary data. These processes are similar to the abstract idea noted in the independent claims because they further limit the independent claims, which are directed to certain methods of organizing human activity, including managing personal behavior (e.g., which tasks the user needs to perform first based on a priority). The additional functions of the computing device are considered “field of use” at Step 2A, Prong Two, since they are merely used to collect information, analyze it, and display certain results of the collection and analysis of data (see MPEP 2106.05(h)). At Step 2B, these functions remain conventional: “receiving and transmitting data over a network” and “performing repetitive calculations” (see MPEP 2106.05(d)). Thus, nothing in these claims adds significantly more to the abstract idea. The claims are ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6, 11-13, 16-18, 21-23, and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Sim et al. (US 2021/0373943 A1), in view of Jothilingam et al. (US 2017/0193349 A1), in further view of Fang et al. (US 2019/0251446 A1) and Raleigh et al. (US 2020/0045519 A1).
Regarding claim 1 (Currently Amended), Sim et al. discloses a computer-implemented method comprising (Paragraph 0035, FIG. 3 is a communication flow illustrating a method of semi-autonomously managing a subtask in accordance with aspects of the present disclosure):
determining, by a task-facilitation server, task data that identifies tasks associated with a user, wherein the task data includes task parameters that represent one or more characteristics associated with the tasks (Figure 6, item 602, Server; Paragraph 0017, The present disclosure relates to systems and methods for an interactive, intelligent hub built around the completion of a task. This hub brings together resources, information, suggested steps, and other automated assistance to facilitate the completion of the task. AI-based assistance may indicate which steps can be completed by automated processes, and dispatch those processes, or suggest resources to assist in the completion of other steps; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations; Paragraph 0029, The model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc; Paragraph 0035, FIG. 3 is a communication flow illustrating a method of semi-autonomously managing a subtask in accordance with aspects of the present disclosure. User 302 sends a task 310 to task hub 304, such as “take a business trip” or “plan a wedding reception.” In aspects, the user may send this task through a semi-autonomous task application on a user device or directly to the task hub web service through the user device. 
The task hub 304 uses the task 310 to determine what subtasks and other input 312 (such as input 204 in FIG. 2) are needed for task 310. In aspects, the tasks/input 312 is received from the knowledge base/resources 306. The knowledge base/resources 306 may include any source of information available to the task hub 304, including without limitation, the task archives, the Internet, the task hub model, user resources, such as user accounts, and user preferences. In other aspects, the user 302 may send some or all of the subtasks and other input 311 to the task hub 304; As stated in Paragraph 0061 of Applicant’s specification, task parameters may include member preferences or timeframe for completion);
accessing, by the task-facilitation server, a user model of the user, wherein the user model is updated based on historic user activity, and wherein the user model comprises attributes associated with the user (Figure 6, item 602, Server; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; As stated in Paragraph 0097 of Applicant’s specification, attributes associated with the user may include member feedback corresponding to presented tasks/proposals);
receiving, by the task-facilitation server, a message through an interface associated with a computing device, wherein the message includes a request to generate task summary data associated with the tasks (Figure 6, item 602, Server; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask. Scroll 512 allows users to see all of the subtasks that are part of the active task in pane 502. Information that is underlined is hyperlinked to additional detail. For example, clicking on the “A” in the subtask list 504 brings up further information about subtask A including without limitation the subtask definitions described in connection with FIG. 2. In aspects, the user may edit these definitions through this view (not shown). The status 506 provides the user with information regarding the state of the task. The owner 508 indicates who is responsible for completing the subtask. Subtasks A and B indicate that the “hub” is the owner of the subtask meaning that they will be performed automatically by the task hub without user intervention. In aspects, the hub determines the owner of each subtask but a user may change the owner of the subtask by selecting the underlined owner name for each subtask entry. The info needed 510 indicates whether the task hub needs information or resources to completed. 
For example, task B shows as in progress, that it is owned by the user (e.g., being performed by the user), and no information is needed to complete the task. In aspects the task hub keeps track of user actions to keep the state of the tasks up to date. For example, the user might keep the task hub informed as to status by copying the task hub on emails sent in performance of a subtask. Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task. The user may select the “Yes” hyperlink to see what information is needed and to provide the necessary information so that the hub may complete the subtask. Subtask D has been assigned to Delegate 1 which may be any other user who has access to the task hub; As stated in Figure 9 of Applicant’s specification, task summary may include progress/status of the task);
processing, by the task-facilitation server, the task parameters and the user model through a machine-learning model to generate a set of priority … associated with the tasks, wherein the machine-learning model is configured to [learn] between the task data and the set of priority … (Figure 6, item 602, Server; Paragraph 0021, The task hub 102 may provide recommendations to the user 104 depending on the content of the task, user status, user feedback and personalized needs for information. For example, based on the search results, the task hub 102 may recommend to the user 104 to add a subtask of hiring a band; Paragraph 0026, Task hub model 202, such as task hub model 156 in FIG. 1, is shown as part of system 200. In aspects, the task hub model is a machine learning model, such as neural network and may be a recurrent neural network, a convolutional neural network, a transformer network machine learning model, and/or a multi-task neural network. Information from the user 206 is fed into the input 204. In aspects, the user may provide information 206 regarding the task. In other aspects, the user may provide information 206 regarding some or all of the subtasks and may even provide detail regarding some of sub-actions of the subtasks; Paragraph 0029, The order of subtasks 222 provides the task agent in the task hub with the order that each subtask should be performed according to the input 204 that was fed into the task hub model 202. To determine the order of subtasks 222, the model first identifies any explicit dependencies between subtasks 216A and 216B. 
Then, the model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc.; Paragraph 0042, At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that the machine learning can rank/prioritize the order of the subtasks based on task parameters such as user behavior, user preference, and/or task deadline (e.g., pick up flowers 48 hours in advance)) and is trained based on data representing previous interactions with the user or with other users (Paragraph 0042, At operation 402, the task hub receives a task from a user, a task application, a calendar application, or any other type of application capable of performing or assigning tasks. At operation 404, a list of subtasks is generated for the task as has been described herein with reference to FIGS. 2 and 3. 
At operation 405, the subtasks are placed in order that they should be completed based on the model as described in connection with FIG. 2. At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner interprets the training set with previous user actions or preferences as the data representing previous interactions with the user);
generating, by the task-facilitation server, the task summary data that includes a subset, and wherein the subset of the tasks includes priority … (Figure 6, item 602, Server; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations. The task hub 102 may provide selection of an action to take in support of the user (e.g., provide clarification, search web, show video, etc.) and selection a device(s) to support task completion. Considerations for task assistance include: the type of the task, user preferences, available devices and resources, and potential automatic breakdown of the task into steps; Paragraph 0029, The order of subtasks 222 provides the task agent in the task hub with the order that each subtask should be performed according to the input 204 that was fed into the task hub model 202. To determine the order of subtasks 222, the model first identifies any explicit dependencies between subtasks 216A and 216B. Then, the model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc.; Paragraph 0035, User 302 sends a task 310 to task hub 304, such as “take a business trip” or “plan a wedding reception.” In aspects, the user may send this task through a semi-autonomous task application on a user device or directly to the task hub web service through the user device. 
The task hub 304 uses the task 310 to determine what subtasks and other input 312 (such as input 204 in FIG. 2) are needed for task 310. In aspects, the tasks/input 312 is received from the knowledge base/resources 306. The knowledge base/resources 306 may include any source of information available to the task hub 304, including without limitation, the task archives, the Internet, the task hub model, user resources, such as user accounts, and user preferences; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that Sim et al. provides recommendations/assistance for a task based on user behavior, user preference, and/or task deadline), and wherein generating the task summary data includes determining a priority for each task of the subset of tasks … (Paragraph 0022, In the planning aspects, the task hub 102 may provide a recommendation at every step of the interaction with the user 104 based on selection of the type of the recommendation and generation of the content of recommendation. In an example, a task hub model 156 uses a current definition of the step of the task, prior tasks, and/or future steps in the current task as context for determining the type of the recommendation that is to be provided as discussed in more detail with reference to FIG. 2. This model may be a seq2seq model that is paired with variational autoencoder for classification using a neural network such as, for example, a Bidirectional RNN. 
The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations. The task hub 102 may provide selection of an action to take in support of the user (e.g., provide clarification, search web, show video, etc.) and selection a device(s) to support task completion. Considerations for task assistance include: the type of the task, user preferences, available devices and resources, and potential automatic breakdown of the task into steps; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that Sim et al. provides recommendations/assistance for a task based on user behavior, user preference, and/or task deadline);
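For illustration only, the ordering criteria quoted above from Sim et al., Paragraph 0029 (rank subtasks by needed lead time, then group by proximity/relevance) could be sketched as follows. This is a minimal sketch; the function and field names are hypothetical and do not appear in the reference.

```python
# Illustrative sketch of the subtask ordering criteria in Sim et al.,
# Paragraph 0029. Names are hypothetical, not drawn from the reference.
def order_subtasks(subtasks):
    # Subtasks needing the longest lead time sort first (e.g., a caterer
    # booked months in advance); ties are grouped by location so that
    # co-located subtasks can be performed together.
    return sorted(subtasks, key=lambda s: (-s["lead_days"], s["location"]))

subtasks = [
    {"name": "pick up flowers", "lead_days": 2, "location": "florist"},
    {"name": "book caterer", "lead_days": 90, "location": "online"},
    {"name": "order cake", "lead_days": 14, "location": "online"},
]
ordered = order_subtasks(subtasks)
# longest lead time first: book caterer, then order cake, then pick up flowers
```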
transmitting, by the task-facilitation server, the task summary data, wherein when the task summary data is received (Figure 6, item 602, Server; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask. For example, task B shows as in progress, that it is owned by the user (e.g., being performed by the user), and no information is needed to complete the task. Scroll 512 allows users to see all of the subtasks that are part of the active task in pane 502. Information that is underlined is hyperlinked to additional detail. For example, clicking on the “A” in the subtask list 504 brings up further information about subtask A including without limitation the subtask definitions described in connection with FIG. 2. In aspects, the user may edit these definitions through this view (not shown). In aspects the task hub keeps track of user actions to keep the state of the tasks up to date. For example, the user might keep the task hub informed as to status by copying the task hub on emails sent in performance of a subtask. Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task; Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. 
In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task), a chat-response message is dynamically displayed on the interface, wherein the chat-response message includes interactive user-interface elements that represent the subset of the tasks (Paragraph 0037, In aspects, the task hub 304 may need information from the user 302 to complete the subtask A. The task hub 304 issues an information request 318 to the user 302 to complete subtask A. In examples, the task hub may issue the request by sending a message to the user. Alternatively or additionally, the task hub 304 may flag the missing information so that the user sees it when the user accesses the task hub 304. The task hub 304 may already have known about this information needed from the task hub model 313 output. Additionally or alternatively, the task hub 304 might have determined it needed this information to automate the task from the information it received in information response 316. For example, the task hub 304 might have known it needed the departure and return dates for the airfare from the user 302 and send an information request 318 to the user for this information. The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice; Paragraph 0056, The system 802 can be implemented as a “smart phone” capable of running one or more applications (e.g., browser, e-mail, calendaring, contact managers, messaging clients, games, and media clients/players));
receiving, by the task-facilitation server, interaction data associated with an interactive user-interface element that corresponds to a task of the subset of the tasks (Paragraph 0037, In aspects, the task hub 304 may need information from the user 302 to complete the subtask A. The task hub 304 issues an information request 318 to the user 302 to complete subtask A. In examples, the task hub may issue the request by sending a message to the user. Alternatively or additionally, the task hub 304 may flag the missing information so that the user sees it when the user accesses the task hub 304. The task hub 304 may already have known about this information needed from the task hub model 313 output. Additionally or alternatively, the task hub 304 might have determined it needed this information to automate the task from the information it received in information response 316. For example, the task hub 304 might have known it needed the departure and return dates for the airfare from the user 302 and send an information request 318 to the user for this information. The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice; Examiner interprets “providing the departure and return dates for the airfare” as the “interaction data”);
updating, by the task-facilitation server, the machine-learning model with the interaction data (Paragraph 0042, At operation 402, the task hub receives a task from a user, a task application, a calendar application, or any other type of application capable of performing or assigning tasks. At operation 404, a list of subtasks is generated for the task as has been described herein with reference to FIGS. 2 and 3. At operation 405, the subtasks are placed in order that they should be completed based on the model as described in connection with FIG. 2. At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using machine learning. Initially, subtasks that can be automated may be completed by humans to produce a training set for a machine to learn to imitate user actions or preferences; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner interprets the training set with previous user actions or preferences as the data representing previous interactions with the user);
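For illustration only, the update step described in Sim et al., Paragraph 0047 (the hub periodically updates its model based on user actions, treating prior user actions as a training signal) could be sketched as follows. This is a minimal sketch; the class, method, and field names are hypothetical and do not appear in the reference.

```python
# Illustrative sketch of updating a learned model from interaction data
# (Sim et al., Paragraph 0047). Names are hypothetical.
class PreferenceModel:
    def __init__(self):
        self.action_counts = {}

    def update(self, interaction):
        # Record one user interaction, e.g., the user completing a subtask.
        key = (interaction["task_type"], interaction["action"])
        self.action_counts[key] = self.action_counts.get(key, 0) + 1

    def automatable(self, task_type, min_observations=3):
        # Treat a subtask type as automatable once enough completed
        # examples exist to serve as a training signal.
        return self.action_counts.get((task_type, "completed"), 0) >= min_observations

model = PreferenceModel()
for _ in range(3):
    model.update({"task_type": "book_flight", "action": "completed"})
# model.automatable("book_flight") is now True
```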
establishing, by the task-facilitation server, a … communication channel between an automated agent and the user after receiving the interaction data; … (Paragraph 0032, Resources might comprise a list of preferred caterers, whereas to execute a subtask with a single caterer the task hub model 202 might need more specific slot information such as menu choice and credit card number; Paragraph 0038, After receiving user response 320, the task hub 304 may automatically perform the subtask (not shown). For example, the task hub 304 may automatically purchase the airline tickets from the airline website based on the subtasks/input 311, subtasks/input 312, the info response 316, and/or the info response 320. Or the task hub may automatically place an order for the type of flower selected by the user in the user response 320 from a vendor);
and transmitting, by the task-facilitation server, … and progress-status data of the task, wherein as the progress-status data … are received, the … communication channel dynamically generates status messages associated with the task in real-time (Paragraph 0040, Regardless of whether additional action is required by the user 302 or not, the third party 308 sends confirmation 332 that the subtask is complete to the task hub 304. The task hub 304 then marks subtask A as complete 334. Optionally, the task hub 304 may send confirmation 336 to the user 302 that the subtask A is complete. The task hub may store all the information of the completed subtask in the task archives for later use (not shown)).
Although Sim et al. discloses using a machine learning model to generate a priority associated with the tasks based on task parameters (e.g., prioritize tasks based on previous user actions, learned preferences, and/or task deadline), Sim et al. does not specifically disclose wherein the priority is determined using a classifier.
However, Jothilingam et al. discloses processing, by the task-facilitation server, the task parameters and the user model through a machine-learning model to generate a set of priority scores associated with the tasks, wherein the machine-learning model is configured to determine a correlation between the task data and the set of priority scores (Paragraph 0026, In some examples, a computing system may learn to improve predictive models and summarization used for extracting tasks and categorizing or prioritizing the tasks using historical performance of a user for particular types of tasks. For example, a user may tend to demonstrate similar levels of performance for multiple tasks that are of a particular task type. Based, at least in part, on such historical data, which may be quantified and/or stored by the computer system and subsequently applied to predictive models (e.g., machine learning models), for example, efficient organization of resources (e.g., time and hardware) may be achieved; Paragraph 0052, In some examples, a system performing task extraction process 304 may determine a measure of importance of a task, where a low-importance task is one for which the user would consider to be relatively low priority (e.g., low level of urgency) and a high-importance task is one for which the user would consider to be relatively high priority (e.g., high level of urgency). Importance of a task may be useful for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities. Determining importance of a task may be based, at least in part, on history of events of the user (e.g., follow-through and performance of past tasks, and so on) and/or history of events of the other user and/or personal information (e.g., age, sex, age, occupation, frequent traveler, and so on) of the user or other user. For example, the system may query such histories. 
Determining importance of a task may also be based, at least in part, on key words or terms in text 306. For example, “need” generally has implications of a required action, so that importance of a task may be relatively strong. On the other hand, in another example that involves a task of meeting a friend for tea, such an activity is generally optional, and such a task may thus be assigned a relatively low measure of importance. If such a task of meeting a friend is associated with a job (e.g., occupation) of the user, however, then such a task may be assigned a relatively high measure of importance. The system may weigh a number of such scenarios and factors to determine the importance of a task. For example, the system may determine importance of a task in a message based, at least in part, on content related to the electronic message; Paragraph 0058, The task operations module may analyze the content to determine one or more meanings of the content. Analyzing content may be performed by any of a number of techniques to determine meanings of elements of the content, such as words, phrases, sentences, metadata (e.g., size of emails, date created, and so on), images, and how and if such elements are interrelated, for example. “Meaning” of content may be how one would interpret the content in a natural language. For example, the meaning of content may include a request for a person to perform a task. In another example, the meaning of content may include a description of the task, a time by when the task should be completed, background information about the task, and so on. In another example, the meaning of content may include properties of desired action(s) or task(s) that may be extracted or inferred based, at least in part, on a learned model; Paragraph 0068, FIG. 6 is a block diagram of a machine learning model 600, according to various examples. Machine learning model 600 may be the same as or similar to machine learning model 502 shown in FIG. 5. 
Machine learning model 600 includes any of a number of functional blocks, such as random forest block 602, support vector machine block 604, and graphical models block 606. Random forest block 602 may include an ensemble learning method for classification that operates by constructing decision trees at training time. Random forest block 602 may output the class that is the mode of the classes output by individual trees, for example; As stated in Paragraph 0223 of Applicant’s specification, task summary may include a reminder or notification regarding a task. Examiner interprets the reminders or to-do list based on the priority of the task as the task summary data that includes a subset of the tasks) and is trained based on data representing previous interactions with the user or with other users (Paragraph 0024, In some examples, a computing system may construct predictive models for identifying and extracting tasks and related information using machine learning procedures that operate on training sets of annotated corpora of sentences or messages (e.g., machine learning features). In still other examples, machine learning may utilize task execution tracking for a user. 
Such tracking may involve: user behavior and interests derived from an initial questionnaire and applying the behavior and interests to the way the user executes the task; recognition of intent of the user for the task; whether the user is performing a particular task type in a particular way based on the end goal of that task; pattern identification; determining how the user is faring on a particular time of a year, month, week, day for a particular task type (for example, if user is on a holiday, the user may only want to look at those tasks which will be more refreshing and lightweight); determining the external factors that influence the user's task initiation, execution, and completion (for example, such factors may be family commitments, health issues, vacation, long business trip, and so on); determining whether the user has a behavior style before, during, and after a task execution; determining whether the user is picking up the tasks on time; determining whether the user is completing the tasks on time; determining whether the user is postponing the tasks relatively frequently; determining whether there are any particular type of tasks that the user postpones; determining whether the user completes any high priority tasks; determining whether the user postpones tasks regardless of the type of the tasks (e.g., adhoc versus priority tasks); determining whether the user consciously responds to fly-out reminders for updating status of tasks; determining rate at which the user interacts with task updates frequently to update the task on time; determining rate at which the user postpones task updates; determining rate at which the user clears task lists by immediately picking up the next task as soon as the user is done with a task; determining a self-discipline trait of the user from the user's task follow-ups (for example, determining if the user sets up a meeting request, does the user diligently send minutes of the meeting to close the particular task);
determining how the user behaves while executing a particular type of task (for example, the user may take twice as long to perform coding task as compared to design tasks); and tracking the user task execution sequence, just to name some examples);
generating, by the task-facilitation server, the task summary data that includes a subset of the tasks, wherein the subset of the tasks includes priority scores that exceed a task threshold, and wherein generating the task summary data includes determining a priority for each task of the subset of tasks using a classifier, and wherein the classifier outputs the priority for the task based on the task data and the user model (Paragraph 0026, In some examples, a computing system may learn to improve predictive models and summarization used for extracting tasks and categorizing or prioritizing the tasks using historical performance of a user for particular types of tasks. For example, a user may tend to demonstrate similar levels of performance for multiple tasks that are of a particular task type. Based, at least in part, on such historical data, which may be quantified and/or stored by the computer system and subsequently applied to predictive models (e.g., machine learning models), for example, efficient organization of resources (e.g., time and hardware) may be achieved; Paragraph 0052, In some examples, a system performing task extraction process 304 may determine a measure of importance of a task, where a low-importance task is one for which the user would consider to be relatively low priority (e.g., low level of urgency) and a high-importance task is one for which the user would consider to be relatively high priority (e.g., high level of urgency). Importance of a task may be useful for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities. Determining importance of a task may be based, at least in part, on history of events of the user (e.g., follow-through and performance of past tasks, and so on) and/or history of events of the other user and/or personal information (e.g., age, sex, age, occupation, frequent traveler, and so on) of the user or other user. 
For example, the system may query such histories. Determining importance of a task may also be based, at least in part, on key words or terms in text 306. For example, “need” generally has implications of a required action, so that importance of a task may be relatively strong. On the other hand, in another example that involves a task of meeting a friend for tea, such an activity is generally optional, and such a task may thus be assigned a relatively low measure of importance. If such a task of meeting a friend is associated with a job (e.g., occupation) of the user, however, then such a task may be assigned a relatively high measure of importance. The system may weigh a number of such scenarios and factors to determine the importance of a task. For example, the system may determine importance of a task in a message based, at least in part, on content related to the electronic message; Paragraph 0058, The task operations module may analyze the content to determine one or more meanings of the content. Analyzing content may be performed by any of a number of techniques to determine meanings of elements of the content, such as words, phrases, sentences, metadata (e.g., size of emails, date created, and so on), images, and how and if such elements are interrelated, for example. “Meaning” of content may be how one would interpret the content in a natural language. For example, the meaning of content may include a request for a person to perform a task. In another example, the meaning of content may include a description of the task, a time by when the task should be completed, background information about the task, and so on. In another example, the meaning of content may include properties of desired action(s) or task(s) that may be extracted or inferred based, at least in part, on a learned model; Paragraph 0068, FIG. 6 is a block diagram of a machine learning model 600, according to various examples. 
Machine learning model 600 may be the same as or similar to machine learning model 502 shown in FIG. 5. Machine learning model 600 includes any of a number of functional blocks, such as random forest block 602, support vector machine block 604, and graphical models block 606. Random forest block 602 may include an ensemble learning method for classification that operates by constructing decision trees at training time. Random forest block 602 may output the class that is the mode of the classes output by individual trees, for example; As stated in Paragraph 0223 of Applicant’s specification, task summary may include a reminder or notification regarding a task. Examiner interprets the reminders or to-do list based on the priority of the task (e.g., high priority) as the task summary data that includes a subset of the tasks. Also, Examiner interprets high priority as the threshold).
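For illustration only, the classification rule quoted above from Jothilingam et al., Paragraph 0068 (the random forest outputs the class that is the mode of the classes output by the individual trees) could be sketched as follows. This is a minimal sketch; the "trees" are stand-in callables rather than a trained ensemble, and all names are hypothetical.

```python
from collections import Counter

# Illustrative sketch of the random forest output rule in Jothilingam
# et al., Paragraph 0068: return the mode of the individual tree votes.
def forest_predict(trees, features):
    votes = [tree(features) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# Three toy "trees" voting on a priority label for a task feature vector.
trees = [
    lambda f: "high" if f["deadline_days"] < 7 else "low",
    lambda f: "high" if f["contains_need"] else "low",
    lambda f: "low",
]
priority = forest_predict(trees, {"deadline_days": 3, "contains_need": True})
# two of three trees vote "high", so the mode is "high"
```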
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using a machine learning for processing task parameters to generate a subset of the tasks (e.g., based on a learned priority of the task) of the invention of Sim et al. and Jothilingam et al. to further specify wherein the priority is determined using a classifier of the invention of Jothilingam et al. because doing so would allow the method to use a priority of the task for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities (see Jothilingam et al., Paragraph 0052). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
The combination of Sim et al. and Jothilingam et al. discloses using a machine learning model to generate a priority score associated with the tasks based on task parameters (e.g., prioritize tasks as high or low priority based on previous user actions, learned preferences, and/or task deadline). Although Examiner interprets tasks with high priority as the threshold, the combination of Sim et al. and Jothilingam et al. does not specifically disclose wherein the subset of the tasks includes priority scores that exceed a task threshold.
However, Fang et al. discloses wherein the subset of the tasks includes priority scores that exceed a task threshold (Figure 6, item 602, Server; Paragraph 0121, Additionally, the fashion recommendation system can provide one or more of the ranked items to a user. For example, the fashion recommendation system selects a threshold number of top items to present to a user via a client device associated with the user. In another example, the fashion recommendation system provides ranked items to a user that are above a threshold preference prediction score. As described above, the fashion recommendation system can provide one or more ranked items (e.g., personalized items) to the user upon the user's request or in response to a user's interaction with related items).
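For illustration only, the thresholding described in Fang et al., Paragraph 0121 (providing items whose preference prediction score is above a threshold) could be sketched as follows. This is a minimal sketch; the names and values are hypothetical and do not appear in the reference.

```python
# Illustrative sketch of the score thresholding in Fang et al.,
# Paragraph 0121: keep only tasks whose score exceeds a threshold.
def subset_above_threshold(scored_tasks, threshold):
    return [t for t in scored_tasks if t["score"] > threshold]

scored = [
    {"task": "renew passport", "score": 0.91},
    {"task": "buy snacks", "score": 0.40},
    {"task": "book hotel", "score": 0.78},
]
subset = subset_above_threshold(scored, 0.75)
# keeps "renew passport" and "book hotel"
```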
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using a machine learning for processing task parameters to generate a subset of the tasks (e.g., based on a learned priority of the task) of the invention of Sim et al. and Jothilingam et al. to further specify a threshold used to generate the subset of tasks of the invention of Fang et al. because doing so would allow the method to provide ranked items to a user that are above a threshold preference prediction score (see Fang et al., Paragraph 0121). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Although Sim et al. discloses automatically booking/purchasing a service and/or inputting a credit card number (see Paragraph 0032 & 0038), the combination of Sim et al., Jothilingam et al., and Fang et al. does not specifically disclose wherein the sensitive information provided by the user is obscured (e.g., obscuring the credit card number).
However, Raleigh et al. discloses establishing, by the task-facilitation server, a secure communication channel between an … agent and the user after receiving the interaction data; obscuring, by the task-facilitation server, a portion of the user model that corresponds to user-sensitive data (Paragraph 0701, FIG. 141 illustrates a representative screen 10730 that details a particular payment means (e.g., credit card information). The user of the mobile wireless communication device 100 can input, review and update information related to the particular payment means through the UI 101 of the mobile wireless communication device 100. Some sensitive information, e.g., portions of or all digits of a credit card number, security codes, and expiration dates, can be obscured when presented through the UI 101 to provide added security);
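For illustration only, the masking described in Raleigh et al., Paragraph 0701 (obscuring portions of or all digits of a credit card number before presentation through the UI) could be sketched as follows. This is a minimal sketch; the function name and parameters are hypothetical and do not appear in the reference.

```python
# Illustrative sketch of obscuring sensitive payment digits
# (Raleigh et al., Paragraph 0701). Names are hypothetical.
def obscure_card_number(number, visible=4):
    # Replace all but the last `visible` digits with asterisks.
    digits = number.replace(" ", "")
    return "*" * (len(digits) - visible) + digits[-visible:]

masked = obscure_card_number("4111 1111 1111 1234")
# "************1234"
```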
and transmitting, by the task-facilitation server, the obscured user model and progress-status data of the task, wherein as the progress-status data and the obscured user model are received, the secure communication channel dynamically generates status messages associated with the task in real-time (Paragraph 0491, User approval can be acquired, for example, by a simple click operation or require a secure password, key and/or biometric response from the user. Upon user approval, the billing agent 1695 generates a billing approval and sends it to the transaction server 134, the transaction server 134 completes the transaction and then sends a bill to the billing agent 1695. The billing agent 1695 optionally sends a confirmation to the transaction server 134 and sends the bill to the billing server 4630).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using a machine learning for processing task parameters to generate a subset of the tasks based on priority scores (e.g., wherein one of the tasks may include a transaction for booking/purchasing a ticket) of the invention of Sim et al., Jothilingam et al., and Fang et al. to further specify wherein the transaction is performed in a secure communication channel between an agent and the user of the invention of Raleigh et al. because doing so would allow the method to obscure portions of or all digits of a credit card number when presented through the user interface, which provides an extra layer of security (see Raleigh et al., Paragraph 0701). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 11 (Currently Amended), Sim et al. discloses a computing system comprising: one or more data processors; and a non-transitory computer-readable storage medium containing instructions which, when executed by the one or more data processors, cause the one or more data processors to perform operations including (Paragraph 0049, FIG. 7 is a block diagram illustrating physical components (e.g., hardware) of a computing device 700 with which aspects of the disclosure may be practiced. The computing device components described below may be suitable for the computing devices described above. In a basic configuration, the computing device 700 may include at least one processing unit 702 and a system memory 704. Depending on the configuration and type of computing device, the system memory 704 may comprise, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read-only memory), flash memory, or any combination of such memories. The system memory 704 may include an operating system 705 and one or more program tools 706 suitable for performing the various aspects disclosed herein. The operating system 705, for example, may be suitable for controlling the operation of the computing device 700; Paragraph 0053, The term computer readable media as used herein may include computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program tools):
determining, by a task-facilitation server, task data that identifies tasks associated with a user, wherein the task data includes task parameters that represent one or more characteristics associated with the tasks (Figure 6, item 602, Server; Paragraph 0017, The present disclosure relates to systems and methods for an interactive, intelligent hub built around the completion of a task. This hub brings together resources, information, suggested steps, and other automated assistance to facilitate the completion of the task. AI-based assistance may indicate which steps can be completed by automated processes, and dispatch those processes, or suggest resources to assist in the completion of other steps; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations; Paragraph 0029, The model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc; Paragraph 0035, FIG. 3 is a communication flow illustrating a method of semi-autonomously managing a subtask in accordance with aspects of the present disclosure. User 302 sends a task 310 to task hub 304, such as “take a business trip” or “plan a wedding reception.” In aspects, the user may send this task through a semi-autonomous task application on a user device or directly to the task hub web service through the user device. 
The task hub 304 uses the task 310 to determine what subtasks and other input 312 (such as input 204 in FIG. 2) are needed for task 310. In aspects, the tasks/input 312 is received from the knowledge base/resources 306. The knowledge base/resources 306 may include any source of information available to the task hub 304, including without limitation, the task archives, the Internet, the task hub model, user resources, such as user accounts, and user preferences. In other aspects, the user 302 may send some or all of the subtasks and other input 311 to the task hub 304; As stated in Paragraph 0061 of Applicant’s specification, task parameters may include member preferences or timeframe for completion);
accessing, by the task-facilitation server, a user model of the user, wherein the user model is updated based on historic user activity, and wherein the user model comprises attributes associated with the user (Figure 6, item 602, Server; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; As stated in Paragraph 0097 of Applicant’s specification, attributes associated with the user may include member feedback corresponding to presented tasks/proposals);
receiving, by the task-facilitation server, a message through an interface associated with a computing device, wherein the message includes a request to generate task summary data associated with the tasks (Figure 6, item 602, Server; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask. Scroll 512 allows users to see all of the subtasks that are part of the active task in pane 502. Information that is underlined is hyperlinked to additional detail. For example, clicking on the “A” in the subtask list 504 brings up further information about subtask A including without limitation the subtask definitions described in connection with FIG. 2. In aspects, the user may edit these definitions through this view (not shown). The status 506 provides the user with information regarding the state of the task. The owner 508 indicates who is responsible for completing the subtask. Subtasks A and B indicate that the “hub” is the owner of the subtask meaning that they will be performed automatically by the task hub without user intervention. In aspects, the hub determines the owner of each subtask but a user may change the owner of the subtask by selecting the underlined owner name for each subtask entry. The info needed 510 indicates whether the task hub needs information or resources to completed. 
For example, task B shows as in progress, that it is owned by the user (e.g., being performed by the user), and no information is needed to complete the task. In aspects the task hub keeps track of user actions to keep the state of the tasks up to date. For example, the user might keep the task hub informed as to status by copying the task hub on emails sent in performance of a subtask. Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task. The user may select the “Yes” hyperlink to see what information is needed and to provide the necessary information so that the hub may complete the subtask. Subtask D has been assigned to Delegate 1 which may be any other user who has access to the task hub; As stated in Figure 9 of Applicant’s specification, task summary may include progress/status of the task);
processing, by the task-facilitation server, the task parameters and the user model through a machine-learning model to generate a set of priority … associated with the tasks, wherein the machine-learning model is configured to [learn] between the task data and the set of priority … (Figure 6, item 602, Server; Paragraph 0021, The task hub 102 may provide recommendations to the user 104 depending on the content of the task, user status, user feedback and personalized needs for information. For example, based on the search results, the task hub 102 may recommend to the user 104 to add a subtask of hiring a band; Paragraph 0026, Task hub model 202, such as task hub model 156 in FIG. 1, is shown as part of system 200. In aspects, the task hub model is a machine learning model, such as neural network and may be a recurrent neural network, a convolutional neural network, a transformer network machine learning model, and/or a multi-task neural network. Information from the user 206 is fed into the input 204. In aspects, the user may provide information 206 regarding the task. In other aspects, the user may provide information 206 regarding some or all of the subtasks and may even provide detail regarding some of sub-actions of the subtasks; Paragraph 0029, The order of subtasks 222 provides the task agent in the task hub with the order that each subtask should be performed according to the input 204 that was fed into the task hub model 202. To determine the order of subtasks 222, the model first identifies any explicit dependencies between subtasks 216A and 216B. 
Then, the model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc.; Paragraph 0042, At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that the machine-learning model can rank/prioritize the order of the subtasks based on task parameters such as user behavior, user preference, and/or task deadline (e.g., pick up flowers 48 hours in advance)) and is trained based on data representing previous interactions with the user or with other users (Paragraph 0042, At operation 402, the task hub receives a task from a user, a task application, a calendar application, or any other type of application capable of performing or assigning tasks. At operation 404, a list of subtasks is generated for the task as has been described herein with reference to FIGS. 2 and 3.
At operation 405, the subtasks are placed in order that they should be completed based on the model as described in connection with FIG. 2. At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner interprets the training set with previous user actions or preferences as the data representing previous interactions with the user);
generating, by the task-facilitation server, the task summary data that includes a subset, and wherein the subset of the tasks includes priority … (Figure 6, item 602, Server; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations. The task hub 102 may provide selection of an action to take in support of the user (e.g., provide clarification, search web, show video, etc.) and selection a device(s) to support task completion. Considerations for task assistance include: the type of the task, user preferences, available devices and resources, and potential automatic breakdown of the task into steps; Paragraph 0029, The order of subtasks 222 provides the task agent in the task hub with the order that each subtask should be performed according to the input 204 that was fed into the task hub model 202. To determine the order of subtasks 222, the model first identifies any explicit dependencies between subtasks 216A and 216B. Then, the model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc.; Paragraph 0035, User 302 sends a task 310 to task hub 304, such as “take a business trip” or “plan a wedding reception.” In aspects, the user may send this task through a semi-autonomous task application on a user device or directly to the task hub web service through the user device. 
The task hub 304 uses the task 310 to determine what subtasks and other input 312 (such as input 204 in FIG. 2) are needed for task 310. In aspects, the tasks/input 312 is received from the knowledge base/resources 306. The knowledge base/resources 306 may include any source of information available to the task hub 304, including without limitation, the task archives, the Internet, the task hub model, user resources, such as user accounts, and user preferences; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that Sim et al. provides recommendations/assistance for a task based on user behavior, user preference, and/or task deadline);
transmitting, by the task-facilitation server, the task summary data, wherein when the task summary data is received (Figure 6, item 602, Server; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask. For example, task B shows as in progress, that it is owned by the user (e.g., being performed by the user), and no information is needed to complete the task. Scroll 512 allows users to see all of the subtasks that are part of the active task in pane 502. Information that is underlined is hyperlinked to additional detail. For example, clicking on the “A” in the subtask list 504 brings up further information about subtask A including without limitation the subtask definitions described in connection with FIG. 2. In aspects, the user may edit these definitions through this view (not shown). In aspects the task hub keeps track of user actions to keep the state of the tasks up to date. For example, the user might keep the task hub informed as to status by copying the task hub on emails sent in performance of a subtask. Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task; Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. 
In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task), the chat-response message is dynamically displayed on the interface, wherein the chat-response message includes interactive user-interface elements that represent the subset of the tasks (Paragraph 0037, In aspects, the task hub 304 may need information from the user 302 to complete the subtask A. The task hub 304 issues an information request 318 to the user 302 to complete subtask A. In examples, the task hub may issue the request by sending a message to the user. Alternatively or additionally, the task hub 304 may flag the missing information so that the user sees it when the user accesses the task hub 304. The task hub 304 may already have known about this information needed from the task hub model 313 output. Additionally or alternatively, the task hub 304 might have determined it needed this information to automate the task from the information it received in information response 316. For example, the task hub 304 might have known it needed the departure and return dates for the airfare from the user 302 and send an information request 318 to the user for this information. The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice; Paragraph 0056, The system 802 can implemented as a “smart phone” capable of running one or more applications (e.g., browser, e-mail, calendaring, contact managers, messaging clients, games, and media clients/players));
receiving, by the task-facilitation server, interaction data associated with an interactive user-interface element that corresponds to a task of the subset of the tasks (Paragraph 0037, In aspects, the task hub 304 may need information from the user 302 to complete the subtask A. The task hub 304 issues an information request 318 to the user 302 to complete subtask A. In examples, the task hub may issue the request by sending a message to the user. Alternatively or additionally, the task hub 304 may flag the missing information so that the user sees it when the user accesses the task hub 304. The task hub 304 may already have known about this information needed from the task hub model 313 output. Additionally or alternatively, the task hub 304 might have determined it needed this information to automate the task from the information it received in information response 316. For example, the task hub 304 might have known it needed the departure and return dates for the airfare from the user 302 and send an information request 318 to the user for this information. The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice; Examiner interprets “providing the departure and return dates for the airfare” as the “interaction data”);
updating, by the task-facilitation server, the machine-learning model with the interaction data (Paragraph 0042, At operation 402, the task hub receives a task from a user, a task application, a calendar application, or any other type of application capable of performing or assigning tasks. At operation 404, a list of subtasks is generated for the task as has been described herein with reference to FIGS. 2 and 3. At operation 405, the subtasks are placed in order that they should be completed based on the model as described in connection with FIG. 2. At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner interprets the training set with previous user actions or preferences as the data representing previous interactions with the user);
establishing, by the task-facilitation server, a … communication channel between an automated agent and the user after receiving the interaction data; … (Paragraph 0032, Resources might comprise a list of preferred caterers, whereas to execute a subtask with a single caterer the task hub model 202 might need more specific slot information such as menu choice and credit card number; Paragraph 0038, After receiving user response 320, the task hub 304 may automatically perform the subtask (not shown). For example, the task hub 304 may automatically purchase the airline tickets from the airline website based on the subtasks/input 311, subtasks/input 312, the info response 316, and/or the info response 320. Or the task hub may automatically place an order for the type of flower selected by the user in the user response 320 from a vendor);
and transmitting, by the task-facilitation server, … and progress-status data of the task, wherein as the progress-status data … are received, the … communication channel dynamically generates status messages associated with the task in real-time (Paragraph 0040, Regardless of whether additional action is required by the user 302 or not, the third party 308 sends confirmation 332 that the subtask is complete to the task hub 304. The task hub 304 then marks subtask A as complete 334. Optionally, the task hub 304 may send confirmation 336 to the user 302 that the subtask A is complete. The task hub may store the all information of the completed subtask in the task archives for later use (not shown)).
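The ordering scheme Sim et al. describes in Paragraph 0029 (ranking subtasks by resource availability, needed lead time, and grouping by proximity/relevance) can be illustrated with a minimal sketch. The field names and sort policy below are hypothetical, offered only to show one way such a ranking could operate, not as a characterization of the reference's actual implementation:

```python
# Hypothetical subtask records carrying the parameters Sim et al.
# (Paragraph 0029) says the model may rank on: resource availability,
# needed lead time, and a location/grouping key.
subtasks = [
    {"name": "book caterer",   "resources_ready": True,  "lead_days": 90, "group": "vendors"},
    {"name": "pick up flowers","resources_ready": True,  "lead_days": 2,  "group": "errands"},
    {"name": "order invites",  "resources_ready": False, "lead_days": 30, "group": "vendors"},
]

def rank_subtasks(tasks):
    """Order subtasks: tasks with resources available first, longer
    lead times started earlier, ties grouped by proximity/relevance."""
    return sorted(
        tasks,
        key=lambda t: (
            not t["resources_ready"],  # ready subtasks come first
            -t["lead_days"],           # longer lead time -> rank earlier
            t["group"],                # group related subtasks together
        ),
    )

ordered = [t["name"] for t in rank_subtasks(subtasks)]
print(ordered)  # caterer (months of lead time) ranks before flowers (48-hour lead)
```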
Although Sim et al. discloses using a machine learning model to generate a priority associated with the tasks based on task parameters (e.g., prioritize tasks based on previous user actions, learned preferences, and/or task deadline), Sim et al. does not specifically disclose wherein the priority is determined using a classifier.
However, Jothilingam et al. discloses processing, by the task-facilitation server, the task parameters and the user model through a machine-learning model to generate a set of priority scores associated with the tasks, wherein the machine-learning model is configured to determine a correlation between the task data and the set of priority scores (Paragraph 0026, In some examples, a computing system may learn to improve predictive models and summarization used for extracting tasks and categorizing or prioritizing the tasks using historical performance of a user for particular types of tasks. For example, a user may tend to demonstrate similar levels of performance for multiple tasks that are of a particular task type. Based, at least in part, on such historical data, which may be quantified and/or stored by the computer system and subsequently applied to predictive models (e.g., machine learning models), for example, efficient organization of resources (e.g., time and hardware) may be achieved; Paragraph 0052, In some examples, a system performing task extraction process 304 may determine a measure of importance of a task, where a low-importance task is one for which the user would consider to be relatively low priority (e.g., low level of urgency) and a high-importance task is one for which the user would consider to be relatively high priority (e.g., high level of urgency). Importance of a task may be useful for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities. Determining importance of a task may be based, at least in part, on history of events of the user (e.g., follow-through and performance of past tasks, and so on) and/or history of events of the other user and/or personal information (e.g., age, sex, age, occupation, frequent traveler, and so on) of the user or other user. For example, the system may query such histories. 
Determining importance of a task may also be based, at least in part, on key words or terms in text 306. For example, “need” generally has implications of a required action, so that importance of a task may be relatively strong. On the other hand, in another example that involves a task of meeting a friend for tea, such an activity is generally optional, and such a task may thus be assigned a relatively low measure of importance. If such a task of meeting a friend is associated with a job (e.g., occupation) of the user, however, then such a task may be assigned a relatively high measure of importance. The system may weigh a number of such scenarios and factors to determine the importance of a task. For example, the system may determine importance of a task in a message based, at least in part, on content related to the electronic message; Paragraph 0058, The task operations module may analyze the content to determine one or more meanings of the content. Analyzing content may be performed by any of a number of techniques to determine meanings of elements of the content, such as words, phrases, sentences, metadata (e.g., size of emails, date created, and so on), images, and how and if such elements are interrelated, for example. “Meaning” of content may be how one would interpret the content in a natural language. For example, the meaning of content may include a request for a person to perform a task. In another example, the meaning of content may include a description of the task, a time by when the task should be completed, background information about the task, and so on. In another example, the meaning of content may include properties of desired action(s) or task(s) that may be extracted or inferred based, at least in part, on a learned model; Paragraph 0068, FIG. 6 is a block diagram of a machine learning model 600, according to various examples. Machine learning model 600 may be the same as or similar to machine learning model 502 shown in FIG. 5. 
Machine learning model 600 includes any of a number of functional blocks, such as random forest block 602, support vector machine block 604, and graphical models block 606. Random forest block 602 may include an ensemble learning method for classification that operates by constructing decision trees at training time. Random forest block 602 may output the class that is the mode of the classes output by individual trees, for example; As stated in Paragraph 0223 of Applicant’s specification, task summary may include a reminder or notification regarding a task. Examiner interprets the reminders or to-do list based on the priority of the task as the task summary data that includes a subset of the tasks) and is trained based on data representing previous interactions with the user or with other users (Paragraph 0024, In some examples, a computing system may construct predictive models for identifying and extracting tasks and related information using machine learning procedures that operate on training sets of annotated corpora of sentences or messages (e.g., machine learning features). In still other examples, machine learning may utilize task execution tracking for a user. 
Such tracking may involve: user behavior and interests derived from an initial questionnaire and applying the behavior and interests to the way the user executes the task; recognition of intent of the user for the task; whether the user is performing a particular task type in a particular way based on the end goal of that task; pattern identification; determining how the user is faring on a particular time of a year, month, week, day for a particular task type (for example, if user is on a holiday, the user may only want to look at those tasks which will be more refreshing and lightweight); determining the external factors that influence the user's task initiation, execution, and completion (for example, such factors may be family commitments, health issues, vacation, long business trip, and so on); determining whether the user has a behavior style before, during, and after a task execution; determining whether the user is picking up the tasks on time; determining whether the user is completing the tasks on time; determining whether the user is postponing the tasks relatively frequently; determining whether there are any particular type of tasks that the user postpones; determining whether the user completes any high priority tasks; determining whether the user postpones tasks regardless of the type of the tasks (e.g., adhoc versus priority tasks); determining whether the user consciously responds to fly-out reminders for updating status of tasks; determining rate at which the user interacts with task updates frequently to update the task on time; determining rate at which the user postpones task updates; determining rate at which the user clears task lists by immediately picking up the next task as soon as the user is done with a task; determining a self-discipline trait of the user from the user's task follow-ups (for example, determining if the user sets up a meeting request, dies the user diligently sending minutes of the meeting to close the particular task); 
determining how the user behaves while executing a particular type of task (for example, the user may take twice as long to perform coding task as compared to design tasks); and tracking the user task execution sequence, just to name some examples);
generating, by the task-facilitation server, the task summary data that includes a subset of the tasks, wherein the subset of the tasks includes priority scores that exceed a task threshold, and wherein generating the task summary data includes determining a priority for each task of the subset of tasks using a classifier, and wherein the classifier outputs the priority for the task based on the task data and the user model (Paragraph 0026, In some examples, a computing system may learn to improve predictive models and summarization used for extracting tasks and categorizing or prioritizing the tasks using historical performance of a user for particular types of tasks. For example, a user may tend to demonstrate similar levels of performance for multiple tasks that are of a particular task type. Based, at least in part, on such historical data, which may be quantified and/or stored by the computer system and subsequently applied to predictive models (e.g., machine learning models), for example, efficient organization of resources (e.g., time and hardware) may be achieved; Paragraph 0052, In some examples, a system performing task extraction process 304 may determine a measure of importance of a task, where a low-importance task is one for which the user would consider to be relatively low priority (e.g., low level of urgency) and a high-importance task is one for which the user would consider to be relatively high priority (e.g., high level of urgency). Importance of a task may be useful for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities. Determining importance of a task may be based, at least in part, on history of events of the user (e.g., follow-through and performance of past tasks, and so on) and/or history of events of the other user and/or personal information (e.g., age, sex, age, occupation, frequent traveler, and so on) of the user or other user. 
For example, the system may query such histories. Determining importance of a task may also be based, at least in part, on key words or terms in text 306. For example, “need” generally has implications of a required action, so that importance of a task may be relatively strong. On the other hand, in another example that involves a task of meeting a friend for tea, such an activity is generally optional, and such a task may thus be assigned a relatively low measure of importance. If such a task of meeting a friend is associated with a job (e.g., occupation) of the user, however, then such a task may be assigned a relatively high measure of importance. The system may weigh a number of such scenarios and factors to determine the importance of a task. For example, the system may determine importance of a task in a message based, at least in part, on content related to the electronic message; Paragraph 0058, The task operations module may analyze the content to determine one or more meanings of the content. Analyzing content may be performed by any of a number of techniques to determine meanings of elements of the content, such as words, phrases, sentences, metadata (e.g., size of emails, date created, and so on), images, and how and if such elements are interrelated, for example. “Meaning” of content may be how one would interpret the content in a natural language. For example, the meaning of content may include a request for a person to perform a task. In another example, the meaning of content may include a description of the task, a time by when the task should be completed, background information about the task, and so on. In another example, the meaning of content may include properties of desired action(s) or task(s) that may be extracted or inferred based, at least in part, on a learned model; Paragraph 0068, FIG. 6 is a block diagram of a machine learning model 600, according to various examples. 
Machine learning model 600 may be the same as or similar to machine learning model 502 shown in FIG. 5. Machine learning model 600 includes any of a number of functional blocks, such as random forest block 602, support vector machine block 604, and graphical models block 606. Random forest block 602 may include an ensemble learning method for classification that operates by constructing decision trees at training time. Random forest block 602 may output the class that is the mode of the classes output by individual trees, for example; As stated in Paragraph 0223 of Applicant’s specification, a task summary may include a reminder or notification regarding a task. Examiner interprets the reminders or to-do lists generated based on the priority of the task (e.g., high priority) as the task summary data that includes a subset of the tasks. Also, Examiner interprets the high-priority designation as the task threshold).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using a machine-learning model for processing task parameters to generate a subset of the tasks (e.g., based on a learned priority of the task) of the invention of Sim et al. and Jothilingam et al. to further specify wherein the priority is determined using a classifier of the invention of Jothilingam et al. because doing so would allow the method to use a priority of the task for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities (see Jothilingam et al., Paragraph 0052). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
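For illustration only (no code is of record in this application; the feature names and priority labels below are hypothetical), the mode-of-the-trees voting that Jothilingam et al. describes in Paragraph 0068 can be sketched as:

```python
from collections import Counter

# Hypothetical per-tree classifiers: each maps task features to a priority class.
def tree_deadline(task):
    return "high" if task["hours_to_deadline"] < 48 else "low"

def tree_keywords(task):
    # Cf. Jothilingam et al., Paragraph 0052: "need" implies a required action.
    return "high" if "need" in task["text"] else "low"

def tree_history(task):
    return "high" if task["user_followed_through_before"] else "low"

def random_forest_priority(task, trees):
    """Output the class that is the mode of the classes output by the
    individual trees (cf. Jothilingam et al., Paragraph 0068)."""
    votes = [tree(task) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

task = {"hours_to_deadline": 24, "text": "need to book caterer",
        "user_followed_through_before": False}
print(random_forest_priority(task, [tree_deadline, tree_keywords, tree_history]))
# "high" — two of the three trees vote high
```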
The combination of Sim et al. and Jothilingam et al. discloses using a machine learning model to generate a priority score associated with the tasks based on task parameters (e.g., prioritize tasks as high or low priority based on previous user actions, learned preferences, and/or task deadline). Although Examiner interprets the high-priority designation as the threshold, the combination of Sim et al. and Jothilingam et al. does not specifically disclose wherein the subset of the tasks includes priority scores that exceed a task threshold.
However, Fang et al. discloses wherein the subset of the tasks includes priority scores that exceed a task threshold (Figure 6, item 602, Server; Paragraph 0121, Additionally, the fashion recommendation system can provide one or more of the ranked items to a user. For example, the fashion recommendation system selects a threshold number of top items to present to a user via a client device associated with the user. In another example, the fashion recommendation system provides ranked items to a user that are above a threshold preference prediction score. As described above, the fashion recommendation system can provide one or more ranked items (e.g., personalized items) to the user upon the user's request or in response to a user's interaction with related items).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using a machine-learning model for processing task parameters to generate a subset of the tasks (e.g., based on a learned priority of the task) of the invention of Sim et al. and Jothilingam et al. to further specify a threshold used to generate the subset of tasks of the invention of Fang et al. because doing so would allow the method to provide ranked items to a user that are above a threshold preference prediction score (see Fang et al., Paragraph 0121). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
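For illustration only (hypothetical task names and scores; not part of the record), the threshold filtering that Fang et al. describes in Paragraph 0121 amounts to:

```python
def task_summary_subset(tasks, task_threshold):
    """Keep only tasks whose priority score exceeds the threshold
    (cf. Fang et al., Paragraph 0121: items above a threshold
    preference prediction score are provided to the user)."""
    return [t for t in tasks if t["priority_score"] > task_threshold]

tasks = [
    {"name": "book caterer", "priority_score": 0.92},
    {"name": "choose flowers", "priority_score": 0.41},
    {"name": "purchase airfare", "priority_score": 0.78},
]
summary = task_summary_subset(tasks, task_threshold=0.5)
print([t["name"] for t in summary])  # ['book caterer', 'purchase airfare']
```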
Although Sim et al. discloses automatically booking/purchasing a service and/or inputting a credit card number (see Paragraph 0032 & 0038), the combination of Sim et al., Jothilingam et al., and Fang et al. does not specifically disclose wherein the sensitive information provided by the user is obscured (e.g., obscuring the credit card number).
However, Raleigh et al. discloses establishing, by the task-facilitation server, a secure communication channel between an … agent and the user after receiving the interaction data; obscuring, by the task-facilitation server, a portion of the user model that corresponds to user-sensitive data (Paragraph 0701, FIG. 141 illustrates a representative screen 10730 that details a particular payment means (e.g., credit card information). The user of the mobile wireless communication device 100 can input, review and update information related to the particular payment means through the UI 101 of the mobile wireless communication device 100. Some sensitive information, e.g., portions of or all digits of a credit card number, security codes, and expiration dates, can be obscured when presented through the UI 101 to provide added security);
and transmitting, by the task-facilitation server, the obscured user model and progress-status data of the task, wherein as the progress-status data and the obscured user model are received, the secure communication channel dynamically generates status messages associated with the task in real-time (Paragraph 0491, User approval can be acquired, for example, by a simple click operation or require a secure password, key and/or biometric response from the user. Upon user approval, the billing agent 1695 generates a billing approval and sends it to the transaction server 134, the transaction server 134 completes the transaction and then sends a bill to the billing agent 1695. The billing agent 1695 optionally sends a confirmation to the transaction server 134 and sends the bill to the billing server 4630).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using a machine-learning model for processing task parameters to generate a subset of the tasks based on priority scores (e.g., wherein one of the tasks may include a transaction for booking/purchasing a ticket) of the invention of Sim et al., Jothilingam et al., and Fang et al. to further specify wherein the transaction is performed in a secure communication channel between an agent and the user of the invention of Raleigh et al. because doing so would allow the method to obscure portions of or all digits of a credit card number when presented through the user interface, which provides an extra layer of security (see Raleigh et al., Paragraph 0701). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 16 (Currently Amended), Sim et al. discloses a computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause a computing system to perform operations including (Paragraph 0049, FIG. 7 is a block diagram illustrating physical components (e.g., hardware) of a computing device 700 with which aspects of the disclosure may be practiced. The computing device components described below may be suitable for the computing devices described above. In a basic configuration, the computing device 700 may include at least one processing unit 702 and a system memory 704. Depending on the configuration and type of computing device, the system memory 704 may comprise, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read-only memory), flash memory, or any combination of such memories. The system memory 704 may include an operating system 705 and one or more program tools 706 suitable for performing the various aspects disclosed herein such. The operating system 705, for example, may be suitable for controlling the operation of the computing device 700; Paragraph 0053, The term computer readable media as used herein may include computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program tools):
determining, by a task-facilitation server, task data that identifies tasks associated with a user, wherein the task data includes task parameters that represent one or more characteristics associated with the tasks (Figure 6, item 602, Server; Paragraph 0017, The present disclosure relates to systems and methods for an interactive, intelligent hub built around the completion of a task. This hub brings together resources, information, suggested steps, and other automated assistance to facilitate the completion of the task. AI-based assistance may indicate which steps can be completed by automated processes, and dispatch those processes, or suggest resources to assist in the completion of other steps; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations; Paragraph 0029, The model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc; Paragraph 0035, FIG. 3 is a communication flow illustrating a method of semi-autonomously managing a subtask in accordance with aspects of the present disclosure. User 302 sends a task 310 to task hub 304, such as “take a business trip” or “plan a wedding reception.” In aspects, the user may send this task through a semi-autonomous task application on a user device or directly to the task hub web service through the user device. 
The task hub 304 uses the task 310 to determine what subtasks and other input 312 (such as input 204 in FIG. 2) are needed for task 310. In aspects, the tasks/input 312 is received from the knowledge base/resources 306. The knowledge base/resources 306 may include any source of information available to the task hub 304, including without limitation, the task archives, the Internet, the task hub model, user resources, such as user accounts, and user preferences. In other aspects, the user 302 may send some or all of the subtasks and other input 311 to the task hub 304; As stated in Paragraph 0061 of Applicant’s specification, task parameters may include member preferences or timeframe for completion);
accessing, by the task-facilitation server, a user model of the user, wherein the user model is updated based on historic user activity and wherein the user model comprises attributes associated with the user (Figure 6, item 602, Server; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; As stated in Paragraph 0097 of Applicant’s specification, attributes associated with the user may include member feedback corresponding to presented tasks/proposals);
receiving, by the task-facilitation server, a message through an interface associated with a computing device, wherein the message includes a request to generate task summary data associated with the tasks (Figure 6, item 602, Server; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask. Scroll 512 allows users to see all of the subtasks that are part of the active task in pane 502. Information that is underlined is hyperlinked to additional detail. For example, clicking on the “A” in the subtask list 504 brings up further information about subtask A including without limitation the subtask definitions described in connection with FIG. 2. In aspects, the user may edit these definitions through this view (not shown). The status 506 provides the user with information regarding the state of the task. The owner 508 indicates who is responsible for completing the subtask. Subtasks A and B indicate that the “hub” is the owner of the subtask meaning that they will be performed automatically by the task hub without user intervention. In aspects, the hub determines the owner of each subtask but a user may change the owner of the subtask by selecting the underlined owner name for each subtask entry. The info needed 510 indicates whether the task hub needs information or resources to completed. 
For example, task B shows as in progress, that it is owned by the user (e.g., being performed by the user), and no information is needed to complete the task. In aspects the task hub keeps track of user actions to keep the state of the tasks up to date. For example, the user might keep the task hub informed as to status by copying the task hub on emails sent in performance of a subtask. Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task. The user may select the “Yes” hyperlink to see what information is needed and to provide the necessary information so that the hub may complete the subtask. Subtask D has been assigned to Delegate 1 which may be any other user who has access to the task hub; As stated in Figure 9 of Applicant’s specification, task summary may include progress/status of the task);
processing, by the task-facilitation server, the task parameters and the user model through a machine-learning model to generate a set of priority … associated with the tasks, wherein the machine-learning model is configured to [learn] between the task data and the set of priority … (Figure 6, item 602, Server; Paragraph 0021, The task hub 102 may provide recommendations to the user 104 depending on the content of the task, user status, user feedback and personalized needs for information. For example, based on the search results, the task hub 102 may recommend to the user 104 to add a subtask of hiring a band; Paragraph 0026, Task hub model 202, such as task hub model 156 in FIG. 1, is shown as part of system 200. In aspects, the task hub model is a machine learning model, such as neural network and may be a recurrent neural network, a convolutional neural network, a transformer network machine learning model, and/or a multi-task neural network. Information from the user 206 is fed into the input 204. In aspects, the user may provide information 206 regarding the task. In other aspects, the user may provide information 206 regarding some or all of the subtasks and may even provide detail regarding some of sub-actions of the subtasks; Paragraph 0029, The order of subtasks 222 provides the task agent in the task hub with the order that each subtask should be performed according to the input 204 that was fed into the task hub model 202. To determine the order of subtasks 222, the model first identifies any explicit dependencies between subtasks 216A and 216B. 
Then, the model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc.; Paragraph 0042, At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that the machine learning model can rank/prioritize the order of the subtasks based on task parameters such as user behavior, user preference, and/or task deadline (e.g., pick up flowers 48 hours in advance)) and is trained based on data representing previous interactions with the user or with other users (Paragraph 0042, At operation 402, the task hub receives a task from a user, a task application, a calendar application, or any other type of application capable of performing or assigning tasks. At operation 404, a list of subtasks is generated for the task as has been described herein with reference to FIGS. 2 and 3.
At operation 405, the subtasks are placed in order that they should be completed based on the model as described in connection with FIG. 2. At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner interprets the training set with previous user actions or preferences as the data representing previous interactions with the user);
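For illustration only (hypothetical subtask names and field names; not part of the record), the ranking criteria of Sim et al., Paragraph 0029 — resource availability first, then needed lead time — can be sketched as:

```python
def order_subtasks(subtasks):
    """Order subtasks for execution, loosely following Sim et al.,
    Paragraph 0029: subtasks whose resources are available come first;
    among those, longer lead times (e.g., book a caterer months in
    advance) sort earlier than short ones (e.g., pick up flowers)."""
    return sorted(subtasks,
                  key=lambda s: (not s["resources_available"],
                                 -s["lead_time_days"]))

subtasks = [
    {"name": "pick up flowers", "lead_time_days": 2, "resources_available": True},
    {"name": "book caterer", "lead_time_days": 90, "resources_available": True},
    {"name": "hire band", "lead_time_days": 60, "resources_available": False},
]
print([s["name"] for s in order_subtasks(subtasks)])
# ['book caterer', 'pick up flowers', 'hire band']
```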
generating, by the task-facilitation server, the task summary data that includes a subset, and wherein the subset of the tasks includes priority … (Figure 6, item 602, Server; Paragraph 0022, The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations. The task hub 102 may provide selection of an action to take in support of the user (e.g., provide clarification, search web, show video, etc.) and selection a device(s) to support task completion. Considerations for task assistance include: the type of the task, user preferences, available devices and resources, and potential automatic breakdown of the task into steps; Paragraph 0029, The order of subtasks 222 provides the task agent in the task hub with the order that each subtask should be performed according to the input 204 that was fed into the task hub model 202. To determine the order of subtasks 222, the model first identifies any explicit dependencies between subtasks 216A and 216B. Then, the model may choose to rank tasks for execution based on a) availability of resources to complete them; b) any needed lead time (e.g., need to book a caterer several months in advance, need to pick up flowers no more than 48 hours in advance); and c) by grouping by proximity/relevance—some subtasks might be performed together at the same location or in a single online order, etc.; Paragraph 0035, User 302 sends a task 310 to task hub 304, such as “take a business trip” or “plan a wedding reception.” In aspects, the user may send this task through a semi-autonomous task application on a user device or directly to the task hub web service through the user device. 
The task hub 304 uses the task 310 to determine what subtasks and other input 312 (such as input 204 in FIG. 2) are needed for task 310. In aspects, the tasks/input 312 is received from the knowledge base/resources 306. The knowledge base/resources 306 may include any source of information available to the task hub 304, including without limitation, the task archives, the Internet, the task hub model, user resources, such as user accounts, and user preferences; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that Sim et al. provides recommendations/assistance for a task based on user behavior, user preference, and/or task deadline), and wherein generating the task summary data includes determining a priority for each task of the subset of tasks … (Paragraph 0022, In the planning aspects, the task hub 102 may provide a recommendation at every step of the interaction with the user 104 based on selection of the type of the recommendation and generation of the content of recommendation. In an example, a task hub model 156 uses a current definition of the step of the task, prior tasks, and/or future steps in the current task as context for determining the type of the recommendation that is to be provided as discussed in more detail with reference to FIG. 2. This model may be a seq2seq model that is paired with variational autoencoder for classification using a neural network such as, for example, a Bidirectional RNN. 
The task hub 102 may provide recommendations to the user depending on the content of the task, user status, user feedback and personalized needs for information. The task hub 102 may take into account a user's preferences for modality of the assistance/recommendations and an account of user's preferences for receiving certain types of assistance/recommendations. The task hub 102 may provide selection of an action to take in support of the user (e.g., provide clarification, search web, show video, etc.) and selection a device(s) to support task completion. Considerations for task assistance include: the type of the task, user preferences, available devices and resources, and potential automatic breakdown of the task into steps; Paragraph 0047, Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner notes that Sim et al. provides recommendations/assistance for a task based on user behavior, user preference, and/or task deadline);
transmitting, by the task-facilitation server, the task summary data, wherein when the task summary data is received (Figure 6, item 602, Server; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask. For example, task B shows as in progress, that it is owned by the user (e.g., being performed by the user), and no information is needed to complete the task. Scroll 512 allows users to see all of the subtasks that are part of the active task in pane 502. Information that is underlined is hyperlinked to additional detail. For example, clicking on the “A” in the subtask list 504 brings up further information about subtask A including without limitation the subtask definitions described in connection with FIG. 2. In aspects, the user may edit these definitions through this view (not shown). In aspects the task hub keeps track of user actions to keep the state of the tasks up to date. For example, the user might keep the task hub informed as to status by copying the task hub on emails sent in performance of a subtask. Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task; Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. 
In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task), the chat-response message is dynamically displayed on the interface, wherein the chat-response message includes interactive user-interface elements that represent the subset of the tasks (Paragraph 0037, In aspects, the task hub 304 may need information from the user 302 to complete the subtask A. The task hub 304 issues an information request 318 to the user 302 to complete subtask A. In examples, the task hub may issue the request by sending a message to the user. Alternatively or additionally, the task hub 304 may flag the missing information so that the user sees it when the user accesses the task hub 304. The task hub 304 may already have known about this information needed from the task hub model 313 output. Additionally or alternatively, the task hub 304 might have determined it needed this information to automate the task from the information it received in information response 316. For example, the task hub 304 might have known it needed the departure and return dates for the airfare from the user 302 and send an information request 318 to the user for this information. The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice; Paragraph 0056, The system 802 can implemented as a “smart phone” capable of running one or more applications (e.g., browser, e-mail, calendaring, contact managers, messaging clients, games, and media clients/players));
receiving, by the task-facilitation server, interaction data associated with an interactive user-interface element that corresponds to a task of the subset of the tasks (Paragraph 0037, In aspects, the task hub 304 may need information from the user 302 to complete the subtask A. The task hub 304 issues an information request 318 to the user 302 to complete subtask A. In examples, the task hub may issue the request by sending a message to the user. Alternatively or additionally, the task hub 304 may flag the missing information so that the user sees it when the user accesses the task hub 304. The task hub 304 may already have known about this information needed from the task hub model 313 output. Additionally or alternatively, the task hub 304 might have determined it needed this information to automate the task from the information it received in information response 316. For example, the task hub 304 might have known it needed the departure and return dates for the airfare from the user 302 and send an information request 318 to the user for this information. The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice; Examiner interprets “providing the departure and return dates for the airfare” as the “interaction data”);
updating, by the task-facilitation server, the machine-learning model with the interaction data (Paragraph 0042, At operation 402, the task hub receives a task from a user, a task application, a calendar application, or any other type of application capable of performing or assigning tasks. At operation 404, a list of subtasks is generated for the task as has been described herein with reference to FIGS. 2 and 3. At operation 405, the subtasks are placed in order that they should be completed based on the model as described in connection with FIG. 2. At decision 406, it is determined whether the first subtask is automatable—that it is capable of being performed automatically by the task hub without user intervention. In aspects, this determination is made using a machine learning. Initially, subtasks that can be automated may be completed by a user humans to produce a training set for a machine to learn to imitate perform user actions or preferences; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions; Examiner interprets the training set with previous user actions or preferences as the data representing previous interactions with the user);
establishing, by the task-facilitation server, a … communication channel between an automated agent and the user after receiving the interaction data; … (Paragraph 0032, Resources might comprise a list of preferred caterers, whereas to execute a subtask with a single caterer the task hub model 202 might need more specific slot information such as menu choice and credit card number; Paragraph 0038, After receiving user response 320, the task hub 304 may automatically perform the subtask (not shown). For example, the task hub 304 may automatically purchase the airline tickets from the airline website based on the subtasks/input 311, subtasks/input 312, the info response 316, and/or the info response 320. Or the task hub may automatically place an order for the type of flower selected by the user in the user response 320 from a vendor);
and transmitting, by the task-facilitation server, … and progress-status data of the task, wherein as the progress-status data … are received, the … communication channel dynamically generates status messages associated with the task in real-time (Paragraph 0040, Regardless of whether additional action is required by the user 302 or not, the third party 308 sends confirmation 332 that the subtask is complete to the task hub 304. The task hub 304 then marks subtask A as complete 334. Optionally, the task hub 304 may send confirmation 336 to the user 302 that the subtask A is complete. The task hub may store the all information of the completed subtask in the task archives for later use (not shown)).
Although Sim et al. discloses using a machine learning model to generate a priority associated with the tasks based on task parameters (e.g., prioritize tasks based on previous user actions, learned preferences, and/or task deadline), Sim et al. does not specifically disclose wherein the priority is determined using a classifier.
However, Jothilingam et al. discloses processing, by the task-facilitation server, the task parameters and the user model through a machine-learning model to generate a set of priority scores associated with the tasks, wherein the machine-learning model is configured to determine a correlation between the task data and the set of priority scores (Paragraph 0026, In some examples, a computing system may learn to improve predictive models and summarization used for extracting tasks and categorizing or prioritizing the tasks using historical performance of a user for particular types of tasks. For example, a user may tend to demonstrate similar levels of performance for multiple tasks that are of a particular task type. Based, at least in part, on such historical data, which may be quantified and/or stored by the computer system and subsequently applied to predictive models (e.g., machine learning models), for example, efficient organization of resources (e.g., time and hardware) may be achieved; Paragraph 0052, In some examples, a system performing task extraction process 304 may determine a measure of importance of a task, where a low-importance task is one for which the user would consider to be relatively low priority (e.g., low level of urgency) and a high-importance task is one for which the user would consider to be relatively high priority (e.g., high level of urgency). Importance of a task may be useful for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities. Determining importance of a task may be based, at least in part, on history of events of the user (e.g., follow-through and performance of past tasks, and so on) and/or history of events of the other user and/or personal information (e.g., age, sex, age, occupation, frequent traveler, and so on) of the user or other user. For example, the system may query such histories. 
Determining importance of a task may also be based, at least in part, on key words or terms in text 306. For example, “need” generally has implications of a required action, so that importance of a task may be relatively strong. On the other hand, in another example that involves a task of meeting a friend for tea, such an activity is generally optional, and such a task may thus be assigned a relatively low measure of importance. If such a task of meeting a friend is associated with a job (e.g., occupation) of the user, however, then such a task may be assigned a relatively high measure of importance. The system may weigh a number of such scenarios and factors to determine the importance of a task. For example, the system may determine importance of a task in a message based, at least in part, on content related to the electronic message; Paragraph 0058, The task operations module may analyze the content to determine one or more meanings of the content. Analyzing content may be performed by any of a number of techniques to determine meanings of elements of the content, such as words, phrases, sentences, metadata (e.g., size of emails, date created, and so on), images, and how and if such elements are interrelated, for example. “Meaning” of content may be how one would interpret the content in a natural language. For example, the meaning of content may include a request for a person to perform a task. In another example, the meaning of content may include a description of the task, a time by when the task should be completed, background information about the task, and so on. In another example, the meaning of content may include properties of desired action(s) or task(s) that may be extracted or inferred based, at least in part, on a learned model; Paragraph 0068, FIG. 6 is a block diagram of a machine learning model 600, according to various examples. Machine learning model 600 may be the same as or similar to machine learning model 502 shown in FIG. 5. 
Machine learning model 600 includes any of a number of functional blocks, such as random forest block 602, support vector machine block 604, and graphical models block 606. Random forest block 602 may include an ensemble learning method for classification that operates by constructing decision trees at training time. Random forest block 602 may output the class that is the mode of the classes output by individual trees, for example; As stated in Paragraph 0223 of Applicant’s specification, task summary may include a reminder or notification regarding a task. Examiner interprets the reminders or to-do list based on the priority of the task as the task summary data that includes a subset of the tasks) and is trained based on data representing previous interactions with the user or with other users (Paragraph 0024, In some examples, a computing system may construct predictive models for identifying and extracting tasks and related information using machine learning procedures that operate on training sets of annotated corpora of sentences or messages (e.g., machine learning features). In still other examples, machine learning may utilize task execution tracking for a user. 
Such tracking may involve: user behavior and interests derived from an initial questionnaire and applying the behavior and interests to the way the user executes the task; recognition of intent of the user for the task; whether the user is performing a particular task type in a particular way based on the end goal of that task; pattern identification; determining how the user is faring on a particular time of a year, month, week, day for a particular task type (for example, if user is on a holiday, the user may only want to look at those tasks which will be more refreshing and lightweight); determining the external factors that influence the user's task initiation, execution, and completion (for example, such factors may be family commitments, health issues, vacation, long business trip, and so on); determining whether the user has a behavior style before, during, and after a task execution; determining whether the user is picking up the tasks on time; determining whether the user is completing the tasks on time; determining whether the user is postponing the tasks relatively frequently; determining whether there are any particular type of tasks that the user postpones; determining whether the user completes any high priority tasks; determining whether the user postpones tasks regardless of the type of the tasks (e.g., adhoc versus priority tasks); determining whether the user consciously responds to fly-out reminders for updating status of tasks; determining rate at which the user interacts with task updates frequently to update the task on time; determining rate at which the user postpones task updates; determining rate at which the user clears task lists by immediately picking up the next task as soon as the user is done with a task; determining a self-discipline trait of the user from the user's task follow-ups (for example, determining if the user sets up a meeting request, dies the user diligently sending minutes of the meeting to close the particular task); 
determining how the user behaves while executing a particular type of task (for example, the user may take twice as long to perform coding task as compared to design tasks); and tracking the user task execution sequence, just to name some examples);
generating, by the task-facilitation server, the task summary data that includes a subset of the tasks, wherein the subset of the tasks includes priority scores that exceed a task threshold, and wherein generating the task summary data includes determining a priority for each task of the subset of tasks using a classifier, and wherein the classifier outputs the priority for the task based on the task data and the user model (Paragraph 0026, In some examples, a computing system may learn to improve predictive models and summarization used for extracting tasks and categorizing or prioritizing the tasks using historical performance of a user for particular types of tasks. For example, a user may tend to demonstrate similar levels of performance for multiple tasks that are of a particular task type. Based, at least in part, on such historical data, which may be quantified and/or stored by the computer system and subsequently applied to predictive models (e.g., machine learning models), for example, efficient organization of resources (e.g., time and hardware) may be achieved; Paragraph 0052, In some examples, a system performing task extraction process 304 may determine a measure of importance of a task, where a low-importance task is one for which the user would consider to be relatively low priority (e.g., low level of urgency) and a high-importance task is one for which the user would consider to be relatively high priority (e.g., high level of urgency). Importance of a task may be useful for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities. Determining importance of a task may be based, at least in part, on history of events of the user (e.g., follow-through and performance of past tasks, and so on) and/or history of events of the other user and/or personal information (e.g., age, sex, age, occupation, frequent traveler, and so on) of the user or other user. 
For example, the system may query such histories. Determining importance of a task may also be based, at least in part, on key words or terms in text 306. For example, “need” generally has implications of a required action, so that importance of a task may be relatively strong. On the other hand, in another example that involves a task of meeting a friend for tea, such an activity is generally optional, and such a task may thus be assigned a relatively low measure of importance. If such a task of meeting a friend is associated with a job (e.g., occupation) of the user, however, then such a task may be assigned a relatively high measure of importance. The system may weigh a number of such scenarios and factors to determine the importance of a task. For example, the system may determine importance of a task in a message based, at least in part, on content related to the electronic message; Paragraph 0058, The task operations module may analyze the content to determine one or more meanings of the content. Analyzing content may be performed by any of a number of techniques to determine meanings of elements of the content, such as words, phrases, sentences, metadata (e.g., size of emails, date created, and so on), images, and how and if such elements are interrelated, for example. “Meaning” of content may be how one would interpret the content in a natural language. For example, the meaning of content may include a request for a person to perform a task. In another example, the meaning of content may include a description of the task, a time by when the task should be completed, background information about the task, and so on. In another example, the meaning of content may include properties of desired action(s) or task(s) that may be extracted or inferred based, at least in part, on a learned model; Paragraph 0068, FIG. 6 is a block diagram of a machine learning model 600, according to various examples. 
Machine learning model 600 may be the same as or similar to machine learning model 502 shown in FIG. 5. Machine learning model 600 includes any of a number of functional blocks, such as random forest block 602, support vector machine block 604, and graphical models block 606. Random forest block 602 may include an ensemble learning method for classification that operates by constructing decision trees at training time. Random forest block 602 may output the class that is the mode of the classes output by individual trees, for example; As stated in Paragraph 0223 of Applicant’s specification, task summary may include a reminder or notification regarding a task. Examiner interprets the reminders or to-do list based on the priority of the task (e.g., high priority) as the task summary data that includes a subset of the tasks. Also, Examiner interprets high priority as the threshold).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using machine learning to process task parameters to generate a subset of the tasks (e.g., based on a learned priority of the task) of the invention of Sim et al. and Jothilingam et al. to further specify wherein the priority is determined using a classifier, as in the invention of Jothilingam et al., because doing so would allow the method to use a priority of the task for subsequent operations such as prioritizing tasks, reminders, revisions of to-do lists, appointments, meeting requests, and other time management activities (see Jothilingam et al., Paragraph 0052). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
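For illustration of the classification technique mapped above, the mode-of-trees behavior described in Jothilingam's Paragraph 0068 (a random-forest classifier outputting the class that is the mode of the classes output by individual trees) can be sketched as follows. All function names, rules, and values below are hypothetical and are not drawn from either reference's actual implementation:

```python
from collections import Counter

# Hypothetical sketch: each "tree" is a simple rule mapping task parameters
# (and user-model history) to a priority class; the ensemble outputs the mode
# of the individual votes, mirroring the random-forest behavior cited above.
def tree_deadline(task):
    return "high" if task["days_until_due"] <= 2 else "low"

def tree_keyword(task):
    # "need" generally implies a required action (Jothilingam, Paragraph 0052)
    return "high" if "need" in task["description"].lower() else "low"

def tree_history(task):
    # user model: fraction of similar past tasks the user completed on time
    return "high" if task["past_follow_through"] < 0.5 else "low"

def classify_priority(task, trees=(tree_deadline, tree_keyword, tree_history)):
    votes = Counter(t(task) for t in trees)
    return votes.most_common(1)[0][0]  # mode of the individual tree outputs

task = {"description": "Need to book venue", "days_until_due": 1,
        "past_follow_through": 0.9}
classify_priority(task)  # "high" (two of three trees vote high)
```

This is a sketch of the general technique only; the references describe the classifier at the level of its inputs (task parameters, user history) and outputs (a priority class), not at the level of specific rules.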
The combination of Sim et al. and Jothilingam et al. discloses using a machine learning model to generate a priority score associated with the tasks based on task parameters (e.g., prioritize tasks as high or low priority based on previous user actions, learned preferences, and/or task deadline). Although Examiner interprets tasks with high priority as the threshold, the combination of Sim et al. and Jothilingam et al. does not specifically disclose wherein the subset of the tasks includes priority scores that exceed a task threshold.
However, Fang et al. discloses wherein the subset of the tasks includes priority scores that exceed a task threshold (Figure 6, item 602, Server; Paragraph 0121, Additionally, the fashion recommendation system can provide one or more of the ranked items to a user. For example, the fashion recommendation system selects a threshold number of top items to present to a user via a client device associated with the user. In another example, the fashion recommendation system provides ranked items to a user that are above a threshold preference prediction score. As described above, the fashion recommendation system can provide one or more ranked items (e.g., personalized items) to the user upon the user's request or in response to a user's interaction with related items).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using machine learning to process task parameters to generate a subset of the tasks (e.g., based on a learned priority of the task) of the invention of Sim et al. and Jothilingam et al. to further specify a threshold used to generate the subset of tasks, as in the invention of Fang et al., because doing so would allow the method to provide ranked items to a user that are above a threshold preference prediction score (see Fang et al., Paragraph 0121). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
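For illustration of the threshold step mapped to Fang's Paragraph 0121 (providing items whose scores are above a threshold preference prediction score), a minimal sketch follows. The field names and threshold value are hypothetical:

```python
# Hypothetical sketch: tasks whose priority scores exceed a task threshold
# form the summarized subset, analogous to Fang's thresholded ranking.
def summarize(tasks, threshold=0.7):
    """Return the subset of tasks whose priority score exceeds the threshold."""
    return [t for t in tasks if t["priority_score"] > threshold]

tasks = [
    {"name": "book venue", "priority_score": 0.95},
    {"name": "order flowers", "priority_score": 0.60},
    {"name": "book caterer", "priority_score": 0.80},
]
[t["name"] for t in summarize(tasks)]  # ["book venue", "book caterer"]
```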
Although Sim et al. discloses automatically booking/purchasing a service and/or inputting a credit card number (see Paragraph 0032 & 0038), the combination of Sim et al., Jothilingam et al., and Fang et al. does not specifically disclose wherein the sensitive information provided by the user is obscured (e.g., obscuring the credit card number).
However, Raleigh et al. discloses establishing, by the task-facilitation server, a secure communication channel between an … agent and the user after receiving the interaction data; obscuring, by the task-facilitation server, a portion of the user model that corresponds to user-sensitive data (Paragraph 0701, FIG. 141 illustrates a representative screen 10730 that details a particular payment means (e.g., credit card information). The user of the mobile wireless communication device 100 can input, review and update information related to the particular payment means through the UI 101 of the mobile wireless communication device 100. Some sensitive information, e.g., portions of or all digits of a credit card number, security codes, and expiration dates, can be obscured when presented through the UI 101 to provide added security);
and transmitting, by the task-facilitation server, the obscured user model and progress-status data of the task, wherein as the progress-status data and the obscured user model are received, the secure communication channel dynamically generates status messages associated with the task in real-time (Paragraph 0491, User approval can be acquired, for example, by a simple click operation or require a secure password, key and/or biometric response from the user. Upon user approval, the billing agent 1695 generates a billing approval and sends it to the transaction server 134, the transaction server 134 completes the transaction and then sends a bill to the billing agent 1695. The billing agent 1695 optionally sends a confirmation to the transaction server 134 and sends the bill to the billing server 4630).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method comprising using machine learning to process task parameters to generate a subset of the tasks based on priority scores (e.g., wherein one of the tasks may include a transaction for booking/purchasing a ticket) of the invention of Sim et al., Jothilingam et al., and Fang et al. to further specify wherein the transaction is performed in a secure communication channel between an agent and the user, as in the invention of Raleigh et al., because doing so would allow the method to obscure portions of or all digits of a credit card number when presented through the user interface, which provides an extra layer of security (see Raleigh et al., Paragraph 0701). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
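For illustration of the obscuring step described in Raleigh's Paragraph 0701 (portions of or all digits of a credit card number can be obscured when presented), a minimal sketch follows. The field names are invented and the masking rule (all but the last four digits) is one hypothetical choice among those the reference permits:

```python
# Hypothetical sketch: the portion of the user model corresponding to
# user-sensitive data (here, a card number) is masked before transmission.
def obscure_sensitive(user_model):
    masked = dict(user_model)
    card = masked.get("card_number", "")
    if card:
        masked["card_number"] = "*" * (len(card) - 4) + card[-4:]
    return masked

obscure_sensitive({"name": "A. User", "card_number": "4111111111111111"})
# {"name": "A. User", "card_number": "************1111"}
```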
Regarding claims 2, 12, and 17 (Currently Amended), which are dependent on claims 1, 11, and 16, the combination of Sim et al., Jothilingam et al., Fang et al., and Raleigh et al. discloses all the limitations in claims 1, 11, and 16. Sim et al. further discloses wherein the task summary includes an additional task requiring a task parameter value from the user (Figure 5, item 510, info needed; Paragraph 0045, Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task. The user may select the “Yes” hyperlink to see what information is needed and to provide the necessary information so that the hub may complete the subtask), the computer-implemented method further comprising (Paragraph 0035, FIG. 3 is a communication flow illustrating a method of semi-autonomously managing a subtask in accordance with aspects of the present disclosure):
receiving an indication from the computing device including the task parameter value (Figure 5, item 510, info needed; Paragraph 0027, If the task is to schedule a wedding reception, the subtasks might be choose a date, book a venue, book a caterer, order flowers, order cake, and book a photographer. The state of each subtask 214 indicates how much of the subtask has been completed. For example, states may include not started, need information, in progress, waiting on response, or complete. While these are examples of various subtask states, they should not be considered limiting. In the wedding reception example, a date might not be able to be selected until it is confirmed that a preferred venue is available. In examples, this dependency may be inputted directly from the user by specifying the preferred venue. The empty slots 218 comprise all of the information from subtask definitions that are missing—that is, the information that was not fed into the input 204 from the information 206 and 208. In the second example, the general location (city, state) might be needed for the wedding reception task; Paragraph 0045, Subtask C is not started, is owned by the hub meaning that the hub will perform it automatically, but that the hub needs information to complete this task. The user may select the “Yes” hyperlink to see what information is needed and to provide the necessary information so that the hub may complete the subtask; As stated in Paragraph 0211 of Applicant’s specification, a task parameter value may be specific information about member 118, such as information regarding preferences. Thus, Examiner interprets “input directly from the user specifying the preferred venue” as the “task parameter value”);
generating updated task data by updating the task data according to the task parameter value (Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions);
generating updated task summary data using the user model and the updated task data (Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions);
and transmitting the updated task summary data, wherein, when received by the computing device, the computing device updates the interface to present an updated task summary based on the updated task summary data (Paragraph 0025, The task hub 102 also provides managed views of the task to users with different assigned roles in the task (task owner, step executor, observer, etc.) through UI 124, which is provided for display at user device 106; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask; Paragraph 0047, When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions).
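The claim 2 flow mapped above (a task parameter value received from the user updates the task data, after which the summary is regenerated per Sim's "re-run the inputs through the task hub model") can be sketched as follows. All names and the state-transition rule are illustrative assumptions, not disclosures of either reference:

```python
# Hypothetical sketch of the update-and-regenerate flow mapped to claim 2.
def update_task(task_data, task_id, param, value):
    """Apply a user-supplied task parameter value to the task data."""
    updated = {**task_data}
    updated[task_id] = {**updated[task_id], param: value, "state": "in progress"}
    return updated

def summarize_states(task_data):
    """Regenerate a simple task summary from the updated task data."""
    return [(tid, t["state"]) for tid, t in task_data.items()]

tasks = {"order_flowers": {"state": "need information"}}
tasks = update_task(tasks, "order_flowers", "flower_type", "peonies")
summarize_states(tasks)  # [("order_flowers", "in progress")]
```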
Regarding claims 3, 21, and 25 (Currently Amended), which are dependent on claims 1, 11, and 16, the combination of Sim et al., Jothilingam et al., Fang et al., and Raleigh et al. discloses all the limitations in claims 1, 11, and 16. Sim et al. further discloses wherein the task summary includes an additional task having a progress status (Figure 5, item 506, Status; Paragraph 0002, The AI-based assistance may indicate which steps can be completed by automated processes and suggest resources to assist in the completion of other steps. The hub displays the current status of the task, and lives until the completion of the task, or abandonment by the user), the computer-implemented method further comprising (Paragraph 0035, FIG. 3 is a communication flow illustrating a method of semi-autonomously managing a subtask in accordance with aspects of the present disclosure):
generating updated task data by updating the task data according to a change in the progress status (Paragraph 0027, The state of each subtask 214 indicates how much of the subtask has been completed. For example, states may include not started, need information, in progress, waiting on response, or complete; Paragraph 0045, The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask. In aspects the task hub keeps track of user actions to keep the state of the tasks up to date. For example, the user might keep the task hub informed as to status by copying the task hub on emails sent in performance of a subtask. Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions);
generating updated task summary data using the user model and the updated task data (Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions);
and transmitting the updated task summary data, wherein, when received by the computing device, the computing device updates the interface to present an updated task summary based on the updated task summary data (Paragraph 0025, The task hub 102 also provides managed views of the task to users with different assigned roles in the task (task owner, step executor, observer, etc.) through UI 124, which is provided for display at user device 106; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask; Paragraph 0047, When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions).
Regarding claims 4, 22, and 26 (Currently Amended), which are dependent on claims 1, 11, and 16, the combination of Sim et al., Jothilingam et al., Fang et al., and Raleigh et al. discloses all the limitations in claims 1, 11, and 16. Sim et al. further discloses wherein the task summary includes an additional task associated with a reminder (Paragraph 0020, The task agent responsible for managing or orchestrating subtasks of an active task may in turn call subtask agents responsible for completing aspects of subtasks. The subtask agents may be short-lived (e.g., to dispatch a notification), or remain instantiated until some action is taken or requirement is satisfied), the computer-implemented method further comprising (Paragraph 0035, FIG. 3 is a communication flow illustrating a method of semi-autonomously managing a subtask in accordance with aspects of the present disclosure):
receiving a response to the reminder from the computing device (Paragraph 0037, In aspects, the task hub 304 may need information from the user 302 to complete the subtask A. The task hub 304 issues an information request 318 to the user 302 to complete subtask A. In examples, the task hub may issue the request by sending a message to the user. Alternatively or additionally, the task hub 304 may flag the missing information so that the user sees it when the user accesses the task hub 304. The task hub 304 may already have known about this information needed from the task hub model 313 output. Additionally or alternatively, the task hub 304 might have determined it needed this information to automate the task from the information it received in information response 316. For example, the task hub 304 might have known it needed the departure and return dates for the airfare from the user 302 and send an information request 318 to the user for this information. The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice);
generating updated task data by updating the task data according to the response to the reminder (Paragraph 0047, The intelligent task hub provides many technical benefits. The system promotes interaction of disparate systems thereby automating task completion across disparate systems and data sources. Further, it increases user efficiency by automating some tasks that users previously needed to perform manually. It also increases user efficiency by keeping track of and managing the various states of subtasks in complex tasks, and presenting or alerting to users only the aspects of a task or subtask that require the user's attention. The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions);
generating updated task summary data using the user model and the updated task data (Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions);
and transmitting the updated task summary data, wherein, when received by the computing device, the computing device updates the interface to present an updated task summary based on the updated task summary data (Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions).
Regarding claims 5, 23, and 27 (Original), which are dependent on claims 1, 11, and 16, the combination of Sim et al., Jothilingam et al., Fang et al., and Raleigh et al. discloses all the limitations in claims 1, 11, and 16. Sim et al. further discloses transmitting updated task summary data, wherein, when received by the computing device, the computing device updates the interface to present an updated task summary based on the updated task summary data (Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure; Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions).
Regarding claims 6, 13, and 18 (Original), which are dependent on claims 1, 11, and 16, the combination of Sim et al., Jothilingam et al., Fang et al., and Raleigh et al. discloses all the limitations in claims 1, 11, and 16. Sim et al. further discloses: identifying a change to the user model resulting in an updated user model (Paragraph 0026, Information 206 might come directly from the user, such as user 104 in FIG. 1, or from the user's information, such as his or her accounts (e.g., contacts, calendar, e-mail); Paragraph 0037, The user provides this information 320 to the task hub 304. In the wedding reception example, the task hub 304 may receive as information response 316 a list of popular flowers for a wedding reception in April (e.g., tulips, peonies, freesia) and at that point determine it needs to ask the user which flowers he or she would like. In this case, the task hub 304 will send an information request 318 to the user 302 asking the user to choose the type of flower and the user will issue a response 320 to the task hub 304 with his or her choice; Paragraph 0042, Initially, subtasks that can be automated may be completed by humans to produce a training set for a machine to learn to perform user actions or preferences; Examiner interprets “identifying a change to the user model” as “identifying a change in preference.” In this case, the user specifies which flowers he or she prefers);
generating updated task summary data using the updated user model and the task data (Paragraph 0046, The user may re-run the inputs through the task hub model by selecting the refresh UI control 518. In aspects, the UI hub re-runs the inputs through the Task hub model any time any information is added or changed for any task; Paragraph 0047, The task hub user interface allows multiple users to keep track of the status of subtasks in one place, which provides an improved user experience particularly for complicated tasks. Furthermore, the task hub archives the details of completed subtasks, which can be a useful reference for completing future tasks (e.g. by recalling which contractor completed some related subtask), or for other historical purposes, such as accounting or auditing. When deployed across many users, the intelligent task hub can learn from user behavior to better manage, prioritize, and execute common subtasks, e.g., by updating its model periodically based on user actions);
and transmitting the updated task summary data, wherein, when received by the computing device, the computing device updates the interface to present an updated task summary based on the updated task summary data (Paragraph 0025, The task hub 102 also provides managed views of the task to users with different assigned roles in the task (task owner, step executor, observer, etc.) through UI 124, which is provided for display at user device 106; Paragraph 0045, FIG. 5 illustrates an example user interface 500 for the task hub in accordance with aspects of the present disclosure. Window or pane 502 relates to the first Task 1. Other tasks may be viewed by selecting UI control for the Active Task List 520. Completed or abandoned tasks may be viewed by selecting UI control 522 for Task Archives. The current task pane 502 contains a list of subtasks 504 that are part of the current task, the state or status 506 of each subtask, the due date or completion date of each subtask 507, the owner 508 of each subtask, and whether any information is needed 510 for each subtask).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Milligan et al. (US 2017/0180294 A1) – discloses using machine learning to learn user preferences over time (Paragraph 0087, For example, the model builder 224 learns a user's favorite chocolate brand from the user's purchase history. Based on what was learned by the model, a suggestion can be made for a user. For example, when a user talks about flying to Florida for a vacation in a conversation, a suggestion of flight itineraries (e.g., a time and airline) can be made to the user since the model builder 224 learns that the user likes this airline and often takes the flights around that time from previous actions of users. In some instances, the model builder 224 receives a user reaction to a suggestion provided to the user in a conversation and in turn uses the reaction (and other data from the conversation) as a training signal to train the model (e.g., uses the reaction to generate a training example that is used to refine the model through further training). Continuing with the above example, if the user drops the flight itinerary suggestion or explicitly states a dislike of the airline (e.g., by writing “I do not like this airline”) in a conversation, the model builder 224 may take this negative reaction as a signal to train the model. As a result, a different airline may be suggested to the user in the future).
Wang (US 2022/0398547 A1) – discloses generating a priority score using an artificial intelligence model (Paragraph 0047, FIG. 2C illustrates an exemplary interface 200c associated with the todolist view 220, in accordance with an embodiment of the present teaching. In this illustration for the todolist view 220, the corresponding dashboard display area 240c is used to display tasks in a to-do list. In some embodiments, each task in this to-do list may be associated with a priority score, which may be either specified by a user or predicted by an AI-based model, when the user does not assign a priority score manually. In some embodiments, tasks on a to-do list may be ranked based on their priority scores and they may be displayed according to that rank order (as illustrated in FIG. 2C). As discussed herein, tasks under different views are consistently maintained; Paragraph 0061, The model training may be carried out using any type of supervised machine learning approach, including but not limited to, logistic regression, naive bayes, support vector machine, decision tree, adaboost, gradient boosting machine, random forest, neural networks, and any type of deep learning architectures, etc., either existing or developed in the future; See provisional application # 62/208,706, filed on 06/09/21, Pages 4-6).
Yusuf et al. (US 2023/0036167 A1) – discloses that a queue of tasks may be displayed with indicators showing priority of the tasks based on the deadlines. For instance, the tasks may be color-coded to indicate imminent deadlines, overdue status, and the like. In the illustrated example, tasks that are overdue 2103 may be highlighted in ‘red’ color and displayed with a warning icon, tasks that are due within 1 hour may be highlighted in ‘red’ 2105, tasks that are due within the next 24 hours may be highlighted in ‘orange’ color 2107, and tasks that are not due within 24 hours may not be highlighted 2109 (e.g., in ‘white’ color). The hours until (or hours passed since) the deadline are also displayed with the items in the queue. This beneficially allows for automatically optimizing the order of the tasks for a user (see at least Paragraph 0128).
최형탁 (KR 102047500 B1) – discloses the device 1000 may determine a main task based on the meaning of the text, and determine a subtask related to the determined main task. For example, the device 1000 determines a main task of “preparing a 5/5 Jeju Island trip” based on the text, and selects “5/5 character flight ticket booking”, “find the destination”, “rental car rental”, and “purchase of supplies” and the like to determine a sub task for preparing for a trip. In this case, a list of subtasks related to the main task may be set in advance, and the device 1000 may determine a subtask by selecting some subtasks from a list of preset subtasks. The list of subtasks may be set using a user's behavior pattern. For example, information such as a transportation preferred by the user, a rental car company preferred by the user, and a travel theme preferred by the user may be used to determine a sub task (see at least Page 7).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ whose telephone number is (571)272-4668. The examiner can normally be reached Monday-Thursday 7:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia H Munson can be reached at (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.P./ Examiner, Art Unit 3624
/HAMZEH OBAID/ Primary Examiner, Art Unit 3624