Prosecution Insights
Last updated: April 19, 2026
Application No. 17/823,662

AUTOMATED COGNITIVE LOAD-BASED TASK THROTTLING

Final Rejection: §101, §103
Filed: Aug 31, 2022
Examiner: HO, THOMAS Y
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Yohana LLC
OA Round: 2 (Final)
Grant Probability: 15% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 10m
Grant Probability With Interview: 47%

Examiner Intelligence

Career Allow Rate: 15% (27 granted / 175 resolved; -36.6% vs TC avg)
Interview Lift: +31.7% (allowance rate with vs. without an interview, among resolved cases)
Avg Prosecution: 3y 10m (typical timeline; 46 applications currently pending)
Total Applications: 221 (career history, across all art units)
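The dashboard figures above hang together arithmetically. As a sanity check, a short sketch reproduces the headline numbers from the raw counts on this page (the 47% with-interview rate is taken from the header):

```python
# Sanity-check the examiner dashboard figures from the raw counts above:
# 27 allowances out of 175 resolved cases, and a reported 47% allowance
# rate when an examiner interview is held.
granted, resolved = 27, 175
with_interview_rate = 0.47  # from the "47% With Interview" figure above

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~15.4%, shown as 15%

# Approximating "interview lift" as the with-interview rate minus the
# career rate; the dashboard's exact +31.7% presumably compares with- vs.
# without-interview cohorts, so this lands close but not identical.
lift = with_interview_rate - career_allow_rate
print(f"Interview lift: {lift:+.1%}")  # ~+31.6%
```

The small residual versus the reported +31.7% suggests the tool computes lift against the no-interview cohort rather than the overall career rate.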

Statute-Specific Performance

§101: 35.3% (-4.7% vs TC avg)
§103: 41.8% (+1.8% vs TC avg)
§102: 10.5% (-29.5% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Deltas are measured against an estimated Tech Center average • Based on career data from 175 resolved cases
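A detail hidden in the deltas: subtracting each delta from the examiner's rate recovers the Tech Center baseline, and it comes out to 40.0% for every statute, so the baseline appears to be a single TC-wide estimate rather than a per-statute figure. A quick check:

```python
# Recover the Tech Center baseline from each row (examiner rate, delta vs.
# TC avg). All four rows imply the same ~40% baseline.
rows = {"§101": (35.3, -4.7), "§103": (41.8, 1.8),
        "§102": (10.5, -29.5), "§112": (11.7, -28.3)}
tc_avg = {statute: round(rate - delta, 1)
          for statute, (rate, delta) in rows.items()}
print(tc_avg)  # every value is 40.0
```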

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Status of the Claims

The pending claims in the present application are claims 1, 4-8, 11-15, and 18-27 of the Amendment filed 03 December 2025.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4-8, 11-15, and 18-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The paragraphs below provide rationales for the rejection. The rationales are based on the multi-step subject matter eligibility test outlined in MPEP 2106.

Step 1 of the eligibility analysis involves determining whether a claim falls within one of the four enumerated categories of patentable subject matter recited in 35 USC 101. (See MPEP 2106.03(I).) That is, Step 1 asks whether a claim is to a process, machine, manufacture, or composition of matter. (See MPEP 2106.03(II).)
Referring to the pending claims, the “method” of claims 1, 4-7, 22 and 23 constitutes a process under 35 USC 101, the “system” of claims 8, 11-14, 24 and 25 constitutes a machine under the statute, and the “non-transitory, computer-readable storage medium” of claims 15, 18-21, 26, and 27 constitutes a manufacture under the statute. Accordingly, claims 1, 4-8, 11-15, and 18-27 meet the criteria of Step 1 of the eligibility analysis. The claims, however, fail to meet the criteria of subsequent steps of the eligibility analysis, as explained in the paragraphs below. The next step of the eligibility analysis, Step 2A, involves determining whether a claim is directed to a judicial exception. (See MPEP 2106.04(II).) This step asks whether a claim is directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea. (See id.) Step 2A is a two-prong inquiry. (See MPEP 2106.04(II)(A).) Prong One and Prong Two are addressed below. In the context of Step 2A of the eligibility analysis, Prong One asks whether a claim recites an abstract idea, law of nature, or natural phenomenon. (See MPEP 2106.04(II)(A)(1).) Using independent claim 1 as an example, the claim recites the following abstract idea limitations: “A ... method, comprising: ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes “... processing a set of messages ... to detect a cognitive load change associated with a member, wherein the set of messages is exchanged between the member and a representative during an ongoing communications session; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes “... processing a set of active tasks associated with the member ... to determine a set of cognitive load contributions for the set of active tasks, wherein the set of active tasks is processed as a result of ... 
the cognitive load change, ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes “... determining that a cognitive load associated with the member exceeds a threshold, wherein the cognitive load is calculated by aggregating the set of cognitive load contributions; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes “... dynamically migrating one or more active tasks to a suspended state according to the set of cognitive load contributions and a profile corresponding to the member; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes The above-listed limitations of independent claim 1, when applying their broadest reasonable interpretations in light of their context in the claim as a whole, fall under enumerated groupings of abstract ideas outlined in MPEP 2106.04(a). For example, limitations of the claim can be characterized as: managing personal behavior or relationships or interactions between people, by managing messaging between members and representatives, and tasks performed by representatives on behalf of members, which fall under the certain methods of organizing human activity grouping of abstract ideas (see MPEP 2106.04(a)). Limitations of the claim also can be characterized as: concepts performed in the human mind, including evaluation, judgment, and/or opinion (e.g., the recited “processing,” “determining,” and “migrating” limitations, which fall under the mental processes grouping of abstract ideas (see MPEP 2106.04(a)). Accordingly, for at least these reasons, claim 1 fails to meet the criteria of Step 2A, Prong One of the eligibility analysis. In the context of Step 2A of the eligibility analysis, Prong Two asks if the claim recites additional elements that integrate the judicial exception into a practical application. (See MPEP 2106.04(II)(A)(2).) 
Continuing to use independent claim 1 as an example, the claim recites the following additional element limitations: The claimed “method” is “computer-implemented” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) The claimed “processing” is “through a message processing machine learning algorithm” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) The claimed “processing” is “through an active task processing machine learning algorithm” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) “... a signal from the message processing machine learning algorithm indicating ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) “... wherein the active task processing machine learning algorithm is trained using a dataset corresponding to sample cognitive load contributions for different tasks associated with different members ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) “... wherein the first data model comprises a classification artificial intelligence (AI) model, ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) “... dynamically updating an active task interface associated with the member to remove a graphical representation of the one or more active tasks migrated to the suspended state; and ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) “... 
updating the active task processing machine learning algorithm, wherein the active task processing machine learning algorithm is updated using the profile, the cognitive load, and an updated cognitive load resulting from migration of the one or more active tasks to the suspended state ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h) The above-listed additional element limitations of independent claim 1, when applying their broadest reasonable interpretations in light of their context in the claim as a whole, are analogous to: accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, mere automation of manual processes, instructions to display two sets of information on a computer display in a non-interfering manner, without any limitations specifying how to achieve the desired result, and arranging transactional information on a graphical user interface in a manner that assists traders in processing information more quickly, which courts have indicated may not be sufficient to show an improvement in computer-functionality (see MPEP 2106.05(a)(I)); a commonplace business method being applied on a general purpose computer, gathering and analyzing information using conventional techniques and displaying the result, and selecting a particular generic function for computer hardware to perform from within a range of fundamental or commonplace functions performed by the hardware, which courts have indicated may not be sufficient to show an improvement to technology (see MPEP 2106.05(a)(II)); a general purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions, and merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions, which do not qualify as a particular machine or use thereof (see MPEP 2106.05(b)(I)); a machine that is merely an object on which the 
method operates, which does not integrate the exception into a practical application (see MPEP 2106.05(b)(II)); use of a machine that contributes only nominally or insignificantly to the execution of the claimed method, which does not integrate a judicial exception (see MPEP 2106.05(b)(III)); transformation of an intangible concept such as a contractual obligation or mental judgment, which is not likely to provide significantly more (see MPEP 2106.05(c)); recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, remotely accessing user-specific information through a mobile interface and pointers to retrieve the information without any description of how the mobile interface and pointers accomplish the result of retrieving previously inaccessible information, which courts have found to be mere instructions to apply an exception, because they recite no more than an idea of a solution or outcome (see MPEP 2106.05(f)); use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea, a commonplace business method or mathematical algorithm being applied on a general purpose computer, generating a second menu from a first menu and sending the second menu to another location as performed by generic computer components, and requiring the use of software to tailor information and provide it to the user on a generic computer, which courts have found to be mere instructions to apply an exception, because they do no more than merely invoke computers or machinery as a tool to perform an existing process (see MPEP 2106.05(f)); mere data gathering in the form of obtaining information about transactions using the Internet to verify transactions and consulting 
and updating an activity log, and selecting a particular data source or type of data to be manipulated in the form of selecting information, based on types of information and availability of information in an environment, for collection, analysis, and display, which courts have found to be insignificant extra-solution activity (see MPEP 2106.05(g)); and specifying that the abstract idea of monitoring audit log data relates to transactions or activities that are executed in a computer environment, because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer, which courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)). For at least these reasons, claim 1 fails to meet the criteria of Step 2A, Prong Two of the eligibility analysis. The next step of the eligibility analysis, Step 2B, asks whether a claim recites additional elements that amount to significantly more than the judicial exception. (See MPEP 2106.05(II).) The step involves identifying whether there are any additional elements in the claim beyond the judicial exceptions, and evaluating those additional elements individually and in combination to determine whether they contribute an inventive concept. (See id.) The ineligibility rationales applied at Step 2A, Prong Two, also apply to Step 2B. (See id.) For all of the reasons covered in the analysis performed at Step 2A, Prong Two, independent claim 1 fails to meet the criteria of Step 2B. Further, claim 1 also fails to meet the criteria of Step 2B because at least some of the additional elements are analogous to: receiving or transmitting data over a network, e.g., using the Internet to gather data, performing repetitive calculations, and electronic recordkeeping, which courts have recognized as well-understood, routine, conventional activity, and as insignificant extra-solution activity (see MPEP 2106.05(d)(II)). 
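Stepping back, the eligibility framework the examiner has applied (Step 1, Step 2A Prongs One and Two, Step 2B, per MPEP 2106) can be summarized as a decision flow. The sketch below is purely illustrative; the boolean inputs stand in for legal determinations that the MPEP leaves to examiner analysis, not anything mechanically decidable:

```python
# Illustrative sketch of the MPEP 2106 subject-matter-eligibility flow.
# Each boolean is a hypothetical stand-in for a legal finding.
def eligible(statutory_category: bool,          # Step 1
             recites_judicial_exception: bool,  # Step 2A, Prong One
             practical_application: bool,       # Step 2A, Prong Two
             significantly_more: bool) -> bool: # Step 2B
    if not statutory_category:
        return False          # fails Step 1: not a process/machine/etc.
    if not recites_judicial_exception:
        return True           # eligible at Step 2A, Prong One
    if practical_application:
        return True           # exception integrated into a practical application
    return significantly_more # Step 2B: inventive concept decides

# The examiner's findings for claim 1: a process (Step 1 met), reciting
# abstract ideas, with no practical application and nothing "significantly more".
print(eligible(True, True, False, False))  # False -> rejected under §101
```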
As a result, claim 1 is rejected under 35 USC 101 as ineligible for patenting. Regarding claims 4-7, 22, and 23, the claims depend from independent claim 1, and expand upon limitations introduced by claim 1. The dependent claims are rejected at least for the same reasons as claim 1. For example, the dependent claims recite abstract idea elements similar to the abstract idea elements of claim 1, that fall under the same abstract idea groupings as the abstract idea elements of claim 1 (e.g., the “further comprising: detecting completion of one or more remaining active tasks of the set of active tasks; identifying a new cognitive load resulting from the completion of the one or more remaining active tasks; and migrating an active task from the suspended state to an active state, wherein the active task is migrated to the active state based on the new cognitive load” of claim 4, the “further comprising: dynamically generating one or more prompts for approval to migrate the one or more active tasks to the suspended state; and ... wherein when the approval is obtained, the one or more active tasks are migrated to the suspended state” of claim 5, the “further comprising: ... present a notification in response to determining that the cognitive load associated with the member exceeds the threshold, wherein the notification includes an indication of the one or more active tasks selected for migration” of claim 6, the “wherein the one or more active tasks are selected based on a determination that a level of urgency for completion of the one or more active tasks is less than other active tasks of the set of active tasks” of claim 7, and the “wherein: the one or more active tasks are associated with corresponding deadlines for completion of the one or more active tasks; and the ... method further comprises: ... processing a set of remaining tasks ... 
to determine cognitive load contributions for the set of remaining tasks; and migrating an active task from the one or more active tasks to an active state based on the corresponding deadlines and as a result of a combination of the cognitive load contributions and a cognitive load contribution associated with the active task not exceeding the threshold” of claim 23). The dependent claims recite further additional elements that are similar to the additional elements of claim 1, that fail to warrant eligibility for the same reasons as the additional elements of claim 1 (e.g., “computer-implemented” of claims 4-7, 22, and 23; the “updating the active task interface to present the one or more prompts” of claim 5, the “updating a representative console” of claim 6, the “updating a pending task interface to add the graphical representation of the one or more active tasks migrated to the suspended state, wherein the pending task interface and the active task interface are distinct” of claim 22, and the “automatically ... through the active task processing machine learning algorithm” of claim 23). Accordingly, claims 4-7, 22, and 23 also are rejected as ineligible under 35 USC 101. Regarding claims 8, 11-14, 24 and 25, while the claims are of different scope relative to claims 1, 4-7, 22, and 23, the claims recite limitations similar to the limitations of claims 1, 4-7, 22, and 23. As such, the rejection rationales applied to reject claims 1, 4-7, 22, and 23 also apply for purposes of rejecting claims 8, 11-14, 24 and 25. 
Limitations recited by claims 8, 11-14, 24 and 25 that do not have a counterpart in claims 1, 4-7, 22, and 23, such as the recited “system, comprising: one or more processors; and memory storing thereon instructions that, as a result of being executed by the one or more processors, cause the system to” limitations of independent claim 8, fail to warrant a finding of eligibility, because such limitations amount to additional elements that fail to meet the criteria of Step 2A, Prong Two and Step 2B, for the same reasons as the additional elements of claims 1, 4-7, 22, and 23. Claims 8, 11-14, 24, and 25 are, therefore, also rejected as ineligible under 35 USC 101.

Regarding pending claims 15, 18-21, 26, and 27, while the claims are of different scope relative to claims 1, 4-7, 8, 11-14, and 22-25, the claims recite limitations similar to the limitations of claims 1, 4-7, 8, 11-14, and 22-25. As such, the rejection rationales applied to reject claims 1, 4-7, 8, 11-14, and 22-25 also apply for purposes of rejecting claims 15, 18-21, 26, and 27. Claims 15, 18-21, 26, and 27 are, therefore, also rejected as ineligible under 35 USC 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-8, 11-15, 18-21, 23, 25, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. No. 11,538,351 B1 to Bruni et al. (hereinafter referred to as “Bruni”), in view of WIPO Int’l Pub. No. 2010/037163 A1 to Chen et al. (hereinafter referred to as “Chen”), and further in view of EP Pat. App. Pub. No. 2 503 780 A1 to Moore (hereinafter referred to as “Moore”).

Regarding independent claim 1, Bruni discloses the following limitations:

“A computer-implemented method, comprising: ...” - Bruni discloses, “In one example embodiment, a processor based automated system for use in a human-machine team is disclosed, the processor based automated system comprising an input sensor and a cognitive assessment input system (CAIS) wherein the processor based automated system is specifically configured for use in a human-machine team and the automated system is configured to receive and execute automation directives from the CAIS as an input source to the processor based automated system” (col. 4, ll. 1-9). Operation of the system, in Bruni, reads on the recited limitation.

The combination of Bruni and Chen (hereinafter referred to as “Bruni/Chen”) teaches limitations below of independent claim 1 that do not appear to be disclosed in their entirety by Bruni: “...
processing a set of messages through a message processing machine learning algorithm to detect a cognitive load change associated with a member, wherein the set of messages is exchanged between the member and a representative during an ongoing communications session; ...” - Bruni discloses, “The terms "sensor data", as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art (and are not to be limited to a special or customized meaning), and furthermore refers without limitation to any data associated with a device or sensor, such as audio, video, location, body motion, movement or displacement, digital communication (e.g., chat messages, emails, VOIP), computer interactions (e.g., mouse and keyboard dynamics, computer program or browser activities), or physiological data ( e.g., brain waves, pupillary behavior, galvanic skin response, heart rate or pulse)” (col. 6, ll. 4-11); “The resulting technical solution enables the system to respond to the different cognitive states of the user with responses that improve the performance of the human-machine team” (col. 7, ll. 25-28); “Also, by quantifying the cognitive state of the user, the human-machine team may more easily accommodate other features of a human-machine system. For example, in some embodiments of human-machine systems, data (other than and including interaction data) can be monitored, and modeled according to workflow task patterns, and analyzed to determine the task(s) being performed or to predict the task(s) that will be or should be performed by the team. This task data may be able to have baseline, expected, and/or threshold cognitive states associated with them so that the actual, real-time cognitive states of the user can be compared to those baselines/expectations/thresholds to determine whether certain responses are recommended” (col. 7, ll. 
39-51); and “As shown, a machine learning module at 469 may be included to take data gained from modules such as the cognitive indicators computation module, the cognitive indicators comparison module, the work pattern identification module, the work pattern comparison module, the work strategy identification module and the gap identification module to be used as input to a machine learning module at 469 to learn and update parameters in any of these modules” (col. 15, ll. 15-22). Processing chat messages using the machine learning module to compute cognitive indicators of humans, in Bruni, reads on the recited “processing a set of messages through a message processing machine learning algorithm to detect a cognitive load change associated with a member” limitation. While Bruni suggests human-to-human communications (e.g., chat messages, emails, etc.), such aspects of Bruni lack details. Chen discloses, “In a first aspect the invention provides a method for measuring a person's cognitive load, the method comprising the steps of: (a) receiving word based input produced by a person while performing a task; (b) identifying predetermined grammatical features of the word based input; and ( c) weighting and combining the identified grammatical features to provide a measure indicating the person's cognitive load” (p. 2), “The word based input of step (a) may be text input that is typed by the person when performing the task” (p. 3), “the interface can be part of a computer assisted human-human interaction system” (pp. 5 and 6), “The interaction system 202 which is comprised of system elements used to assist in the task. For example the hardware may be a computer having a display means such as a monitor that displays an interface. The interface can display a video conference screen of the teacher during a tutorial or questions that form a test. A receiver is also provided to receive word based input produced by a person while performing the task. 
For example, input means such as a keyboard to type answers to the question and a mouse to navigate the interface” (p. 6), and “Depending on the proximity of this measure to an optimized pre-set cognitive load target level for that task, the next task or system output or response is verified for appropriateness or changed. For example, in the case of a distance education tutorial, if the cognitive load is too low, this feedback may be provided in real time to the tutor who can then accelerate the progress of the tutorial. If the cognitive load is too high, the interface can be automatically programmed to minimise all open applications displayed to the user (i.e. graphs on display) and to only show the video display of the tutor” (p. 10). The word-based inputs associated with the person and the other person (e.g., the receiver or tutor) during the tutorial, in Chen, read on the recited “wherein the set of messages is exchanged between the member and a representative during an ongoing communications session” limitation. “... processing a set of active tasks associated with the member through an active task processing machine learning algorithm to determine a set of cognitive load contributions for the set of active tasks, wherein the set of active tasks is processed as a result of a signal from the message processing machine learning algorithm indicating the cognitive load change, and wherein the active task processing machine learning algorithm is trained using a dataset corresponding to sample cognitive load contributions for different tasks associated with different members; ...” - See the aspects of Bruni and Chen that have been referenced above. Bruni also discloses, “Examples of updating module parameters with machine learning techniques may include updating task identification parameters in the work pattern identification module by learning associations between series of workstation interactions.
Also, workflow identification parameters in the work strategy identification module may be updated by learning associations between tasks. Also, cognitive parameters like attentional focus in the cognitive indicators comparison module may be updated according to learned associations with specific tasks ( e.g., task A tends to involve long spans of time on the same document while task B tends to involve collating data from a variety of documents and websites into a summary report)” (col. 15, ll. 40-52); and “The InfoCog work pattern identification module has been implemented for a sample user with a pre-defined set of tasks within a "mission." The underlying task recognition model was trained with labeled data; model testing revealed that task recognition was 85% accurate (i.e., it identified the correct task from 13 possible tasks 85% of the time). The model was trained on features generated from user-workstation interactions ( e.g., document/site name, program type, interaction type, text typed or moused-over) and the list of tasks in the mission with Latent Dirichlet Allocation (LDA) Natural Language Processing (NLP) methods and K-Nearest Neighbor classifiers. The methods used for creating the InfoCog work pattern indicator module can be replicated for any user or set of tasks as long as labeled training data and a list of tasks is provided” (col. 23, ll. 1-15). 
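Bruni's task-recognition model is described as LDA-derived features fed to a K-Nearest Neighbor classifier over user-workstation interactions. As a rough illustration of the classifier half only, here is a toy nearest-neighbor task recognizer; the feature vectors and task labels are invented for this sketch and are not taken from Bruni:

```python
# Toy 1-nearest-neighbor task recognizer, loosely in the spirit of Bruni's
# work pattern identification module. Bruni's real features come from LDA
# over interaction data; these numbers are invented for illustration.
from math import dist

# (hypothetical) labeled examples: [minutes_on_same_doc, apps_open] -> task
training = [([30.0, 1.0], "read_report"),
            ([5.0, 6.0], "collate_data"),
            ([2.0, 2.0], "triage_email")]

def predict_task(features):
    # return the label of the closest training example (1-NN)
    return min(training, key=lambda ex: dist(ex[0], features))[1]

print(predict_task([25.0, 1.0]))  # read_report
print(predict_task([4.0, 5.0]))   # collate_data
```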
Computing cognitive indicators associated with tasks performed by the human using the machine learning modules, including the work pattern identification module, to determine cognitive indicators associated with performing the tasks, wherein the tasks are considered due to receiving inputs that the machine learning indicates as requiring cognition, wherein the model is trained on inputs corresponding to cognition for tasks performed by people, in Bruni, reads on the recited “processing a set of active tasks associated with the member through an active task processing machine learning algorithm to determine a set of cognitive load contributions for the set of active tasks, wherein the set of active tasks is processed as a result of a signal ... machine learning algorithm indicating the cognitive load change, and wherein the active task processing machine learning algorithm is trained using a dataset corresponding to sample cognitive load contributions for different tasks associated with different members” limitation. Chen also discloses, “Using Machine Learning, certain features could be weighted more heavily for particular users for particular task type, for example, if the machine learning indicates that these features provide better clues for the user's cognitive load fluctuations” (p. 12). The considering of text communications between humans by the machine learning, in Chen, reads on the recited “from the message processing machine learning algorithm” limitation. “... determining that a cognitive load associated with the member exceeds a threshold, wherein the cognitive load is calculated by aggregating the set of cognitive load contributions; ...” - See the aspects of Bruni and Chen that have been referenced above. Bruni also discloses, “multiple cognitive indicators may be considered in conjunction with one another” (col. 13, ll. 17-19); and “7. 
The processor based input system of claim 6 wherein if the cognitive measure exceeds a pre-defined threshold, the automation directive from the CAIS is an instruction for the processor based automated system to perform a task” (col. 27, ll. 37-40). Determining that the cognitive measure exceeds the pre-defined threshold, wherein the cognitive measure involves considering multiple cognitive indicators in conjunction with one another, in Bruni, reads on the recited limitation. “... dynamically migrating one or more active tasks to a suspended state according to the set of cognitive load contributions and a profile corresponding to the member; ...” - See the aspects of Bruni that have been referenced above. Bruni also discloses, “tailoring the prioritization of tasks for the user, queueing, presenting or prioritizing different information to the user, presenting the same information to the user but in a different format, issuing alerts, alarms or warnings to the user or to other entities in the system, pre-computing potential courses-of-actions that may be subsequently relevant to the user, or offloading some tasks through automation by the machine or through handoff to other team members (human or not)” (col. 7, ll. 30-38). Putting tasks in queue or handing them off, based on the cognitive measure and user characteristics, in Bruni, reads on the recited limitation. Chen discloses “measuring cognitive load” (Abstract), similar to the claimed invention and to Bruni. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the computing of cognitive indicators, of Bruni, to include consideration of text communications between people, as in Chen, because concentrating on grammatical features can provide an objective, non-intrusive measure of a person’s cognitive load, per Chen (see Abstract). 
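The limitations mapped above amount to: aggregate per-task cognitive load contributions, compare the total to a member-specific threshold, and suspend tasks until the total fits. A minimal sketch of that logic follows; the data shapes are assumptions, and suspending least-urgent tasks first borrows from dependent claim 7 rather than anything required by independent claim 1:

```python
# Illustrative sketch of the claimed throttling logic: aggregate per-task
# cognitive load contributions and, if the total exceeds the member's
# threshold, migrate tasks to a suspended state until it no longer does.
def throttle(active, threshold):
    """active: {task: (load_contribution, urgency)} -> set of suspended tasks."""
    suspended = set()
    total = sum(load for load, _ in active.values())
    # suspend least-urgent tasks first until the aggregate load fits
    for task in sorted(active, key=lambda t: active[t][1]):
        if total <= threshold:
            break
        total -= active[task][0]
        suspended.add(task)
    return suspended

# hypothetical member tasks: (cognitive load contribution, urgency)
tasks = {"book_flight": (0.4, 3), "plan_party": (0.5, 1), "pay_bill": (0.3, 2)}
print(throttle(tasks, threshold=0.8))  # {'plan_party'}
```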
The combination of Bruni, Chen, and Moore (hereinafter referred to as “Bruni/Chen/Moore”) teaches limitations below of independent claim 1 that do not appear to be taught in their entirety by Bruni/Chen: “... dynamically updating an active task interface associated with the member to remove a graphical representation of the one or more active tasks migrated to the suspended state; and ...” - Bruni discloses, “in a military intelligence system like the DCGS, the automated system 110 may include interfaces that communicate graphic or textual data to the human operator 108 for the human operator 108 to act on or interpret” (col. 8, ll. 51-55); “the directives formatting module 588 establishes how a visual is to be displayed on the user's interface, based on the selected directives and the parameters transmitted by the directives computation module 584: the directives formatting module 588 would decide on what icons to use, what size font to display, what visual elements to include and where they should be positioned in the interface” (col. 17, ll. 8-16); “an automation directive 588 may be a blinking alert message in the top right corner with a warning icon that tells the user to read a specific document which they have missed” (col. 17, ll. 38-40); and “The directives formatting module is an online interpretation service that transforms the directives computation module's actionable message into visual interventions in the graphical user interface display employed by the user. This module leverages a user interface component library, which is made up of standard messages and icons, at various levels of formatting (varying sizes, fonts, colors, boldness, etc.). This module assembles an intervention in the form of a visual alert by selecting, from the library, those user interface components that satisfy the actionable message (level of intrusiveness, level of magnitude, and target task)” (col. 24, ll. 25-35). 
Displaying visuals on user interfaces, in Bruni, reads on the recited “dynamically updating an active task interface associated with the member” limitation. Tasks being put in queue or handed off, in Bruni, reads on the recited “one or more active tasks migrated to the suspended state” limitation. Bruni does not, however, appear to connect the two concepts using the interface. Moore discloses, “If however at the step 310 it is detected that the user is attending to the previous notification, then at a step 340 a detection is made as to whether the new notification is already in a queue maintained (in the memory 170) by the notification controller 210. If the notification is already in the queue, then the process ends. If not, then the new notification is added to the queue at a step 350 and the process ends. In this way, the notification controller 210 can be configured to defer any further notifications while the user is attending to a current notification using the user interface, until the user has finished attending to the current notification. The notification controller can achieve this by generating and/or maintaining a queue comprising any further notifications while the user is attending to a current notification using the user interface, and by providing notifications relating to the queue once the user has finished attending to the current notification” (para. [0046]), and “In the case of social networking chat communication, then if the user is already attending to a notification about a communication from one Friend, the notification relating to another Friend could be allowed but in the form of "do you want to allow [Friend B] to join this chat session". If the user's answer is "no" then the notification is returned to the queue, and is later (when the user has finished attending to the current notification) notified, but in the context of a request by Friend B for a one to one communication with the user” (para. [0065]). 
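The deferral routine Moore describes in para. [0046] — defer a new notification to a queue while the user is attending to the current one, drop duplicates, and surface the next queued notification once the user finishes — can be sketched roughly as follows. The class and method names are illustrative, not drawn from Moore:

```python
from collections import deque

class NotificationController:
    """Sketch of the queue-and-defer behavior described in Moore
    para. [0046] (illustrative names, not from the reference)."""

    def __init__(self):
        self.queue = deque()
        self.current = None

    def notify(self, notification):
        if self.current is None:
            self.current = notification   # user is free: present it now
            return "presented"
        if notification in self.queue:    # already queued: do nothing
            return "duplicate"
        self.queue.append(notification)   # defer until user is free
        return "queued"

    def finish_current(self):
        """User finished attending; surface the next deferred notification."""
        self.current = self.queue.popleft() if self.queue else None
        return self.current
```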
Removing the notification from the user interface and putting the notification back into the queue, in Moore, reads on the recited “remove a graphical representation of the one or more active tasks migrated to the suspended state” limitation. “... updating the active task processing machine learning algorithm, wherein the active task processing machine learning algorithm is updated using the profile, the cognitive load, and an updated cognitive load resulting from migration of the one or more active tasks to the suspended state.” - See the aspects of Bruni that have been referenced above. Bruni also discloses, “supervised learning techniques (e.g., classification (support vector machine (SVM), boosted and bagged decision trees, k-nearest neighbor, Naive Bayes, discriminant analysis, logistic regression, and neural networks)” (col. 15, ll. 24-28). Use of supervised learning as the form of machine learning, involving optimizing the machine learning by making modifications to parameters based on cognition-indicating inputs, computations of cognitive states, and associated ground truths, as in Bruni, reads on the recited limitation. Moore discloses user notification management (see title), similar to the claimed invention and to Bruni/Chen. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the queueing or handing off of task notifications, in Bruni/Chen, to include removing such tasks from interfaces, as in Moore, to avoid distracting users with additional notifications while ensuring that the additional notifications are later addressed, per Moore (see para. [0065]). Regarding claim 4, Bruni/Chen/Moore teaches the following limitations: “The computer-implemented method of claim 1, further comprising: detecting completion of one or more remaining active tasks of the set of active tasks; ...” - See the aspects of Bruni that have been referenced above. 
Determining that the user has completed tasks, and preparing to bring one of the queued tasks up for performing, in Bruni, reads on the recited limitation. “... identifying a new cognitive load resulting from the completion of the one or more remaining active tasks; and ...” - See the aspects of Bruni that have been referenced above. Computing cognitive indicators, assessments, and states, as tasks are completed and other tasks are started from the queued state, in Bruni, reads on the recited limitation. “... migrating an active task from the suspended state to an active state, wherein the active task is migrated to the active state based on the new cognitive load.” - See the aspects of Bruni that have been referenced above. Moving tasks from the queue so they can be performed, based on the cognitive indicators, assessments, and states being within thresholds, in Bruni, reads on the recited limitation. Regarding claim 5, Bruni/Chen/Moore teaches the following limitations: “The computer-implemented method of claim 1, further comprising: dynamically generating one or more prompts for approval to migrate the one or more active tasks to the suspended state; and ...” - See the aspects of Moore that have been referenced above. Displaying notifications requesting approval for moving tasks to queues, in Moore, reads on the recited limitation. “... updating the active task interface to present the one or more prompts, wherein when the approval is obtained, the one or more active tasks are migrated to the suspended state.” - See the aspects of Moore that have been referenced above. Displaying queries about permissibility of notifications, including approval for moving the notifications to the queue, and then putting the notifications in the queue based on user responses to the queries, in Moore, reads on the recited limitation. The rationales for combining the cited references, from the rejection of independent claim 1, also apply here. 
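As a minimal sketch of the claim-4 behavior the examiner attributes to Bruni — recomputing cognitive load after task completion and migrating suspended tasks back to the active state while within the threshold — the following is one possible reading. The function name and smallest-first resumption order are assumptions made for the sketch:

```python
def resume_if_capacity(active_loads, suspended_loads, threshold):
    """Given per-task cognitive load contributions, move suspended
    tasks back to the active state while the total load stays within
    the threshold (illustrative policy: smallest contributions first).
    Returns the names of the resumed tasks."""
    load = sum(active_loads.values())
    resumed = []
    for name, contribution in sorted(suspended_loads.items(), key=lambda kv: kv[1]):
        if load + contribution > threshold:
            break  # sorted ascending, so no later task fits either
        load += contribution
        active_loads[name] = suspended_loads.pop(name)
        resumed.append(name)
    return resumed
```

For instance, with an active load of 0.2, suspended contributions of 0.3 and 0.5, and a threshold of 0.6, only the 0.3 task would be resumed.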
Regarding claim 6, Bruni/Chen/Moore teaches the following limitations: “The computer-implemented method of claim 1, further comprising: updating a representative console to present a notification in response to determining that the cognitive load associated with the member exceeds the threshold, wherein the notification includes an indication of the one or more active tasks selected for migration.” - See the aspects of Bruni and Moore that have been referenced above. Bruni also discloses, “a comparison would be a workload indicator value that has become higher than a reference value therefore yielding the likelihood of cognitive state overload for the user” (col. 13, ll. 10-14). Presenting a latter notification on the interface while the user is addressing a prior notification, wherein the latter notification indicates a task that can be moved to a queue, in Moore, when performed following cognitive indicator, state, and measure analyses establishing that cognitive overload could occur from a work strategy of presenting both tasks to the user, in Bruni, reads on the recited limitation. The rationales for combining the cited references, from the rejection of independent claim 1, also apply here. Regarding claim 7, Bruni/Chen/Moore teaches the following limitations: “The computer-implemented method of claim 1, wherein the one or more active tasks are selected based on a determination that a level of urgency for completion of the one or more active tasks is less than other active tasks of the set of active tasks.” - Bruni discloses, “Based on the user's current cognitive state, the urgency of the observed relevant information, and the current position within the mission, InfoCog evaluates the intervention need and, when determined to be timely, formats the information for best impact and sends it to the front-end interface to provide task-relevant proactive decision support” (col. 22, ll. 20-26). 
Providing task information based on the urgency of the tasks, in Bruni, reads on the recited limitation. Claims 8 and 11-14, while of different scope relative to claims 1 and 4-7, recite limitations similar to those recited by claims 1 and 4-7. As such, the rationales applied to reject claims 1 and 4-7 also apply for purposes of rejecting claims 8 and 11-14. Limitations recited by claims 8 and 11-14 that do not have a counterpart in claims 1 and 4-7, such as the hardware limitations at the beginning of independent claim 8, are disclosed or taught by Bruni (see col. 18, ll. 29-42). Claims 8 and 11-14 are, therefore, also rejected under 35 USC 103 as obvious in view of Bruni/Chen/Moore. Regarding claims 15 and 18-21, while the claims are of different scope relative to claims 1 and 4-7 and to claims 8 and 11-14, the claims recite limitations similar to those recited by claims 1, 4-8, and 11-14. As such, the rationales applied to reject claims 1, 4-8, and 11-14 also apply for purposes of rejecting claims 15 and 18-21. Claims 15 and 18-21 are, therefore, also rejected under 35 USC 103 as obvious in view of Bruni/Chen/Moore. Regarding claim 23, Bruni/Chen/Moore teaches the following limitations: “The computer-implemented method of claim 1, wherein: the one or more active tasks are associated with corresponding deadlines for completion of the one or more active tasks; and ...” - See the aspects of Bruni that have been referenced above. Bruni also discloses, “critical deadline” (col. 12, l. 22); “higher priority, time-sensitive task” (col. 14, l. 40); and “perceived urgency” (col. 22, l. 7). Tasks having critical deadlines, higher priorities, and greater perceived urgency, in Bruni, reads on the recited limitation. “... 
the computer-implemented method further comprises: automatically processing a set of remaining tasks through the active task processing machine learning algorithm to determine cognitive load contributions for the set of remaining tasks; and ...” - See the aspects of Bruni that have been referenced above. Computerized processing of tasks to be performed using the machine learning modules to compute cognitive indicators, measures, and states for the tasks, in Bruni, reads on the recited limitation. “... migrating an active task from the one or more active tasks to an active state based on the corresponding deadlines and as a result of a combination of the cognitive load contributions and a cognitive load contribution associated with the active task not exceeding the threshold.” - See the aspects of Bruni that have been referenced above. Handing tasks off to others for performance by critical deadlines, based on the computation of cognitive indicators, measures, and states of the task being within cognitive thresholds when performed by the others, in Bruni, reads on the recited limitation. Regarding claim 25, while the claim is of different scope relative to claim 23, the claim recites limitations similar to those recited by claim 23. As such, the rationales applied to reject claim 23 also apply for purposes of rejecting claim 25. Claim 25 is, therefore, also rejected under 35 USC 103 as obvious in view of Bruni/Chen/Moore. Regarding claim 27, while the claim is of different scope relative to claims 23 and 25, the claim recites limitations similar to those recited by claims 23 and 25. As such, the rationales applied to reject claims 23 and 25 also apply for purposes of rejecting claim 27. Claim 27 is, therefore, also rejected under 35 USC 103 as obvious in view of Bruni/Chen/Moore. Claims 22, 24, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Bruni, in view of Chen, further in view of Moore, and further in view of U.S. Pat. App. Pub. No. 
2022/0335362 A1 to Nikain et al. (hereinafter referred to as “Nikain”). Regarding claim 22, the combination of Bruni, Chen, Moore, and Nikain (hereinafter referred to as “Bruni/Chen/Moore/Nikain”) teaches limitations below that do not appear to be taught in their entirety by Bruni/Chen/Moore: “The computer-implemented method of claim 1, further comprising: updating a pending task interface to add the graphical representation of the one or more active tasks migrated to the suspended state, wherein the pending task interface and the active task interface are distinct.” - See the aspects of Bruni that have been referenced above. While Bruni discloses user interfaces, some details about the user interfaces appear to be lacking. Nikain discloses, “The rectangular graphical objects for the Up Next state 204, the Work In Progress state 206, the On Hold state 208 and the Done state 210 form state graphical representations on the GUI 200. Similarly, the rectangular graphical objects within the state graphical representations, including for example, CvaaS Validation task 212, the Final Physical Design task 216 and the Initial Physical Design task 220 form task graphical representations of each respective task” (para. [0051] and FIG. 2A). Updating the rectangular graphical objects so they depict tasks in the on hold state, as opposed to other states, and wherein the rectangular graphical objects of the different states are separate (distinct) from each other, in Nikain, reads on the recited limitation. Nikain discloses “business process management visualization” (para. [0001]), similar to the claimed invention and to Bruni/Chen/Moore. 
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the user interfaces depicting tasks and notifications thereof, of Bruni/Chen/Moore, to show the tasks and notifications in their respective states visually, as in Nikain, to provide clear and quickly discerned visual statuses of tasks, per Nikain (see para. [0052]). Regarding claim 24, while the claim is of different scope relative to claim 22, the claim recites limitations similar to those recited by claim 22. As such, the rationales applied to reject claim 22 also apply for purposes of rejecting claim 24. Claim 24 is, therefore, also rejected under 35 USC 103 as obvious in view of Bruni/Chen/Moore/Nikain. Regarding claim 26, while the claim is of different scope relative to claim 22 and to claim 24, the claim recites limitations similar to those recited by claims 22 and 24. As such, the rationales applied to reject claims 22 and 24 also apply for purposes of rejecting claim 26. Claim 26 is, therefore, also rejected under 35 USC 103 as obvious in view of Bruni/Chen/Moore/Nikain. Response to Arguments The applicant’s remarks on pp. 11 and 12 of the Amendment, regarding the patentability of claims 1, 4-8, 11-15, and 18-27, have been considered but are moot in view of the modified grounds of rejection being applied in this Office Action (see the 35 USC 101 and 35 USC 103 sections above for a more detailed explanation of those grounds). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Such prior art includes the following: U.S. Pat. App. Pub. No. 2016/0224939 A1 to Chen et al. discloses, “Systems and methods for creating and sharing tasks over one or more networks are disclosed. In one embodiment, a computer-implemented method for managing tasks over one or more computer systems is disclosed. 
A task can be stored in a memory device of the one or more computer systems. An e-mail address can be assigned to the task. An e-mail message to the e-mail address can be received from a user. The e-mail message or content from the e-mail message can be associated with the task in the one or more computer systems.” (Chen, Abstract.) U.S. Pat. App. Pub. No. 2018/0324122 A1 to Schwartz discloses, “Intelligent application notification management is provided. A state machine on a communication device is used to retain, sequence, and handle notifications included in a notification queue. It is determined whether a new notification has been received and whether the new notification can be added to a notification queue based on a maximum number of notifications. It may be determined whether the new notification is a duplicate of an existing notification in the notification queue. Notifications in the queue may be combined, reordered, and altered. The notification queue may be modified based on one or more of the following: an attribute, user input, user preference, a system state, or whether an application to which the notification is related is currently active. The notification management system may therefore decide which of the notifications to display and when to display them, such that notifications are presented logically and a user is not overwhelmed.” (Schwartz, Abstract.) Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS Y. HO, whose telephone number is (571)270-7918. The examiner can normally be reached Monday through Friday, 9:30 AM to 5:30 PM Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /THOMAS YIH HO/Primary Examiner, Art Unit 3624

Prosecution Timeline

Aug 31, 2022
Application Filed
May 31, 2025
Non-Final Rejection — §101, §103
Dec 03, 2025
Response Filed
Feb 07, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572893
DECISION SUPPORT SYSTEM OF INDUSTRIAL COPPER PROCUREMENT
2y 5m to grant · Granted Mar 10, 2026
Patent 12456126
SYSTEMS AND PROCESSES THAT AUGMENT TRANSPARENCY OF TRANSACTION DATA
2y 5m to grant · Granted Oct 28, 2025
Patent 12406215
SCALABLE EVALUATION OF THE EXISTENCE OF ONE OR MORE CONDITIONS BASED ON APPLICATION OF ONE OR MORE EVALUATION TIERS
2y 5m to grant · Granted Sep 02, 2025
Patent 12393902
CONTINUOUS AND ANONYMOUS RISK EVALUATION
2y 5m to grant · Granted Aug 19, 2025
Patent 12367438
Parallelized and Modular Planning Systems and Methods for Orchestrated Control of Different Actors
2y 5m to grant · Granted Jul 22, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
15%
Grant Probability
47%
With Interview (+31.7%)
3y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 175 resolved cases by this examiner. Grant probability derived from career allow rate.
