Prosecution Insights
Last updated: April 19, 2026
Application No. 17/592,196

SYSTEM AND METHOD FOR HETEROGENEOUS MULTI-TASK LEARNING WITH EXPERT DIVERSITY

Status: Final Rejection (§101, §103, §112)
Filed: Feb 03, 2022
Examiner: PATEL, LOKESHA G
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Royal Bank of Canada
OA Round: 2 (Final)

Predictions: 76% grant probability (favorable) • 3-4 OA rounds expected • 4y 5m to grant • 99% with interview
Examiner Intelligence

Career allow rate: 76% (56 granted / 74 resolved) • above average • +20.7% vs TC avg
Interview lift: strong, +38.0% (resolved cases with vs. without interview)
Typical timeline: 4y 5m avg prosecution • 20 currently pending
Career history: 94 total applications across all art units

Statute-Specific Performance

§101: 29.5% (-10.5% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 74 resolved cases

Office Action (§101, §103, §112)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The present application was filed on 02/03/2022 and claims priority to provisional application No. 63/145,260 (filed on 02/03/2021). This action is in response to the amendments and remarks filed on 12/15/2025. In the current amendments, claims 1 and 11-20 have been amended, no claims were canceled, and no claims were added. Claims 1-20 are pending and have been examined. In view of the current amendments, the rejection under 35 U.S.C. 112(b) with respect to claims 11-20 has been withdrawn. Claims 1 and 11 are the independent claims.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.

The term “exclusively” in claims 1 and 11 is a relative term which renders the claims indefinite. The term “exclusively” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The specification discloses “an exclusive expert is randomly assigned to one of the tasks” (see specification para [0046]).
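The allocation described in para [0046] as quoted above can be illustrated with a minimal sketch: an exclusive expert is randomly assigned to exactly one task, while shared experts stay visible to every task. All names, counts, and the mask structure below are hypothetical, chosen only to illustrate the quoted passage, not the applicant's implementation.

```python
import random

# Hypothetical sizes for illustration only.
NUM_TASKS = 3
NUM_SHARED_EXPERTS = 4

# Per the quoted para [0046]: the exclusive expert is randomly
# assigned to one of the tasks before training.
random.seed(0)
exclusive_owner = random.randrange(NUM_TASKS)

# mask[task][expert] is True when the task's gate may use that expert.
# Expert 0 is the exclusive expert; experts 1..NUM_SHARED_EXPERTS are
# shared and therefore visible to all tasks.
mask = [
    [(expert == 0 and task == exclusive_owner) or expert > 0
     for expert in range(NUM_SHARED_EXPERTS + 1)]
    for task in range(NUM_TASKS)
]

# "Exclusively" connected here means expert 0 appears in exactly one
# task's mask and is never shared with any other task.
assert sum(row[0] for row in mask) == 1
```

On this reading, exclusivity is a structural property of the connectivity mask fixed before training, not a degree that varies during training.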
For examination purposes, the examiner has interpreted “exclusively” as meaning that the expert model is allocated to only one task and is not shared with any other task.

Claims 2-10 depend on claim 1 and do not cure the deficiencies of claim 1; therefore, claims 2-10 are rejected under the same rationale. Claims 12-20 depend on claim 11 and do not cure the deficiencies of claim 11; therefore, claims 12-20 are rejected under the same rationale.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1:

Step 1: Claim 1 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitations: “determine a loss following a forward pass” - in the context of the claim limitation, this encompasses a mathematical concept of calculating a loss; and “back propagate losses and update weight parameters for the expert models and the gate functions” - in the context of the claim limitation, this encompasses a mental process of evaluating weight parameters based on losses.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “at least one processor”, “a memory comprising instructions which, when executed by the processor”, and “the multi-task learning network”; these are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (i.e., generic computer components performing generic computer functions): a generic computer programmed with generically-recited AI algorithms (or using the multi-task learning network recited at a high level of generality). See MPEP 2106.05(f). The additional elements of “one exclusive expert model which is exclusively connected to the at least one task before training, and at least one shared expert model accessible by the plurality of tasks” can be considered as generally linking the use of the judicial exception to a particular technological environment or field of use. See MPEP 2106.05(h). The claim also recites “assign expert models to each task…at least one task assigned”, “initialize weight parameters in the expert models and in gate functions”, “provide training inputs…”, and “store a final set of weight parameters for use in a trained model for multiple tasks”, which recite insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are directed to mere instructions to apply the judicial exception, which do not amount to significantly more. See MPEP 2106.05(f). Furthermore, the recitations of “assign…”, “initialize…”, and “provide…” are directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitations are directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). The recitation of “store…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to storing. See MPEP 2106.05(d)(II) (“Storing and retrieving information in memory”). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 2:

Step 1: Claim 2 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “provide input to the trained model to perform the multiple tasks”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “provide…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 3:

Step 1: Claim 3 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.
Step 2A Prong 1: Please see the analysis of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein each expert model comprises one or more neural networks layers”; this is a mere instruction to apply the judicial exception using a generic computer programmed with generic computer equipment. See MPEP 2106.05(f). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is directed to a mere instruction to apply the judicial exception, which does not amount to significantly more. See MPEP 2106.05(f). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 4:

Step 1: Claim 4 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 3.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “temporal data is provided as input and the expert models comprise recurrent layers; or non-temporal data is provided as input and the expert models comprise dense layers”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “temporal data…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 5:

Step 1: Claim 5 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “wherein the gate functions comprise an exclusivity mechanism for setting expert models to be exclusively connected to one task”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “wherein the gate functions…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 6:

Step 1: Claim 6 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “wherein the gate functions comprise an exclusion mechanism for setting expert models to be connected such that they are excluded from some tasks”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “wherein the gate functions…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 7:

Step 1: Claim 7 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitation “wherein the steps for each task are repeated for different inputs until a stopping criterion is satisfied” - in the context of the claim limitation, this encompasses a mental process of evaluating the steps of a task until a criterion is satisfied.

Step 2A Prong 2: Please see the analysis of claim 1.

Step 2B Analysis: Please see the analysis of claim 1.

Claim 8:

Step 1: Claim 8 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitation “perform a two-step optimization to balance the tasks on a gradient level” - in the context of the claim limitation, this encompasses a mathematical concept of calculating an optimization on a gradient level.

Step 2A Prong 2: Please see the analysis of claim 1.

Step 2B Analysis: Please see the analysis of claim 1.

Claim 9:

Step 1: Claim 9 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitation “wherein the two-step optimization comprises a modified model-agnostic meta-learning where task specific layers are not frozen during an intermediate update” - in the context of the claim limitation, this encompasses a mental process of evaluating an optimization using updated parameters.

Step 2A Prong 2: Please see the analysis of claim 8.

Step 2B Analysis: Please see the analysis of claim 8.

Claim 10:

Step 1: Claim 10 is directed to a system for training a heterogeneous multi-task learning network, which is a machine, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim further recites “wherein at least one individual expert model comprises another multi-task learning network”; this is a mere instruction to apply the judicial exception using a generic computer programmed with generic computer equipment. See MPEP 2106.05(f). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is directed to a mere instruction to apply the judicial exception, which does not amount to significantly more. See MPEP 2106.05(f). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 11:

Step 1: Claim 11 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitations: “determining a loss following a forward pass” - in the context of the claim limitation, this encompasses a mathematical concept of calculating a loss; and “back propagating losses and update weight parameters for the expert models and the gate functions” - in the context of the claim limitation, this encompasses a mental process of evaluating weight parameters based on losses.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “the multi-task learning network”; this is a mere instruction to apply the judicial exception using a generic computer programmed with generic computer equipment. See MPEP 2106.05(f). The claim also recites “assigning expert models to each task…at least one task assigned”, “initializing weight parameters in the expert models and in gate functions”, “providing training inputs…”, and “storing a final set of weight parameters for use in a trained model for multiple tasks”, which recite insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). The additional elements of “one exclusive expert model which is exclusively connected to the at least one task before training, and at least one shared expert model accessible by the plurality of tasks” can be considered as generally linking the use of the judicial exception to a particular technological environment or field of use. See MPEP 2106.05(h). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are directed to mere instructions to apply the judicial exception, which do not amount to significantly more. See MPEP 2106.05(f). Furthermore, the recitations of “assigning…”, “initializing…”, and “providing…” are directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitations are directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). The recitation of “storing…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to storing. See MPEP 2106.05(d)(II) (“Storing and retrieving information in memory”). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 12:

Step 1: Claim 12 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 11.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “providing input to the trained model to perform the multiple tasks”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “providing…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
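The training steps recited in independent claims 1 and 11 (initialize weights, provide inputs, forward pass, determine loss, back propagate, update experts and gates) can be sketched with a minimal mixture-of-experts example. Everything here is an assumption for illustration: scalar linear experts, a single task, a squared-error loss, and hand-derived gradients stand in for the claimed architecture.

```python
import math
import random

random.seed(1)

NUM_EXPERTS = 3
LR = 0.05

# Initialize weight parameters in the expert models and the gate functions.
expert_w = [random.uniform(-1, 1) for _ in range(NUM_EXPERTS)]
gate_logits = [0.0] * NUM_EXPERTS

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def forward(x):
    """Forward pass: the gate mixes the experts' outputs."""
    p = softmax(gate_logits)
    e = [w * x for w in expert_w]
    return sum(pi * ei for pi, ei in zip(p, e)), p, e

# Training inputs: learn y = 2x from a few samples (made-up data).
data = [(x, 2.0 * x) for x in (-1.0, 0.5, 1.0, 2.0)]

for _ in range(200):
    for x, target in data:
        y, p, e = forward(x)
        err = y - target  # loss = (y - target)**2
        # Back propagate: analytic gradients for experts and gate logits,
        # then update both sets of weight parameters.
        for k in range(NUM_EXPERTS):
            expert_w[k] -= LR * 2 * err * p[k] * x
            gate_logits[k] -= LR * 2 * err * p[k] * (e[k] - y)

# The stored final weights now make the mixture approximate y = 2x.
y_hat, _, _ = forward(3.0)
assert abs(y_hat - 6.0) < 0.1
```

The gate gradient uses the softmax identity dY/dg_k = p_k(e_k - y), so both the experts and the gate are updated from the same backpropagated error, matching the recited "update weight parameters for the expert models and the gate functions".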
Claim 13:

Step 1: Claim 13 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 11.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein each expert model comprises one or more neural networks layers”; this is a mere instruction to apply the judicial exception using a generic computer programmed with generic computer equipment. See MPEP 2106.05(f). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is directed to a mere instruction to apply the judicial exception, which does not amount to significantly more. See MPEP 2106.05(f). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 14:

Step 1: Claim 14 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 13.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “temporal data is provided as input and the expert models comprise recurrent layers; or non-temporal data is provided as input and the expert models comprise dense layers”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “temporal data…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 15:

Step 1: Claim 15 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 11.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “wherein the gate functions comprise an exclusivity mechanism for setting expert models to be exclusively connected to one task”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “wherein the gate functions…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 16:

Step 1: Claim 16 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 11.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites “wherein the gate functions comprise an exclusion mechanism for setting expert models to be connected such that they are excluded from some tasks”, which recites insignificant extra-solution activities of mere data gathering and output. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “wherein the gate functions…” is directed to insignificant extra-solution activity that is well-known, routine, and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 17:

Step 1: Claim 17 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitation “wherein the steps for each task are repeated for different inputs until a stopping criterion is satisfied” - in the context of the claim limitation, this encompasses a mental process of evaluating the steps of a task until a criterion is satisfied.

Step 2A Prong 2: Please see the analysis of claim 11.

Step 2B Analysis: Please see the analysis of claim 11.

Claim 18:

Step 1: Claim 18 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitation “perform a two-step optimization to balance the tasks on a gradient level” - in the context of the claim limitation, this encompasses a mathematical concept of calculating an optimization on a gradient level.

Step 2A Prong 2: Please see the analysis of claim 11.

Step 2B Analysis: Please see the analysis of claim 11.

Claim 19:

Step 1: Claim 19 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: The claim recites the limitation “wherein the two-step optimization comprises a modified model-agnostic meta-learning where task specific layers are not frozen during an intermediate update” - in the context of the claim limitation, this encompasses a mental process of evaluating an optimization using updated parameters.

Step 2A Prong 2: Please see the analysis of claim 18.

Step 2B Analysis: Please see the analysis of claim 18.
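The two-step optimization recited in claims 8-9 and 18-19 can be sketched as a MAML-style double update. This is a heavily hedged illustration: the claims only state that task-specific layers are not frozen during the intermediate update, so the loss, parameter shapes, learning rates, and update order below are all assumptions.

```python
INNER_LR = 0.1
OUTER_LR = 0.1

# One shared parameter plus one task-specific parameter per task
# (a deliberately tiny stand-in for shared experts vs. task heads).
shared = 0.0
task_specific = [0.0, 0.0]

def loss_grad(task, s, h):
    """Gradient of an illustrative per-task loss L_t = (s + h - target_t)**2."""
    targets = [1.0, -1.0]
    g = 2 * (s + h - targets[task])
    return g, g  # d/ds and d/dh happen to coincide for this toy loss

for _ in range(100):
    for task in (0, 1):
        # Step 1 (intermediate update): unlike MAML variants that freeze
        # the heads, the task-specific layer is updated here as well.
        gs, gh = loss_grad(task, shared, task_specific[task])
        s_tmp = shared - INNER_LR * gs
        h_tmp = task_specific[task] - INNER_LR * gh
        # Step 2: re-evaluate the gradient at the intermediate point and
        # commit the final update to the real parameters.
        gs2, gh2 = loss_grad(task, s_tmp, h_tmp)
        shared -= OUTER_LR * gs2
        task_specific[task] -= OUTER_LR * gh2

# Each task fits its own target through the shared + task-specific sum.
assert abs(shared + task_specific[0] - 1.0) < 0.05
assert abs(shared + task_specific[1] + 1.0) < 0.05
```

Evaluating the second gradient after a look-ahead step is what makes this a two-step optimization on the gradient level rather than plain alternating SGD.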
Claim 20:

Step 1: Claim 20 is directed to a computer-implemented method of training a heterogeneous multi-task learning network, which is a process, one of the statutory categories.

Step 2A Prong 1: Please see the analysis of claim 11.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein at least one individual expert model comprises another multi-task learning network”; this is a mere instruction to apply the judicial exception using a generic computer programmed with generic computer equipment. See MPEP 2106.05(f). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is directed to a mere instruction to apply the judicial exception, which does not amount to significantly more. See MPEP 2106.05(f). Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma (“Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts”) in view of Simard (US10671908B2) and further in view of Tang (“Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations”).

Claim 1.
Ma teaches a system for training a…multi-task learning network, the system comprising: at least one processor; and a memory comprising instructions which, when executed by the processor, configure the processor to (6.1 Baseline Methods & Page 1935 “All of them are trained together via standard backpropagation” and 3.3 Impact of Task Relatedness & Page 1933 “The model is implemented using TensorFlow [1] and trained using Adam optimizer [25] with the default setting” and Figure 1(c) teach that the MMoE model is implemented using TensorFlow and trained via backpropagation; running backpropagation corresponds to using the processor to run the algorithm, and MMoE corresponds to the heterogeneous multi-task learning network): assign expert models to each task in the multi-task learning network, at least one task assigned…and at least one shared expert model accessible by the plurality of tasks (4.2 Multi-gate Mixture-of-Experts & Page 1934 “The new model is called Multi-gate Mixture-of-Experts (MMoE) model, where the key idea is to substitute the shared bottom network f in Eq 1 with the MoE layer in Eq 5. More importantly, we add a separate gating network g^k for each task k… Each gating network can learn to “select” a subset of experts to use conditioned on the input example” teaches that each task selects a different combination of experts, i.e., an expert may be used by only one task or shared across multiple tasks); for each task: initialize weight parameters in the expert models and in gate functions (4.2 Multi-gate Mixture-of-Experts & Page 1934 “Each gating network can learn to “select” a subset of experts to use conditioned on the input example.
This is desirable for a flexible parameter sharing in the multi-task learning situation” and 6.3 Census-income Data & Page 1936 “we train each method on training dataset 400 times with random parameter initialization and report the results on the test dataset” teaches initializing weight parameters for both the expert models and the gate functions); provide training inputs to the multi-task learning network (ABSTRACT & Page 1930 “We also show that the MMoE structure results in an additional trainability benefit, depending on different levels of randomness in the training data and model initialization” teaches providing training inputs to the MMoE network); determine a loss following a forward pass over the multi-task learning network (6.4 Large-scale Content Recommendation & Page 1937 “For the Shared-Bottom model, we implement the shared bottom network as a feed forward neural network with several fully-connected layers with ReLU activation” and 5.2 Trainability & Page 1935 “it’s worth to observe that the lowest losses of all the three models are comparable” teaches performing a forward pass and computing losses for each training epoch); and back propagate losses and update weight parameters for the expert models and the gate functions (6.4 Large-scale Content Recommendation & Page 1937 “All models are optimized using mini-batch Stochastic Gradient Descent (SGD) with batch size 1024” and 6.1 Baseline Methods & Page 1935 “r size n and task number k, the weights W, which is a m × n × k tensor, is derived from the following equation: W = Σ_{i1}^{r1} Σ_{i2}^{r2} Σ_{i3}^{r3} S(i1,i2,i3) · U1(:,i1) ◦ U2(:,i2) ◦ U3(:,i3), where tensor S of size r1 × r2 × r3, matrix U1 of size m × r1, U2 of size n × r2, and U3 of size k × r3 are trainable parameters. All of them are trained together via standard backpropagation.
r1, r2 and r3 are hyper-parameters” teaches that gradients are used to update the parameters of the expert and gate functions, corresponding to the loss function updating the parameters); and store a final set of weight parameters for use in a trained model for multiple tasks (6.4 Large-scale Content Recommendation & Page 1937 “We show the results after training 2 million steps (10 billion examples with batch size 1024), 4 million steps and 6 million steps. MMoE outperforms other models in terms of both metrics. L2- Constrained and Cross-Stitch are worse than the Shared-Bottom model” teaches evaluating the model after training, corresponding to storing the final parameters). Ma does not explicitly teach a heterogeneous multi-task learning network. However, Wang teaches a heterogeneous multi-task learning network (Proposed Model & Page 980 “To solve heterogeneous multi-task learning from a generative model perspective, a natural solution is to model multiple generative processes, one for each task” teaches heterogeneous multi-task learning). Ma and Wang are analogous art because they are both directed to models that target multi-task setups and are optimized for task-specific losses. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Wang into the disclosed invention of Ma. One of ordinary skill in the art would have been motivated to make this modification because the combination would learn heterogeneous tasks within a framework that is “easily extended to new tasks by specifying the corresponding generative processes” and would “better leverage information across tasks, and achieve state-of-the-art results on clinical topic modeling, procedure recommendation, and admission-type prediction” (Wang, Page 986, Conclusions). Ma in view of Wang does not explicitly teach one exclusive expert model which is exclusively connected to the at least one task before training.
However, Tang teaches one exclusive expert model which is exclusively¹ connected to the at least one task before training (Figure 4 and 5, Figure 1(e), 4 PROGRESSIVE LAYERED EXTRACTION & Page 273 “we propose a Progressive Layered Extraction (PLE) model with a novel sharing structure design in this section. First, a Customized Gate Control (CGC) model that explicitly separates shared and task-specific experts is proposed. Second, CGC is extended to a generalized PLE model with multi-level gating networks and progressive separation routing for more efficient information sharing and joint learning” teaches a CGC model in which a customized, exclusive task-specific expert is exclusively connected to one task before training). Ma, Wang, and Tang are analogous art because they are each directed to models that target multi-task setups and are optimized for task-specific losses. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Tang into the disclosed invention of Ma in view of Wang. One of ordinary skill in the art would have been motivated to make this modification because, unlike MMoE, which treats all experts equally, PLE separates task-common and task-specific experts with “progressive” routing for “significant improvement”, and CGC further reduces cross-task interference and dynamically balances tasks, thereby enhancing the overall performance of the MMoE system (Tang, Page 271, 2.2 Multi-Task Learning in Recommender Systems and Page 273, 4.1 Customized Gate Control). Claim 2. Ma in view of Wang further in view of Tang teaches the system as claimed in claim 1, and Ma further teaches wherein the at least one processor is configured to provide input to the trained model to perform the multiple tasks (4.2 Multi-gate Mixture-of-Experts & Page 1934 “Each gating network can learn to “select” a subset of experts to use conditioned on the input example.
This is desirable for flexible parameter sharing in the multi-task learning situation” and 5.1 Performance on Data with Different Task Correlations & Page 1934 “All the models are trained with the Adam optimizer and the learning rate is grid searched from [0.0001, 0.001, 0.01]. For each model-correlation pair setting, we have 200 runs with independent random data generation and model initialization. The average results are shown in figure 4” teaches performing multiple tasks using the trained model). Claim 3. Ma in view of Wang further in view of Tang teaches the system as claimed in claim 1, and Ma further teaches wherein each expert model comprises one or more neural network layers (1 INTRODUCTION & Page 1931 “In our paper, each expert is a feed-forward network” teaches that each expert comprises a feed-forward neural network). Claim 4. Ma in view of Wang further in view of Tang teaches the system as claimed in claim 3, and Ma teaches the alternative in which non-temporal data is provided as input and the expert models comprise dense layers (1 INTRODUCTION & Page 1931 “In our paper, each expert is a feed-forward network. We then introduce a gating network for each task” and 4.2 Multi-gate Mixture-of-Experts & Page 1934 “we add a separate gating network g^k for each task k. More precisely, the output of task k is y^k = h^k(f^k(x)) (Eq. 6), where f^k(x) = Σ_{i=1}^{n} g^k(x)_i f_i(x)” and 6.4 Large-scale Content Recommendation & Page 1937 “we conduct experiments on a large-scale content recommendation system in Google Inc., where the recommendations are generated from hundreds of millions of unique items for billions of users. Specifically, given a user’s current behavior of consuming an item, this recommendation system targets at showing the user a list of relevant items to consume next” teaches that each expert is a feed-forward (dense) neural network and that the system uses no sequence or timestamp data; the user and context features correspond to non-temporal data).
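For orientation only, the per-task expert mixture quoted above from Ma (Eq. 6) can be sketched in a few lines. All dimensions, parameter names, and the linear towers below are assumptions made for illustration; this is neither the claimed system nor Ma's actual implementation.

```python
# Minimal sketch (assumed shapes/names) of Ma's Eq. 6:
#   y^k = h^k(f^k(x)),  f^k(x) = sum_i g^k(x)_i * f_i(x)
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_hid, n_tasks = 3, 4, 5, 2

# Each expert f_i is a small feed-forward layer (Ma: "each expert is a
# feed-forward network").
W_exp = rng.normal(size=(n_experts, d_in, d_hid))
# One gating network g^k per task (Ma: "a separate gating network for each task k").
W_gate = rng.normal(size=(n_tasks, d_in, n_experts))
# Task-specific towers h^k, here just linear maps to a scalar (an assumption).
W_tower = rng.normal(size=(n_tasks, d_hid))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mmoe_forward(x):
    """Return one output per task from shared experts and per-task gates."""
    experts = np.tanh(np.einsum("d,edh->eh", x, W_exp))  # f_i(x) for every expert
    ys = []
    for k in range(n_tasks):
        g = softmax(x @ W_gate[k])          # g^k(x): a distribution over experts
        f_k = g @ experts                   # f^k(x) = sum_i g^k(x)_i f_i(x)
        ys.append(float(f_k @ W_tower[k]))  # y^k = h^k(f^k(x))
    return ys
```

Because each task has its own softmax gate, two tasks can weight the same expert pool very differently, which is the "flexible parameter sharing" the examiner cites.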
Wang further teaches wherein one of: temporal data is provided as input and the expert models comprise recurrent layers (Figure 1 “The shared GCN (fφ) learns embeddings for ICD codes and admissions, and the embeddings pass through task-specific VAEs” teaches a model using a graph-based encoder, VAEs, and multi-task decoders, which is directed to using recurrent layers, and Experiments & Page 983 “We test our method (GD-VAE) on the MIMIC-III dataset (Johnson et al. 2016), which contains more than 58,000 hospital admissions with 14,567 disease ICD codes and 3,882 procedures ICD codes. For each admission, it consists of a set of disease and procedure ICD codes. Three subsets of the MIMIC-III data are considered, with summary statistics in Table 2” teaches patient visit records, diagnoses, and procedures, which is directed to long-term patient data over time (corresponding to temporal data)). Ma and Wang are analogous art because they are both directed to models that target multi-task setups and are optimized for task-specific losses. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Wang into the disclosed invention of Ma. One of ordinary skill in the art would have been motivated to make this modification because the combination would learn heterogeneous tasks within a framework that is “easily extended to new tasks by specifying the corresponding generative processes” and would “better leverage information across tasks, and achieve state-of-the-art results on clinical topic modeling, procedure recommendation, and admission-type prediction” (Wang, Page 986, Conclusions). Claim 5.
Ma in view of Wang further in view of Tang teaches the system as claimed in claim 1, and Ma further teaches wherein the gate functions comprise an exclusivity mechanism for setting expert models to be exclusively connected to one task (4.2 Multi-gate Mixture-of-Experts & Page 1934 “if only one expert with the highest gate score is selected, each gating network actually linearly separates the input space into n regions with each region corresponding to an expert… To understand how introducing separate gating network for each task can help the model learn task-specific information” teaches that each task has its own gate, which is directed to setting expert models to be connected to one task). Claim 6. Ma in view of Wang further in view of Tang teaches the system as claimed in claim 1, and Ma further teaches wherein the gate functions comprise an exclusion mechanism for setting expert models to be connected such that they are excluded from some tasks (1 INTRODUCTION & Page 1931 “In our paper, each expert is a feed-forward network. We then introduce a gating network for each task. The gating networks take the input features and output softmax gates assembling the experts with different weights, allowing different tasks to utilize experts differently” and 4.2 Multi-gate Mixture-of-Experts & Page 1934 “Each gating network can learn to “select” a subset of experts to use conditioned on the input example. This is desirable for a flexible parameter sharing in the multi-task learning situation” teaches that the gating network “selects” experts to use conditioned on the input example, corresponding to excluding expert models from some tasks). Claim 7.
Ma in view of Wang further in view of Tang teaches the system as claimed in claim 1, and Ma further teaches wherein the steps for each task are repeated for different inputs until a stopping criterion is satisfied (3.3 Impact of Task Relatedness & Page 1933 “repeat step (1) and (2) hundreds of times with datasets generated independently but control the list of task correlation scores and the hyper-parameters the same” teaches that the steps for each task are repeated over the dataset hundreds of times (corresponding to a stopping criterion being satisfied)). Claim 8. Ma in view of Wang further in view of Tang teaches the system as claimed in claim 1, and Ma further teaches optimization to balance the tasks on a gradient level (6.4 Large-scale Content Recommendation & Page 1937 “All models are optimized using mini-batch Stochastic Gradient Descent (SGD) with batch size 1024” teaches optimization at the gradient level). Wang further teaches wherein the at least one processor is configured to perform a two-step optimization to balance the tasks (Introduction & Page 979 “The GCN serves as a generator of latent representations for the sub-graphs, while the VAEs are specified to address the different tasks. The model is then optimized jointly over the objectives for all tasks to encourage the GCN to produce representations that can be used simultaneously by all of them” and Topic modeling & Page 985 “Compared with only performing topic modeling, i.e., GD-VAE (T), considering more tasks brings improvements, and the proposed GD-VAE achieves the best performance” teaches a two-step arrangement in which the GCN and the VAEs are trained and optimized as the first and second steps). Ma and Wang are analogous art because they are both directed to models that target multi-task setups and are optimized for task-specific losses.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Wang into the disclosed invention of Ma. One of ordinary skill in the art would have been motivated to make this modification because the combination would learn heterogeneous tasks within a framework that is “easily extended to new tasks by specifying the corresponding generative processes” and would “better leverage information across tasks, and achieve state-of-the-art results on clinical topic modeling, procedure recommendation, and admission-type prediction” (Wang, Page 986, Conclusions). Claim 9. Ma in view of Wang further in view of Tang teaches the system as claimed in claim 8, and Wang further teaches wherein the two-step optimization comprises a modified model-agnostic meta-learning where task specific layers are not frozen during an intermediate update (Figure 1 “Each task operates on a different sub-graph from the admission graph. The shared GCN (fφ) learns embeddings for ICD codes and admissions, and the embeddings pass through task-specific VAEs” and Introduction & Page 980 “At test time, the GCN is used to represent sub-graphs, i.e., collections of shared ICD codes, specialized admissions and their interactions, that feed into different task-specific VAEs. We test our model on the three tasks described above. Experimental results show that the jointly learned representation for the admission graph indeed improves the performance of all tasks relative to the individual task model” teaches updating the VAEs and the GCN, corresponding to a two-step optimization, in which both task modules are updated, corresponding to the task-specific layers not being frozen). Ma and Wang are analogous art because they are both directed to models that target multi-task setups and are optimized for task-specific losses.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Wang into the disclosed invention of Ma. One of ordinary skill in the art would have been motivated to make this modification because the combination would learn heterogeneous tasks within a framework that is “easily extended to new tasks by specifying the corresponding generative processes” and would “better leverage information across tasks, and achieve state-of-the-art results on clinical topic modeling, procedure recommendation, and admission-type prediction” (Wang, Page 986, Conclusions). Claim 10. Ma in view of Wang further in view of Tang teaches the system as claimed in claim 1, and Wang further teaches wherein at least one individual expert model comprises another multi-task learning network (Proposed Model & Page 980 “For the k-th task, pθk (·) represents a generative model (i.e., a stochastic decoder) with parameters θk, and p(zk) is the prior distribution for latent code zk. The corresponding inference network for zk consists of two parts: (i) a deterministic encoder fφ(·) shared across all tasks to encode each xk into xˆk = fφ(xk) independently; and (ii) an encoder with parameters ψk to stochastically map xˆk into latent code zk” teaches that the decoder (expert) generates multiple outputs (corresponding to an expert comprising multiple tasks)). Ma and Wang are analogous art because they are both directed to models that target multi-task setups and are optimized for task-specific losses. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Wang into the disclosed invention of Ma.
One of ordinary skill in the art would have been motivated to make this modification because the combination would learn heterogeneous tasks within a framework that is “easily extended to new tasks by specifying the corresponding generative processes” and would “better leverage information across tasks, and achieve state-of-the-art results on clinical topic modeling, procedure recommendation, and admission-type prediction” (Wang, Page 986, Conclusions). Claim 11. Claim 11 recites analogous limitations to claim 1. Therefore, claim 11 is rejected based on the same rationale as claim 1, discussed above. Claim 12. Claim 12 recites analogous limitations to claim 2. Therefore, claim 12 is rejected based on the same rationale as claim 2, discussed above. Claim 13. Claim 13 recites analogous limitations to claim 3. Therefore, claim 13 is rejected based on the same rationale as claim 3, discussed above. Claim 14. Claim 14 recites analogous limitations to claim 4. Therefore, claim 14 is rejected based on the same rationale as claim 4, discussed above. Claim 15. Claim 15 recites analogous limitations to claim 5. Therefore, claim 15 is rejected based on the same rationale as claim 5, discussed above. Claim 16. Claim 16 recites analogous limitations to claim 6. Therefore, claim 16 is rejected based on the same rationale as claim 6, discussed above. Claim 17. Claim 17 recites analogous limitations to claim 7. Therefore, claim 17 is rejected based on the same rationale as claim 7, discussed above. Claim 18. Claim 18 recites analogous limitations to claim 8. Therefore, claim 18 is rejected based on the same rationale as claim 8, discussed above. Claim 19. Claim 19 recites analogous limitations to claim 9. Therefore, claim 19 is rejected based on the same rationale as claim 9, discussed above. Claim 20. Claim 20 recites analogous limitations to claim 10. Therefore, claim 20 is rejected based on the same rationale as claim 10, discussed above.
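For orientation only, the disputed limitation can be contrasted with the shared-gating arrangement above by sketching the CGC-style routing the rejection attributes to Tang: shared experts are visible to every task's gate, while each exclusive (task-specific) expert is wired to exactly one task before training begins. The sizes and names below are assumptions for illustration, not the claimed system or Tang's implementation.

```python
# Minimal sketch (assumed shapes/names) of CGC-style routing:
# two shared experts visible to all tasks, plus one exclusive expert per task.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 4, 6
n_shared = 2
tasks = ("A", "B")

shared_experts = [rng.normal(size=(d_in, d_hid)) for _ in range(n_shared)]
# One exclusive expert per task; task B's gate never sees task A's expert.
exclusive_expert = {t: rng.normal(size=(d_in, d_hid)) for t in tasks}
# Each task's gate scores only its visible experts: n_shared + 1 of them.
gate_W = {t: rng.normal(size=(d_in, n_shared + 1)) for t in tasks}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cgc_forward(x, task):
    """Mix the shared experts with ONLY this task's exclusive expert."""
    outs = [np.tanh(x @ W) for W in shared_experts]
    outs.append(np.tanh(x @ exclusive_expert[task]))   # the exclusive connection
    g = softmax(x @ gate_W[task])                      # weights over n_shared + 1
    return g @ np.stack(outs)                          # gated combination
```

The structural point of the dispute is visible in the code: in an MMoE-style layer every task's gate ranges over the whole expert pool, whereas here the exclusive expert appears in exactly one task's candidate list by construction, before any training occurs.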
Response to Arguments Applicant's arguments filed on 12/15/2025 with respect to 35 U.S.C. 101 rejections of claims 1-20 have been fully considered but they are not persuasive. With respect to the 35 U.S.C. 101 rejection of claims 1-20, applicant asserts, “Claim 1 recites a multi-task learning network which has a specific structure in which expert models are assigned to different tasks in the network, and where at least one expert model is exclusively connected to only one task. As described throughout the application, and as noted below, this specific architecture provides technical advantages and improvements over previous approaches in certain applications. For example, the claimed architecture of the present application helped the model to generalize better on the testing set and described [sic] overfitted relative to the Ma reference's architecture (as cited by the Examiner). See present application paragraph [0104]. As such, the claims are directed to a specific multi-task network architecture including connections and gating between tasks and expert models which do not make sense as a mental process, and do not exist outside the context of an artificial machine learning architecture. As such, the claims do not recite an abstract idea such as a mental process. Step 2A - Prong 2 Additionally, even if Prong 1 is not satisfied, it is clear that the combination of the elements integrates the approach into a practical application. The proposed computer system describes a specific multi-task network which when trained improves the functioning of the model. The disclosure clearly describes hot [sic – how] the claimed embodiments offer an improvement over existing multi-task approaches by providing a particular network architecture that improves generalization by inducing diversity among expert models (see.
para [0032]), and support the learning of unbalanced heterogeneous tasks, in which some tasks may be more susceptible to overfitting, more challenging to learn, or operate a [sic – at] different loss scales. (see. para. [0033])” (Remarks Pg. 6). Examiner Response: The examiner respectfully disagrees. The claim recitation “one exclusive expert model which is exclusively connected to the at least one task before training” can be considered as “generally linking the use of judicial exception to a particular technological environment or field of use”. See MPEP 2106.05(h). The applicant asserts that the claimed architecture improves generalization and reduces overfitting compared to Ma’s architecture. While these technical advantages may be true in practice, they are not reflected in the claim language itself. The claim only describes assigning expert models to tasks, with one model connected to a single task, and does not explain how these results are achieved. The claimed feature is a high-level organizational concept without additional inventive steps. Furthermore, the applicant’s argument about learning unbalanced heterogeneous tasks, where some tasks may overfit or have different loss scales, is also not reflected in the claim. The claims do not describe a specific or technical way in which the architecture addresses these challenges. Therefore, the rejections under 35 U.S.C. 101 are maintained for claims 1-20. With respect to the 35 U.S.C. 101 rejections of claims 1-20, applicant asserts, “The claims are also similar to those in In re Desjardins² which Director Squires' Memo explains are patent eligible because they do not merely use a computer to perform a calculation; they improve the machine learning model itself. For it [sic – at] least these reasons, claims 1-20 are patent eligible and comply with 35 USC 101” (Remarks Pg. 7). Examiner Response: The examiner respectfully disagrees.
Regarding applicant’s apparent reliance on the decision of the Appeals Review Panel in Ex parte Desjardins, No. 2024-000567 (P.T.A.B. Sept. 26, 2025): in Desjardins, unlike in the claims at issue here, the appellants specifically argued that the claimed invention “address[es] challenges in continual learning and model efficiency by reducing storage requirements and preserving task performance across sequential training”. Desjardins, slip op. at 7. That is, the appellant in Desjardins specifically alleged that the claimed subject matter improves machine learning itself. By contrast, Applicant in the instant case does not point to any specific claim language that characterizes an improvement, and does not point to any claim language that is analogous to the claims at issue in Desjardins. Regarding the Director’s memo cited by applicant, the eligibility determination in Desjardins was grounded in a finding that the claims improved machine learning technology itself, rather than merely applying mathematical calculations and a mental process using pen and paper. In contrast, the present claims do not recite an improvement to the functioning of training a heterogeneous multi-task learning network. Therefore, the rejections under 35 U.S.C. 101 are maintained. Applicant's arguments filed on 12/15/2025 with respect to 35 U.S.C. 103 rejections of claims 1-20 have been fully considered but they are moot. With respect to the 35 U.S.C. 103 rejection of claims 1-20, applicant asserts, “Contrary to the Office Action's characterizations, these features are not disclosed by Ma. As illustrated below, in Ma, each expert is connected to each Tower and Output. For example, Expert 0 is connected by gates to both Output A and Output B. Similarly, Experts 1 and 2 are each also connected to both Outputs A and B” (Remarks Pg. 8). Examiner Response: The examiner respectfully disagrees.
This argument has been considered but is moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in this argument. A newly cited prior art reference, Tang (“Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations”), has been applied to teach the limitations referred to in this argument. Therefore, the claims are now rejected under 35 U.S.C. 103 using the newly cited Tang reference, as detailed above. Conclusion 6. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lokesha Patel whose telephone number is (571)272-6267. The examiner can normally be reached 8 AM - 4 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LOKESHA PATEL/Examiner, Art Unit 2125 /KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125 ¹ As indicated above in the section 112(b) rejection of this claim, “exclusively” has been interpreted as meaning that the expert model is allocated to only one task and is not shared with any other task. ² Examiner notes that applicant is apparently referring to the Appeals Review Panel decision in Ex parte Desjardins, No. 2024-000567 (hereinafter “Desjardins”).

Prosecution Timeline

Feb 03, 2022
Application Filed
Jul 10, 2025
Non-Final Rejection — §101, §103, §112
Dec 15, 2025
Response Filed
Feb 20, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585938
Consensus Driven Learning
2y 5m to grant Granted Mar 24, 2026
Patent 12572811
CONTROLLABLE AND INTERPRETABLE CONTENT CONVERSION
2y 5m to grant Granted Mar 10, 2026
Patent 12561556
DEVICES, SYSTEMS, METHODS, AND MEDIA FOR DOMAIN ADAPTATION USING HYBRID LEARNING
2y 5m to grant Granted Feb 24, 2026
Patent 12536454
TODDLER-INSPIRED BAYESIAN LEARNING METHOD AND COMPUTING APPARATUS FOR PERFORMING THE SAME
2y 5m to grant Granted Jan 27, 2026
Patent 12530615
INTELLIGENT OVERSIGHT OF MULTI-PARTY ENGAGEMENTS
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+38.0%)
4y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
