Application No. 18/300,807

DETAILED ACTION

This action is responsive to the application filed on 04/14/2023. Claims 1-20 are pending in the case. Claims 1, 15, and 20 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/14/2023 is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“a task defining component that adds...” in claim 1
“a training component that trains...”
in claim 1
“a selection component that selects…” in claim 12
“a partitioning component that partitions…” in claim 12
“an inferencing component that applies…” in claim 14

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-14 and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim limitations “a task defining component that adds...”, “a training component that trains…”, “a selection component that selects…”, “a partitioning component that partitions…”, and “an inferencing component that applies…” in claims 1, 12, and 14 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Applicant’s specification paragraph 0043 and Figure 1 provide support for a generic computer which includes the “task defining component”, the “training component”, the “selection component”, the “partitioning component”, and the “inferencing component”. However, there is no specific algorithm as to how the task defining component performs the adding function, how the training component performs the training function, how the selection component performs the selecting function, how the partitioning component performs the partitioning function, or how the inferencing component performs the applying function. Thus, there is insufficient disclosure of the corresponding structure, material, or acts for performing the entire claimed function. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claims 2-11 and 13 are rejected as being dependent upon a rejected base claim without curing any of the deficiencies.

Regarding claim 16, the claim recites “wherein the separately tuning and crystallizing comprises separately tuning and crystallizing the task-specific elements of each channel of the one or more task-specific channels comprises in association with achieving a defined performance criterion for the one or more additional inferencing tasks” in lines 1-4. It is unclear what “achieving a defined performance criterion…” is “in association with”. Thus, the claim is rendered indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor regards as the invention.
For examination purposes, this limitation has been interpreted to mean “wherein the separately tuning and crystallizing comprises separately tuning and crystallizing the task-specific elements of each channel of the one or more task-specific channels in association with achieving a defined performance criterion for the one or more additional inferencing tasks”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1 Statutory Category: Claim 1 is directed to a system, which falls under one of the four statutory categories.

Step 2A Prong 1 Judicial Exception: Claim 1 recites, in part, “adds one or more task-specific channels to a backbone neural network adapted to perform a primary inferencing task to generate a multi-task neural network model, wherein each channel of the one or more task-specific channels comprises task-specific elements respectively associated with different layers of the backbone neural network”. This limitation is the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III).

Further, the claim recites: “wherein the training component separately tunes and crystallizes the task-specific elements of each channel of the one or more task-specific channels”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical concept. See MPEP § 2106.04(a)(2)(I).
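For orientation, the arrangement recited in claim 1 (task-specific channels whose per-layer elements attach to a shared backbone, with each channel read out independently) can be sketched in toy Python. Every name and the scalar arithmetic here are hypothetical illustration only, not the applicant's disclosed structure:

```python
# Toy sketch of the claimed arrangement: a backbone with one shared
# "element" (weight) per layer, plus task-specific channels whose
# elements are respectively associated with the different backbone layers.

class Backbone:
    def __init__(self, num_layers):
        self.layers = [1.0 for _ in range(num_layers)]  # shared, pretrained

    def forward(self, x):
        acts = []
        for w in self.layers:
            x = x * w
            acts.append(x)      # per-layer activations exposed to channels
        return acts

class TaskChannel:
    """Task-specific elements, one per backbone layer."""
    def __init__(self, num_layers):
        self.elements = [0.5 for _ in range(num_layers)]

    def forward(self, backbone_acts):
        # channel reads backbone activations; it never writes back
        return sum(w * a for w, a in zip(self.elements, backbone_acts))

class MultiTaskModel:
    def __init__(self, num_layers):
        self.backbone = Backbone(num_layers)
        self.channels = {}

    def add_channel(self, task_name):
        # the "task defining component" role: one channel per added task
        self.channels[task_name] = TaskChannel(len(self.backbone.layers))

    def infer(self, task_name, x):
        return self.channels[task_name].forward(self.backbone.forward(x))

model = MultiTaskModel(num_layers=3)
model.add_channel("depth")
model.add_channel("segmentation")
print(model.infer("depth", 2.0))  # -> 3.0 (0.5*2 + 0.5*2 + 0.5*2)
```

The point of the sketch is only the topology: the backbone is shared across tasks, while each additional task contributes elements tied to different backbone layers.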
Step 2A Prong 2 Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular, the claim recites: “a system”, “a memory that stores computer-executable components”, “a processor that executes the computer-readable components stored in the memory”, “a task defining component”, and “a training component”. These limitations are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f).

Further, the claim recites: “trains the one or more task-specific channels to perform one or more additional inferencing tasks that are respectively different from one another and the primary inferencing task”. This limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).

Step 2B Significantly More: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “a system”, “a memory that stores computer-executable components”, “a processor that executes the computer-readable components stored in the memory”, “a task defining component”, and “a training component” amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process.
Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process, cannot provide an inventive concept.

Further, the additional element “trains the one or more task-specific channels to perform one or more additional inferencing tasks that are respectively different from one another and the primary inferencing task” amounts to adding insignificant extra-solution activity to the judicial exception and, further, is well-understood, routine, and conventional activity, as supported under Berkheimer Option 2 by Hu et al., U.S. Patent Application Publication No. 20230290134, paragraph 0065, lines 6-15: “The training is performed by using common techniques where (1) the training uses a given dataset with facial image regions, annotated attributes, and the neural network structure described above. (2) The training sets initial parameters, training hyper-parameters, such as the batch size, the number of iterations, learning rate schedule, and so forth. (3) The training then updates parameters by optimizing a multi-task loss function until convergence or to a last iteration, and (4) final parameters are saved as the final model”.

The claim is not patent eligible.

Regarding claim 2, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein the training component separately tunes and crystallizes the task-specific elements of each channel of the one or more task-specific channels as constrained by an optimization function that controls optimal values of the task specific elements based on a defined performance criterion for the one or more additional inferencing tasks and one or more additional resource optimization objectives for the multi-task neural network model”.
This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Thus, the claim recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 3, the rejection of claim 2 is incorporated, and further, the claim recites: “wherein the one or more additional resource optimization objectives comprise at least one of, minimizing an overall memory footprint of the multi-task neural network model or minimizing an overall latency of the multi-task neural network model”. This limitation is a continuation of the “wherein the training component separately tunes and crystallizes the task-specific elements of each channel of the one or more task-specific channels as constrained by an optimization function that controls optimal values of the task specific elements based on a defined performance criterion for the one or more additional inferencing tasks and one or more additional resource optimization objectives for the multi-task neural network model” limitation identified as an abstract idea in the rejection of the parent claim. Thus, the claim recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 4, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein respective task-specific elements of different channels of the one or more task-specific channels are independent from one another within the multi-task neural network model”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use.
See MPEP § 2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 5, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein the training component separately tunes and crystallizes respective task-specific elements of one channel of the one or more of the task-specific channels without affecting other channels of the one or more task-specific channels, and without affecting any backbone elements of the backbone neural network”. This limitation is a continuation of the “wherein the training component separately tunes and crystallizes the task-specific elements of each channel of the one or more task-specific channels” limitation identified as an abstract idea in the rejection of the parent claim. Thus, the claim recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 6, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein the task-specific elements of each channel of the one or more task-specific channels are connected to one or more backbone elements of the backbone neural network”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.
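The "separately tunes and crystallizes ... without affecting other channels ... and without affecting any backbone elements" language of claim 5 reads, on its face, as per-channel gradient updates followed by freezing. A minimal sketch of that reading, with hypothetical names and toy scalar weights (not the applicant's disclosed algorithm):

```python
# Hedged sketch: tune only one channel's task-specific weights toward a
# target, then "crystallize" (freeze) them; the backbone and every other
# channel are left unchanged.

def tune_and_crystallize(channels, backbone, task, target, lr=0.25, steps=100):
    backbone_before = list(backbone)
    others_before = {t: list(c["weights"]) for t, c in channels.items() if t != task}
    for _ in range(steps):
        w = channels[task]["weights"]
        pred = sum(wi * bi for wi, bi in zip(w, backbone))  # probe pass
        err = pred - target
        grad = [bi * err for bi in backbone]
        # update only this channel's task-specific elements
        channels[task]["weights"] = [wi - lr * gi for wi, gi in zip(w, grad)]
    channels[task]["frozen"] = True  # crystallize: no further updates
    # separate tuning: verify nothing else moved
    assert backbone == backbone_before
    assert all(channels[t]["weights"] == others_before[t] for t in others_before)

backbone = [1.0, 2.0]  # shared, already-trained backbone activations
channels = {
    "depth":        {"weights": [0.0, 0.0], "frozen": False},
    "segmentation": {"weights": [0.0, 0.0], "frozen": False},
}
tune_and_crystallize(channels, backbone, "depth", target=5.0)
pred = sum(w * b for w, b in zip(channels["depth"]["weights"], backbone))
print(round(pred, 3))  # -> 5.0
```

Because each channel owns disjoint parameters, tuning one channel cannot perturb another, which is the property the claim language asserts.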
Regarding claim 7, the rejection of claim 6 is incorporated, and further, the claim recites: “wherein the task-specific elements comprise task-specific filters, and wherein the one or more backbone elements comprise backbone filters of the backbone neural network”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 8, the rejection of claim 7 is incorporated, and further, the claim recites: “wherein the task-specific filters receive one-way information flow from any of the backbone filters to which they are connected”. This limitation is an additional element that amounts to insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which courts have recognized as well-understood, routine, and conventional when claimed in a generic manner. See MPEP § 2106.05(d)(II). The claim is not patent eligible.

Regarding claim 9, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein the backbone neural network comprises an encoder network or a decoder network and wherein the task defining component adds the one or more task-specific channels to the encoder network, the decoder network or both the encoder network and the decoder network”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept.
The claim is not patent eligible.

Regarding claim 10, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein the separately tuning comprises determining an optimal amount of the task-specific filters to be included in the different layers and wherein the crystallizing comprises freezing the task-specific filters at the optimal amount”. This limitation recites mental processes in addition to those identified in the rejection of the parent claim. Thus, the claim recites a judicial exception. Further, the claim recites: “wherein the task-specific elements include task-specific filters”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 11, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein the separately tuning comprises separately tuning task-specific filter weights respectively associated with each channel of the one or more task-specific channels”. This limitation is a continuation of the “wherein the training component separately tunes and crystallizes the task-specific elements of each channel of the one or more task-specific channels” limitation identified as an abstract idea in the rejection of the parent claim. Thus, the claim recites a judicial exception. Further, the claim recites: “wherein the task-specific elements include task-specific filters”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).
Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 12, the rejection of claim 1 is incorporated, and further, the claim recites: “selects a subset of the different inferencing tasks”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “partitions multi-task neural network model into a submodel adapted to perform the subset of the different inferencing tasks”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Thus, the claim recites judicial exceptions.

Further, the claim recites: “wherein as a result of the training, the multi-task neural network model is adapted to perform a set of different inferencing tasks consisting of the primary inferencing task and the one or more additional inferencing tasks”. This limitation is an additional element that amounts to insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Further, the limitation is well-understood, routine, and conventional activity, as supported under Berkheimer Option 2 by Hu et al., U.S. Patent Application Publication No. 20230290134, paragraph 0065, lines 6-15: “The training is performed by using common techniques where (1) the training uses a given dataset with facial image regions, annotated attributes, and the neural network structure described above. (2) The training sets initial parameters, training hyper-parameters, such as the batch size, the number of iterations, learning rate schedule, and so forth.
(3) The training then updates parameters by optimizing a multi-task loss function until convergence or to a last iteration, and (4) final parameters are saved as the final model”; a person of ordinary skill in the art would recognize that training a model to optimize a multi-task loss function would result in a model adapted to perform all of the tasks.

Further, the claim recites: “a selection component” and “a partitioning component”. These limitations are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f). Limitations that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process, cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 13, the rejection of claim 12 is incorporated, and further, the claim recites: “wherein the subset comprises two or more of the different inferencing tasks”. This limitation is a continuation of the “selects a subset of the different inferencing tasks” limitation identified as an abstract idea in the rejection of the parent claim. Thus, the claim recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 14, the rejection of claim 13 is incorporated, and further, the claim recites: “applies the sub-model to corresponding input data for the subset of the different inferencing tasks and generates corresponding inference outputs”.
This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Thus, the claim recites a judicial exception. Further, the claim recites: “an inferencing component”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f). Limitations that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process, cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 15:

Step 1 Statutory Category: Claim 15 is directed to a method, which falls under one of the four statutory categories.

Step 2A Prong 1 Judicial Exception: Claim 15 recites, in part, “adding, …, one or more task-specific channels to a backbone neural network adapted to perform a primary inferencing task to generate a multi-task neural network model, wherein the adding comprises adding task-specific elements to different layers of the backbone neural network for each channel of the one or more task-specific channels”. This limitation is the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III).

Further, the claim recites: “separately tuning and crystallizing the task-specific elements of each channel of the one or more task-specific channels”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical concept. See MPEP § 2106.04(a)(2)(I).
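The four-step training procedure quoted from Hu in this rejection (fix hyper-parameters, optimize a multi-task loss until convergence or a last iteration, save final parameters) is the familiar loop. A minimal sketch under that reading, with a hypothetical one-weight-per-task toy model (not Hu's or the applicant's actual training code):

```python
# Toy version of the quoted conventional loop: (2) set hyper-parameters,
# (3) minimize a summed multi-task loss until convergence or a last
# iteration, (4) keep the final parameters.

def train_multitask(params, data, lr=0.1, iters=200, tol=1e-9):
    """params: {task: weight}; data: {task: (x, target)}; pred = w * x."""
    for _ in range(iters):
        loss = 0.0
        for task, (x, target) in data.items():
            err = params[task] * x - target
            loss += err * err                  # summed multi-task loss
            params[task] -= lr * 2 * err * x   # gradient step per task head
        if loss < tol:                         # early stop on convergence
            break
    return dict(params)                        # save final parameters

final = train_multitask(
    params={"depth": 0.0, "segmentation": 0.0},
    data={"depth": (2.0, 6.0), "segmentation": (1.0, 4.0)},
)
print(round(final["depth"], 3), round(final["segmentation"], 3))  # -> 3.0 4.0
```

As the rejection notes, optimizing the joint loss leaves every task head fitted at once, i.e., a model adapted to perform all of the tasks.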
Step 2A Prong 2 Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular, the claim recites: “by a system comprising a processor” and “by the system”. These limitations are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f).

Further, the claim recites: “training, …, the one or more task-specific channels to perform one or more additional inferencing tasks that are respectively different from one another and the primary inferencing task”. This limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).

Step 2B Significantly More: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “by a system comprising a processor” and “by the system” amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process, cannot provide an inventive concept.
Further, the additional element “training, …, the one or more task-specific channels to perform one or more additional inferencing tasks that are respectively different from one another and the primary inferencing task” amounts to adding insignificant extra-solution activity to the judicial exception and, further, is well-understood, routine, and conventional activity, as supported under Berkheimer Option 2 by Hu et al., U.S. Patent Application Publication No. 20230290134, paragraph 0065, lines 6-15: “The training is performed by using common techniques where (1) the training uses a given dataset with facial image regions, annotated attributes, and the neural network structure described above. (2) The training sets initial parameters, training hyper-parameters, such as the batch size, the number of iterations, learning rate schedule, and so forth. (3) The training then updates parameters by optimizing a multi-task loss function until convergence or to a last iteration, and (4) final parameters are saved as the final model”. The claim is not patent eligible.

Regarding claim 16, the rejection of claim 15 is incorporated, and further, the claim recites: “wherein the separately tuning and crystallizing comprises separately tuning and crystallizing the task-specific elements of each channel of the one or more task-specific channels comprises in association with achieving a defined performance criterion for the one or more additional inferencing tasks and at least one of, minimizing an overall memory footprint of the multi-task neural network model, or minimizing an overall latency of the multi-task neural network model”. This limitation is a continuation of the “separately tuning and crystallizing the task-specific elements of each channel of the one or more task-specific channels” limitation identified as an abstract idea in the rejection of the parent claim. Thus, the claim recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 17, the rejection of claim 15 is incorporated, and further, claim 17 is substantially similar to claim 4 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 18, the rejection of claim 15 is incorporated, and further, the claim recites: “wherein the task-specific elements comprise task-specific filters, wherein the backbone neural network comprises backbone filters respectively associated with the different layers, wherein at least some of the task-specific filters are connected to at least some of the backbone filters”. These limitations amount to generally linking the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept.

Further, the claim recites: “wherein the task-specific filters receive one-way information flow from any of the backbone filters to which they are connected”. This limitation is an additional element that amounts to insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which courts have recognized as well-understood, routine, and conventional when claimed in a generic manner. See MPEP § 2106.05(d)(II). The claim is not patent eligible.

Regarding claim 19, the rejection of claim 15 is incorporated, and further, claim 19 is substantially similar to claim 5 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 20: Step 1 Statutory Category: Claim 20 is directed to a machine, which falls under one of the four statutory categories. Step 2A Prong 1 Judicial exception: Claim 20 recites, in part, “adding one or more task-specific channels to a backbone neural network adapted to perform a primary inferencing task to generate a multi-task neural network model, wherein each channel of the one or more task-specific channels comprises task-specific elements respectively associated with different layers of the backbone neural network”. This limitation is the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). Further, the claim recites: “separately tuning and crystallizing the task-specific elements of each channel of the one or more task-specific channels”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical concept, see MPEP § 2106.04(a)(2)(I). Further, the claim recites: “executing a subset of the one or more task-specific channels on corresponding input data for the subset to perform corresponding inferencing tasks of the subset”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical concept, see MPEP § 2106.04(a)(2)(I). Step 2A Prong 2 Integration into a practical application: This judicial exception is not integrated into a practical application. In particular, the claim recites: “a non-transitory machine-readable storage medium”, “executable instructions”, and “a processor”. These limitations are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process. 
See MPEP §2106.05(f). Further, the claim recites: “training the one or more task-specific channels to perform one or more additional inferencing tasks that are respectively different from one another and the primary inferencing task, …, wherein as a result of the training, the multi-task neural network model is adapted to perform a set of different inferencing tasks consisting of the primary inferencing task and the one or more additional inferencing tasks”. This limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP §2106.05(g). Step 2B Significantly more: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “a non-transitory machine-readable storage medium”, “executable instructions”, and “a processor” amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or that merely use a computer in its ordinary capacity as a tool to perform an existing process, cannot provide an inventive concept. 
Further, the additional element: “training the one or more task-specific channels to perform one or more additional inferencing tasks that are respectively different from one another and the primary inferencing task, …, wherein as a result of the training, the multi-task neural network model is adapted to perform a set of different inferencing tasks consisting of the primary inferencing task and the one or more additional inferencing tasks” amounts to adding insignificant extra-solution activity to the judicial exception, and further, is well-understood, routine, and conventional activity, as supported under Berkheimer Option 2 by Hu et al., U.S. Patent Application Publication No. 20230290134, Paragraph 0065, Lines 6-15, “The training is performed by using common techniques where (1) the training uses a given dataset with facial image regions, annotated attributes, and the neural network structure described above. (2) The training sets initial parameters, training hyper-parameters, such as the batch size, the number of iterations, learning rate schedule, and so forth. (3) The training then updates parameters by optimizing a multi-task loss function until convergence or to a last iteration, and (4) final parameters are saved as the final model”; a person of ordinary skill in the art would recognize that training a model to optimize a multi-task loss function would result in a model adapted to perform all of the tasks. The claim is not patent eligible. Claim Rejections - 35 USC § 102 07-07-aia AIA 07-07 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – 07-08-aia AIA (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. 
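For illustration only, the conventional training procedure quoted from Hu (set initial parameters and hyper-parameters, then update parameters by optimizing a multi-task loss function until convergence or a last iteration) can be sketched in plain Python. The function names and the two toy tasks below are hypothetical editorial examples, not taken from Hu or from the claims:

```python
# Illustrative sketch (hypothetical names): a multi-task loss is a weighted
# sum of per-task losses, and training descends its gradient "until
# convergence or to a last iteration".

def multi_task_loss(task_losses, weights=None):
    """Weighted sum of per-task losses; uniform weights by default."""
    if weights is None:
        weights = [1.0] * len(task_losses)
    return sum(w * l for w, l in zip(weights, task_losses))

def train(param, task_grad_fns, lr=0.05, iterations=200):
    """Gradient descent on the summed per-task gradients of a shared parameter."""
    for _ in range(iterations):
        grad = sum(g(param) for g in task_grad_fns)  # multi-task gradient
        param -= lr * grad
    return param  # "final parameters are saved as the final model"

# Two toy tasks with losses (p - 1)^2 and (p - 3)^2; the joint optimum of
# the summed multi-task loss is p = 2.
final_param = train(0.0, [lambda p: 2 * (p - 1), lambda p: 2 * (p - 3)])
```

Because the single objective aggregates every task's loss, the resulting parameters serve all tasks at once, which is the recognition attributed above to a person of ordinary skill in the art.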
07-15 AIA Claims 1, 4-15, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Meyerson et al., International Application Published under the Patent Cooperation Treaty (PCT) No. WO 2019157257, hereinafter referred to as "Meyerson". Regarding claim 1, Meyerson teaches A system, comprising: a memory that stores computer-executable components; and a processor that executes the computer-executable components stored in the memory (Meyerson, Paragraph 0051, Lines 2-6, “Computer system 500 includes at least one central processing unit (CPU) 572 that communicates with a number of peripheral devices via bus subsystem 555. These peripheral devices can include a storage subsystem 510 including, for example, memory devices and a file storage subsystem 536, user interface input devices 538, user interface output devices 576, and a network interface subsystem 574”), wherein the computer-executable components comprise: a task defining component that adds one or more task-specific channels to a backbone neural network adapted to perform a primary inferencing task to generate a multi-task neural network model, wherein each channel of the one or more task-specific channels comprises task-specific elements respectively associated with different layers of the backbone neural network (Meyerson, Paragraph 0036, “FIG. 2 illustrates one implementation of jointly training 200 encoder-decoder pairs on corresponding classification tasks. FIG. 2 also shows how model 101 includes multiple processing pipeline in which an input (such as an image) is first processed through the encoder 102 to generate the encoding, and the encoding is then separately fed as input to each of the different decoders. 
This way, numerous encoder-decoder pairs are formed; all of which have the same underlying encoder 102”; The encoder is considered to be the “backbone neural network” and generating the encoding is considered to be the “primary inferencing task”; the groups of decoders shown in Figure 2 are considered to include the “task-specific channels”; a person of ordinary skill in the art would recognize that a decoder is “respectively associated” with different layers of an encoder); and a training component that trains the one or more task-specific channels to perform one or more additional inferencing tasks that are respectively different from one another and the primary inferencing task (Meyerson, Paragraph 0032, “In implementations, each decoder comprises learnable components, parameters, and hyperparameters that can be trained by backpropagating errors using an optimization algorithm. The optimization algorithm can be based on stochastic gradient descent (or other variations of gradient descent like batch gradient descent and mini-batch gradient descent). Some examples of optimization algorithms that can be used to train each decoder are Momentum, Nesterov accelerated gradient, Adagrad, Adadelta, RMSprop, and Adam”; Meyerson, Paragraph 0036, Lines 1-2, “FIG. 
2 illustrates one implementation of jointly training 200 encoder-decoder pairs on corresponding classification tasks”; See also Meyerson, Figure 2, Groups of decoders can be seen that are “Specific to Classification Task [1…n]”), wherein the training component separately tunes and crystallizes the task-specific elements of each channel of the one or more task-specific channels (Meyerson, Paragraph 0091, Lines 19-26, “In Figure 6 (a) is the underlying model wherein all task inputs are embedded through an underlying model that is completely shared; (b) shows multiple decoders, wherein each task has multiple decoders (solid black lines) each projecting the embedding to a distinct classification layer; (c) shows parallel traversal of model space, wherein the underlying model coupled with a decoder defines a task model and task models populate a model space, with current models shown as black dots and previous models shown as gray dots; and (d) shows multiple loss signals. Each current task model receives a distinct loss to compute its distinct gradient”; Computing a “distinct loss to compute its distinct gradient” and selecting the best model for a given task is considered to be “separately tun[ing]” each channel; Meyerson, Paragraph 0095, “After training, the best model for a given task is selected from the final joint model, and used as the final model for that task (Eq.3). Of course, using multiple decoders with identical architectures for a single task does not make the final learned predictive models more expressive. It is therefore natural to ask whether including additional decoders has any fundamental effect on learning dynamics. 
It turns out that even in the case of linear decoders, the training dynamics of using multiple pseudo-tasks strictly subsumes using just one”; Meyerson, Paragraph 0098, Lines 3-6, “Next, the Freeze (F) DecInitialize freezes all decoder weights except θ D t I for each task … One decoder is left unfrozen so that the optimal model for each task can still be learned”; Freezing all the decoders apart from the best one is considered to be crystallizing the task-specific channels). Regarding claim 4, the rejection of claim 1 is incorporated, and further, Meyerson teaches wherein respective task-specific elements of different channels of the one or more task-specific channels are independent from one another within the multi-task neural network model (Meyerson, Paragraph 0085, Lines 2-5, “The decoders are grouped into sets of decoders in dependence upon the corresponding classification tasks and respectively receive the encoding as input from the encoder, thereby forming encoder-decoder pairs which operate independently of each other”). 
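As an editorial aid only, the cited Meyerson arrangement (a single shared encoder whose encoding is fed to multiple task-specific decoders, with all decoder weights except a selected one frozen) can be sketched in plain Python. The class and method names below are hypothetical, and scalar multiplications stand in for real network layers:

```python
# Illustrative sketch (hypothetical names) of a shared "backbone" encoder
# feeding several task-specific decoders; "crystallizing" is modeled as
# freezing every decoder except the one kept for a task.

class Decoder:
    def __init__(self, weight):
        self.weight = weight
        self.frozen = False  # frozen weights would be excluded from training

    def __call__(self, encoding):
        return self.weight * encoding

class MultiTaskModel:
    def __init__(self, encoder_weight, decoder_weights):
        self.encoder_weight = encoder_weight           # shared encoder (backbone)
        self.decoders = [Decoder(w) for w in decoder_weights]

    def forward(self, x):
        encoding = self.encoder_weight * x             # one shared encoding
        # one-way flow: every decoder consumes the same encoding as input
        return [decoder(encoding) for decoder in self.decoders]

    def freeze_all_but(self, keep):
        # freeze all decoder weights except the selected task's decoder,
        # so the optimal model for that task can still be learned
        for i, decoder in enumerate(self.decoders):
            decoder.frozen = (i != keep)

model = MultiTaskModel(2.0, [1.0, -1.0, 0.5])
outputs = model.forward(3.0)   # one encoding, one output per task channel
model.freeze_all_but(1)
```

In this toy form, the decoders operate independently of each other given the shared encoding, mirroring the independence relied on in the claim 4 mapping above.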
Regarding claim 5, the rejection of claim 1 is incorporated, and further, Meyerson teaches wherein the training component separately tunes and crystallizes respective task-specific elements of one channel of the one or more of the task-specific channels without affecting other channels of the one or more task-specific channels, and without affecting any backbone elements of the backbone neural network (Meyerson, Paragraph 0091, Lines 19-26, “In Figure 6 (a) is the underlying model wherein all task inputs are embedded through an underlying model that is completely shared; (b) shows multiple decoders, wherein each task has multiple decoders (solid black lines) each projecting the embedding to a distinct classification layer; (c) shows parallel traversal of model space, wherein the underlying model coupled with a decoder defines a task model and task models populate a model space, with current models shown as black dots and previous models shown as gray dots; and (d) shows multiple loss signals. Each current task model receives a distinct loss to compute its distinct gradient”; Computing a “distinct loss to compute its distinct gradient” and selecting the best model for a given task is considered to be “separately tun[ing]” each channel; Meyerson, Paragraph 0095, “After training, the best model for a given task is selected from the final joint model, and used as the final model for that task (Eq.3). Of course, using multiple decoders with identical architectures for a single task does not make the final learned predictive models more expressive. It is therefore natural to ask whether including additional decoders has any fundamental effect on learning dynamics. 
It turns out that even in the case of linear decoders, the training dynamics of using multiple pseudo-tasks strictly subsumes using just one”; Meyerson, Paragraph 0098, Lines 3-6, “Next, the Freeze (F) DecInitialize freezes all decoder weights except θ D t I for each task … One decoder is left unfrozen so that the optimal model for each task can still be learned”; Freezing all the decoders apart from the best one is considered to be crystallizing the task-specific channels). Regarding claim 6, the rejection of claim 1 is incorporated, and further, Meyerson teaches wherein the task-specific elements of each channel of the one or more task-specific channels are connected to one or more backbone elements of the backbone neural network (Meyerson, Paragraph 0085, Lines 2-5, “The decoders are grouped into sets of decoders in dependence upon the corresponding classification tasks and respectively receive the encoding as input from the encoder, thereby forming encoder-decoder pairs which operate independently of each other”; See also Meyerson, Figure 2 and Figure 6, where the decoder sets can be seen connected to the encoder). Regarding claim 7, the rejection of claim 6 is incorporated, and further, Meyerson teaches wherein the task-specific elements comprise task-specific filters (Meyerson, Paragraph 0073, Lines 3-5, “Each decoder is a convolutional neural network (abbreviated CNN) with a plurality of convolution layers arranged in a sequence from lowest to highest”; A person of ordinary skill in the art would recognize that a convolutional neural network is made up of filters), and wherein the one or more backbone elements comprise backbone filters of the backbone neural network (Meyerson, Paragraph 0070, “The encoder is a convolutional neural network (abbreviated CNN) with a plurality of convolution layers arranged in a sequence from lowest to highest. 
The encoding is convolution data”; A person of ordinary skill in the art would recognize that a convolutional neural network is made up of filters). Regarding claim 8, the rejection of claim 7 is incorporated, and further, Meyerson teaches wherein the task-specific filters receive one-way information flow from any of the backbone filters to which they are connected (Meyerson, Paragraph 0085, Lines 2-5, “The decoders are grouped into sets of decoders in dependence upon the corresponding classification tasks and respectively receive the encoding as input from the encoder, thereby forming encoder-decoder pairs which operate independently of each other”; See also Meyerson, Figure 2 and Figure 6, where the decoder sets can be seen passing information from the encoder to the decoders; Passing the output of the encoder as the input to the decoders is considered to be “one-way information flow”). Regarding claim 9, the rejection of claim 1 is incorporated, and further, Meyerson teaches wherein the backbone neural network comprises an encoder network or a decoder network and wherein the task defining component adds the one or more task-specific channels to the decoder network (Meyerson, Paragraph 0036, “FIG. 2 illustrates one implementation of jointly training 200 encoder-decoder pairs on corresponding classification tasks. FIG. 2 also shows how model 101 includes multiple processing pipeline in which an input (such as an image) is first processed through the encoder 102 to generate the encoding, and the encoding is then separately fed as input to each of the different decoders. This way, numerous encoder-decoder pairs are formed; all of which have the same underlying encoder 102”; The encoder is considered to be the “backbone neural network”; the groups of decoders shown in Figure 2 are considered to include the “task-specific channels”, thus the “task-specific channels” are added to the decoder network). 
It is noted the claim recites alternative language and Meyerson teaches at least one of the alternatives. Regarding claim 10, the rejection of claim 1 is incorporated, and further, Meyerson teaches wherein the task-specific elements include task-specific filters and wherein the separately tuning comprises determining an optimal amount of the task-specific filters to be included in the different layers and wherein the crystallizing comprises freezing the task-specific filters at the optimal amount (Meyerson, Paragraph 0091, Lines 19-26, “In Figure 6 (a) is the underlying model wherein all task inputs are embedded through an underlying model that is completely shared; (b) shows multiple decoders, wherein each task has multiple decoders (solid black lines) each projecting the embedding to a distinct classification layer; (c) shows parallel traversal of model space, wherein the underlying model coupled with a decoder defines a task model and task models populate a model space, with current models shown as black dots and previous models sh