DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 01/02/2026 have been entered.
Claims 1-20 remain pending in the application.
The amendments filed 01/02/2026 are sufficient to overcome each and every objection previously set forth in the Non-Final Office Action mailed 10/01/2025. The objections have been withdrawn.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-6, 8, 9, 11-13, 15, 16, 18, and 19 are rejected under AIA 35 U.S.C. 102(a)(1) as being anticipated by Goyal et al. (Pub. No.: US 2021/0374566 A1), hereafter Goyal.
Regarding claim 1, Goyal discloses:
A method for machine learning of multiple tasks by iteratively building a model zoo, the method comprising:… a multi-task learner implemented using at least one processor (Goyal, Fig. 2 and ¶[0034-0035] teaches the multi-task learner in Fig. 2 implemented using at least one processor)
automatically selecting, by a multi-task learner implemented using at least one processor, a first subset of potentially competing tasks from a plurality of potentially competing tasks (Goyal, Fig. 3 step 1, and Fig. 8 teaches automatically selecting previous tasks and a new task as the first subset of potentially competing tasks from a plurality, where the tasks are shown to be potentially competing in Fig. 8),
training, by the multi-task learner, a first machine learning model on each task of the selected first subset of potentially competing tasks (Goyal, Fig. 3 step 1 and step 2 teaches training and learning task classifiers as training a first machine learning model on each task of the selected first subset of potentially competing tasks),
calculating, by the multi-task learner, a metric of empirical risk of the first model on each task of the first subset of potentially competing tasks (Goyal, Fig. 3 step 1, step 2, and ¶[0048] teaches a weighted error as a metric of empirical risk of the first model on each task of the first subset of potentially competing tasks),
in a second round after the first round, automatically selecting, by the multi-task learner and from the potentially competing tasks, a second subset of potentially competing tasks (Goyal, Fig. 3 line 4 and step 1 teaches a subsequent iteration from the previous round, where now the second subset comprises a new task different from the previous round’s new task) by weighting tasks with lower values of the metric of empirical risk with lower task-specific weights than tasks with higher values of the metric of empirical risk (Goyal, ¶[0020] and [0048-0049] teaches weighting tasks with lower values of the metric of error, i.e. empirical risk, with lower task-specific weights),
training, by the multi-task learner, a second machine learning model on each task of the second subset of potentially competing tasks (Goyal, Fig. 3 step 1 and step 2 teaches training and learning task classifiers for this subsequent round as training a second machine learning model on each task of the selected second subset of potentially competing tasks),
combining, by the multi-task learner, the first and second machine learning models to create an ensemble machine-learning model (Goyal, Fig. 3, step “return” and step 2 teaches combining the task specific classifiers to create an ensemble model by majority vote and boosting),
generating predictions on the first subset of tasks and the second subset of tasks using the ensemble model (Goyal, Fig. 3, step “return”, step 2, and ¶[0066] teaches generating predictions on the tasks using the ensemble model).
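For illustration only, the select–train–measure–reweight–combine loop mapped above can be sketched in Python. This sketch is the Examiner's own illustrative aid, not code from Goyal or from the claims; every function, variable name, and the stub "training" routine are hypothetical stand-ins.

```python
import random

def train_stub(task):
    # Hypothetical stand-in for training a per-round model; returns a
    # "model" that always predicts the task's majority label.
    labels = [y for _, y in task["data"]]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def empirical_risk(model, task):
    # Fraction of the task's examples the model gets wrong.
    data = task["data"]
    return sum(1 for x, y in data if model(x) != y) / len(data)

def run_rounds(tasks, num_rounds=2, subset_size=2, seed=0):
    rng = random.Random(seed)
    weights = {t["name"]: 1.0 for t in tasks}  # task-specific weight vector
    ensemble = []                              # the growing "model zoo"
    for _ in range(num_rounds):
        # Select a subset of tasks, favoring higher-weight (harder) tasks.
        names = [t["name"] for t in tasks]
        chosen = rng.choices(names, weights=[weights[n] for n in names],
                             k=subset_size)
        subset = [t for t in tasks if t["name"] in chosen]
        for task in subset:
            model = train_stub(task)
            risk = empirical_risk(model, task)
            # Tasks with lower empirical risk get lower task-specific
            # weights, so they are selected less often in later rounds.
            weights[task["name"]] = max(risk, 1e-3)
            ensemble.append(model)
    return ensemble, weights

def predict(ensemble, x):
    # Combine the per-round models by majority vote.
    votes = [m(x) for m in ensemble]
    return max(set(votes), key=votes.count)
```

The sketch mirrors the claim mapping: per-round subset selection weighted by prior-round error, per-round training, and a majority-vote ensemble over all rounds.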
Regarding claim 2, Goyal discloses the method of claim 1 (and thus the rejection of claim 1 is incorporated). Goyal further discloses:
wherein selecting the second subset of potentially competing tasks comprises maintaining a vector of the task-specific weights and selecting the second subset of potentially competing tasks based on the task-specific weights (Goyal, Fig. 2, Fig. 3 and ¶[0046] teaches maintaining a vector of task-specific weights and selecting the subsequent subsets of competing tasks based on the task-specific weights in the knowledge base).
Regarding claim 4, Goyal discloses the method of claim 1 (and thus the rejection of claim 1 is incorporated). Goyal further discloses:
further comprising revisiting at least a first task from the plurality of potentially competing tasks by adding one or more new models (Goyal, Fig. 3 and ¶[0051-0052] teaches revisiting competing tasks from previous tasks by adding newly learned classifiers).
Regarding claim 5, Goyal discloses the method of claim 4 (and thus the rejection of claim 4 is incorporated). Goyal further discloses:
further comprising maintaining one or more models from previous training rounds not updated in successive training rounds (Goyal, Fig. 2 and ¶[0037] teaches maintaining one or more models from previous training rounds not updated in successive training rounds in a knowledge base).
Regarding claim 6, Goyal discloses the method of claim 1 (and thus the rejection of claim 1 is incorporated). Goyal further discloses:
wherein each of the plurality of potentially competing tasks shares a common input domain (Goyal, ¶[0086-0088] teaches each of the plurality of competing tasks shares a common input domain from the same dataset).
Claims 8 and 15 are substantially similar to claim 1, and thus are rejected on the same basis as claim 1.
Claims 9 and 16 are substantially similar to claim 2, and thus are rejected on the same basis as claim 2.
Claims 11-13 are substantially similar to claims 4-6, and thus are rejected on the same basis as claims 4-6.
Claim 18 is substantially similar to claim 4, and thus is rejected on the same basis as claim 4.
Claim 19 is substantially similar to claim 6, and thus is rejected on the same basis as claim 6.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Goyal et al. (Pub. No.: US 2021/0374566 A1), hereafter Goyal, in view of Chai et al. ("Multi-task Learning with Gaussian Processes"), hereafter Chai.
Regarding claim 3, Goyal discloses the method of claim 2 (and thus the rejection of claim 2 is incorporated). Goyal further discloses:
wherein selecting the second subset of potentially competing tasks based on the task-specific weights comprises drawing the second subset of potentially competing tasks from a … distribution of the task-specific weights (Goyal, Fig. 3 and ¶[0021] teaches drawing the second subset of potentially competing tasks from a distribution of the task-specific weights).
While Goyal teaches drawing the second subset of potentially competing tasks from a … distribution of the task-specific weights, Goyal does not teach this distribution to be a multinomial distribution.
Chai discloses:
drawing … subset of … tasks from a multinomial distribution of the task-specific weights (Chai, page 18, section 2.2.3, paragraph 1 [passage reproduced as an image in the original record] teaches drawing a subset of tasks from a multinomial distribution of task-specific weights using task clustering).
Goyal and Chai are analogous art because they are from the same field of endeavor, multi-task learning and machine learning models.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Goyal to include drawing … subset of … tasks from a multinomial distribution of the task-specific weights, based on the teachings of Chai. One of ordinary skill in the art would have been motivated to make this modification in order to improve predictive performance, as suggested by Chai (Chai, page 129, paragraph 1, line 2).
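For illustration only, sampling a task subset from a multinomial distribution over task-specific weights (the feature for which Chai is relied upon) can be sketched as follows. This sketch is the Examiner's own illustrative aid; the function name and example weights are hypothetical and do not appear in Chai or Goyal.

```python
import random

def draw_task_subset(task_weights, k, seed=None):
    """Draw k tasks (with replacement) from a multinomial distribution
    whose category probabilities are proportional to the task-specific
    weights."""
    rng = random.Random(seed)
    names = list(task_weights)
    probs = [task_weights[n] for n in names]
    return rng.choices(names, weights=probs, k=k)
```

Under this sampling scheme, tasks carrying higher task-specific weights are drawn proportionally more often, so the harder tasks dominate the next round's subset.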
Claims 10 and 17 are substantially similar to claim 3, thus are rejected on the same basis as claim 3.
Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Goyal et al. (Pub. No.: US 2021/0374566 A1), hereafter Goyal, in view of Moura et al. (Pub. No.: US 2020/0118423 A1), hereafter Moura.
Regarding claim 7, Goyal discloses the method of claim 1 (and thus the rejection of claim 1 is incorporated).
While Goyal discloses potentially competing tasks (Fig. 8), Goyal does not disclose:
learning at least one task-specific adapter for at least one task … having a different input domain from at least one other task …
Moura discloses:
learning at least one task-specific adapter for at least one task … having a different input domain from at least one other task … (Moura, ¶[0052] teaches adapting the model to different data domains as learning at least one task-specific adapter for at least one competing task having a different input domain from at least one other competing task).
Goyal and Moura are analogous art because they are from the same field of endeavor, multi-task learning and machine learning models.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Goyal to include learning at least one task-specific adapter for at least one task … having a different input domain from at least one other task …, based on the teachings of Moura. One of ordinary skill in the art would have been motivated to make this modification in order to achieve a high convergence rate and improve accuracy, as suggested by Moura (Moura, ¶[0063]).
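For illustration only, the task-specific adapter concept at issue can be sketched as follows. This sketch is the Examiner's own illustrative aid; the adapter form (an affine input transform), the shared model, and all names are hypothetical and are not taken from Moura or Goyal.

```python
def make_adapter(scale, offset):
    # A task-specific input adapter: maps a task's native input domain
    # into the shared feature space expected by the common model.
    return lambda x: [scale * v + offset for v in x]

def shared_model(features):
    # Hypothetical stand-in for the shared multi-task model, which
    # operates only on the common feature space.
    return sum(features)

def predict_with_adapter(adapter, x):
    # A task with a different input domain is handled by routing its
    # inputs through its own adapter before the shared model.
    return shared_model(adapter(x))
```

Each task with a mismatched input domain gets its own adapter, while the shared model downstream of the adapters is common to all tasks.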
Claims 14 and 20 are substantially similar to claim 7, thus are rejected on the same basis as claim 7.
Response to Arguments
Applicant's arguments filed 01/02/2026 have been fully considered with regard to the 35 U.S.C. 101 rejection, and they are persuasive. The rejection is withdrawn.
Applicant's arguments filed 01/02/2026 have been fully considered with regard to the 35 U.S.C. 102/103 rejections, but they are not persuasive.
The applicant asserts on page 11 of the remarks, “However, there is no description of weighting selection of the new tasks based on model performance on that task from a previous round.” The Examiner respectfully disagrees, as Fig. 3 and ¶[0048-0049] explicitly teach weighting the selection of new tasks by weighting the learning sample of a new task based on model performance on that task from a previous iteration round. The Examiner also notes that Fig. 3 describes an iterative process (“for t=1 to T do…”) that teaches rounds of selection as denoted in the amended claim 1 (see the 102 rejection above for further details).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Pub. No. US 2019/0180188 A1: Liang et al. teaches competing tasks and multi-task learning.
U.S. Pub. No. US 2023/0196122 A1: Suh et al. teaches competing tasks and multi-task learning.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141