Prosecution Insights
Last updated: April 19, 2026
Application No. 18/309,376

Task Learning System and Method, and Related Device

Non-Final OA — §101, §102, §103

Filed: Apr 28, 2023
Examiner: CHEN, KUANG FU
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Cloud Computing Technologies Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% — above average; 203 granted / 252 resolved; +25.6% vs TC avg
Interview Lift: +67.0% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 11m avg prosecution; 37 applications currently pending
Career History: 289 total applications across all art units

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Tech Center averages are estimates; based on career data from 252 resolved cases.

Office Action — §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the claims filed 4/28/2023. Claims 1-20 are presented for examination.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: Task Learning System and Method for Dynamic Inference Model Generation.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Claim 1

Step 1: The claim recites “A task learning system, comprising”; therefore, it is directed to the statutory category of a machine.

Step 2A Prong 1: The claim recites, inter alia: “generate, when an inference task corresponding to the input sample is an unknown task, an inference model for the unknown task based on at least one task attribute of the first task attributes and a corresponding first task model of the task models, wherein the at least one task attribute corresponds to the input sample; and perform inference on the input sample using the inference model to obtain a target inference result.” These limitations recite a process that is mentally performable with the aid of pen and paper: using judgment to evaluate data and attributes to generate a result by performing inference using an observed inference model.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “a task learning system, comprising: a knowledge base configured to store first task attributes and task models corresponding to the first task attributes; and a task processing apparatus coupled to the knowledge base and configured to.” These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery as a tool to apply the underlying judicial exception of generating an inference model and performing inference. See MPEP 2106.05(f). “Obtain an input sample”: this additional element merely recites insignificant extra-solution activity of mere data gathering, e.g., obtaining an input sample, as all uses of the judicial exception of generating an inference model and performing inference require the provided input sample. See MPEP 2106.05(g).

Step 2B: The additional elements from Step 2A Prong 2 include invoking computer machinery to apply the underlying judicial exception and the insignificant extra-solution activity of data gathering recited by “obtain an input sample,” which is a well-understood, routine, and conventional activity similar to presenting offers and gathering statistics; see MPEP 2106.05(d)(II). Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 2

Step 1: A machine, as in claim 1.
Step 2A Prong 1: The claim recites, inter alia: “determine a target task attribute of the input sample based on the input sample and a subset of the first task attributes that correspond to the input sample; and determine, based on the target task attribute, the at least one task attribute, and the corresponding first task model, that the inference task is the unknown task.” These limitations recite further mentally performable processes, with the aid of pen and paper, of using judgment to determine a target task attribute and to determine that the inference task is the unknown task based on observations, evaluations, and judgments.

Step 2A Prong 2 & Step 2B: No additional elements are recited, so the claim does not provide a practical application and does not amount to significantly more. As such, the claim is patent ineligible.

Claim 3

Step 1: A machine, as in claim 2.

Step 2A Prong 1: The claim recites, inter alia: “determine, based on a difference between the target task attribute and the at least one task attribute, that the inference task is the unknown task.” These limitations recite further mentally performable processes, with the aid of pen and paper, of using judgment to determine that the inference task is the unknown task based on calculating differences.

Step 2A Prong 2 & Step 2B: No additional elements are recited, so the claim does not provide a practical application and does not amount to significantly more. As such, the claim is patent ineligible.

Claim 4

Step 1: A machine, as in claim 2.

Step 2A Prong 1: The claim recites, inter alia: “determine, based on any one or more of a confidence of performing inference on the input sample using each task model of the task models, a model migration rate, or a task model quality of the task models, that the inference task is the unknown task.” These limitations recite further mentally performable processes, with the aid of pen and paper, of using judgment to determine that the inference task is the unknown task based on calculating rates and evaluating confidence and quality.

Step 2A Prong 2 & Step 2B: No additional elements are recited, so the claim does not provide a practical application and does not amount to significantly more. As such, the claim is patent ineligible.

Claim 5

Step 1: A machine, as in claim 2.

Step 2A Prong 1: The claim recites, inter alia: “generate the inference model for the unknown task based on the target task attribute, the at least one task attribute, and the corresponding first task model.” These limitations recite further mentally performable processes, with the aid of pen and paper, of using judgment to generate rules for the inference model based on evaluating data and attributes.

Step 2A Prong 2 & Step 2B: No additional elements are recited, so the claim does not provide a practical application and does not amount to significantly more. As such, the claim is patent ineligible.

Claim 6

Step 1: A machine, as in claim 5.

Step 2A Prong 1: The claim recites, inter alia: “generate the inference model based on the target task attribute, the first task attributes, the task models, and the task relationship.” These limitations recite further mentally performable processes, with the aid of pen and paper, of using judgment to generate the inference model based on evaluating data, attributes, and a task relationship.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “wherein the knowledge base is further configured to store a task relationship that comprises one or more of a subordinate relationship or a migration relationship, and wherein the task processing apparatus is further configured to.” These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery as a tool to apply the underlying judicial exception of generating the inference model, corresponding to storing data. See MPEP 2106.05(f).

Step 2B: The additional elements from Step 2A Prong 2 include invoking computer machinery to apply the underlying judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 7

Step 1: A machine, as in claim 5.

Step 2A Prong 1: The claim recites, inter alia: “determine a plurality of candidate task models in the task models based on the target task attribute; and use the plurality of candidate task models as the inference model.” These limitations recite further mentally performable processes, with the aid of pen and paper, of using judgment to determine a plurality of candidate task models, such as their rules and protocols, based on evaluating a target task attribute, and to use the plurality of candidate task models as the inference model, such as by incorporating the rules as guidance.

Step 2A Prong 2 & Step 2B: No additional elements are recited, so the claim does not provide a practical application and does not amount to significantly more. As such, the claim is patent ineligible.

Claim 8

Step 1: A machine, as in claim 5.
Step 2A Prong 1: The claim recites, inter alia: “determine, based on the target task attribute, second training samples of the first training samples respectively corresponding to a plurality of candidate task models in the task models.” These limitations recite a mentally performable process and mathematical concepts of using judgment to “determine... second training samples” based on evaluating a target task attribute, and of performing mathematical calculations to “retrain one or more candidate task models.”

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “wherein the knowledge base is further configured to store first training samples corresponding to the first task attributes, and wherein the task processing apparatus is further configured to.” These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery as a tool to apply the underlying judicial exception of determining training samples. See MPEP 2106.05(f). “Retrain one or more candidate task models of the plurality of candidate task models based on the second training samples; and use, as the inference model, the one or more candidate task models”: these additional elements are recited at a high level of generality, reciting results of retraining and of using the one or more candidate task models as the inference model, but fail to provide any inventive particulars or details as to how the retraining occurs (e.g., supervised, unsupervised, or hybrid retraining) and no details as to how more than one candidate task model can be used as the inference model (e.g., ensembling or merging models); thus these limitations merely amount to “apply it” or equivalent instructions applied to the abstract idea of determining training samples.

Step 2B: The additional elements from Step 2A Prong 2 include mere instructions to implement an abstract idea on a computer and with “apply it” or equivalent instructions. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 9

Step 1: A machine, as in claim 5.

Step 2A Prong 1: The claim recites, inter alia: “determine, based on the target task attribute, second training samples of the first training samples respectively corresponding to a plurality of candidate task models in the task models.” These limitations recite a mentally performable process and mathematical concepts of using judgment to determine second training samples based on evaluating a target task attribute.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “wherein the knowledge base is further configured to store first training samples corresponding to the first task attributes.” These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery as a tool to apply the underlying judicial exception, and storing data in a knowledge base is electronic recordkeeping. See MPEP 2106.05(f). “Perform training based on the second training samples to obtain a new task model; and use the new task model as the inference model”: these additional elements are recited at a high level of generality, reciting results of performing training and of using the new task model as the inference model, but fail to provide any inventive particulars or details as to how to perform the training (e.g., supervised, unsupervised, or hybrid training) and no details as to how the determination/transition between a prior model and the new task model occurs; thus these limitations merely amount to “apply it” or equivalent instructions applied to the abstract idea of determining training samples. See MPEP 2106.05(f).

Step 2B: The additional elements from Step 2A Prong 2 include mere instructions to implement an abstract idea on a computer and with “apply it” or equivalent instructions. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 10

Step 1: A machine, as in claim 1.

Step 2A Prong 1: The claim recites, inter alia: “generate, when the inference task corresponding to the input sample is the unknown task, the inference model based on the at least one task attribute and the corresponding first task model; and perform inference on the input sample using the inference model to obtain the target inference result.” These limitations recite a mentally performable process, with the aid of pen and paper, of using judgment to evaluate data and attributes to generate the inference model and perform inference to obtain a result.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “a model determiner deployed in a cloud or an edge side network and configured to; an inference performer deployed in the edge side network and configured to.” These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery as a tool to apply the underlying judicial exception. See MPEP 2106.05(f).

Step 2B: The additional elements from Step 2A Prong 2 include mere instructions to implement an abstract idea on a computer. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 11

Step 1: A machine, as in claim 1.

Step 2A Prong 1: The claim recites, inter alia: “determine a target task attribute of the input sample based on the input sample and the at least one task attribute; and determine, based on the target task attribute, the at least one task attribute, and the corresponding first task model, that the inference task is the unknown task.” These limitations recite a mentally performable process, with the aid of pen and paper, of using judgment to determine a target task attribute and to determine that the inference task is the unknown task based on observations and evaluations.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “an attribute determiner deployed in an edge side network and configured to; a task determiner deployed in the edge side network and configured to.” These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery as a tool to apply the underlying judicial exception and to representing the field of use or technological environment. See MPEP 2106.05(f) and MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include mere instructions to implement an abstract idea on a computer and generally linking the abstract idea to a field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 12

Step 1: A machine, as in claim 1.

Step 2A Prong 1: The claim depends from claim 1 and thus recites the same judicial exception.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “wherein the knowledge base is deployed in a cloud.” These additional elements are recited at a high level of generality and merely amount to generally linking the underlying judicial exception to a field of use or technological environment. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the underlying judicial exception to a field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 13

Step 1: A machine, as in claim 1.

Step 2A Prong 1: The claim recites, inter alia: “update, based on a target task attribute of the input sample and the inference model, one or more task attributes that are stored in the knowledge base and one or more task models that are stored in the knowledge base.” These limitations further the mentally performable process, with the aid of pen and paper, of using judgment to update one or more task attributes observed to be stored in the knowledge base and one or more task models that are stored in the knowledge base, based on observing a target task attribute of the input sample and the inference model.

Step 2A Prong 2 & Step 2B: No additional elements are recited, so the claim does not provide a practical application and does not amount to significantly more. As such, the claim is patent ineligible.

Claim 14

Step 1: A machine, as in claim 13.

Step 2A Prong 1: The claim depends from claim 13 and thus recites the same judicial exception.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “add the target task attribute and the inference model to the knowledge base, and wherein the knowledge base is further configured to simultaneously store the first task attributes, the target task attribute, the task models, and the inference model.” These additional elements are recited at a high level of generality and merely amount to insignificant extra-solution activity of gathering particular types of data within the gathered data. See MPEP 2106.05(g).

Step 2B: The additional elements from Step 2A Prong 2 include the insignificant extra-solution activity of data gathering recited by “add the target task attribute and the inference model to the knowledge base, and wherein the knowledge base is further configured to simultaneously store the first task attributes, the target task attribute, the task models, and the inference model,” which are well-understood, routine, and conventional activities similar to electronic recordkeeping; see MPEP 2106.05(d)(II). Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 15

Step 1: A machine, as in claim 13.

Step 2A Prong 1: The claim recites, inter alia: “replace a corresponding task attribute in the knowledge base with the target task attribute; and replace a corresponding task model in the knowledge base with the inference model.” These limitations further the mentally performable process, with the aid of pen and paper, of using judgment to evaluate replacement of a corresponding task attribute observed in the knowledge base with the target task attribute and replacement of a corresponding task model observed in the knowledge base with the developed inference model.

Step 2A Prong 2 & Step 2B: No additional elements are recited, so the claim does not provide a practical application and does not amount to significantly more. As such, the claim is patent ineligible.

Claim 16

Step 1: A machine, as in claim 14.

Step 2A Prong 1: The claim depends from claim 1 and thus recites the same abstract idea. The claim recites, inter alia: “update, based on the target task attribute and the inference model, the at least one task attribute and the corresponding first task model.” These limitations further the mentally performable process, with the aid of pen and paper, of using judgment to evaluate updates to the at least one task attribute and the tracked corresponding first task model based on observations of the target task attribute and the inference model rules.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “wherein the task processing apparatus comprises a knowledge base updater deployed in a cloud or an edge side network and configured to.” These additional elements are recited at a high level of generality and generally link the underlying judicial exception of updating the task attribute and corresponding first task model to a particular field of use or technological environment. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the underlying judicial exception to a field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 17

Step 1: A machine, as in claim 1.

Step 2A Prong 1: The claim recites, inter alia: “determine the target inference result from the inference results.” These limitations further the mentally performable process, with the aid of pen and paper, of using judgment to determine the target inference result from observing those results.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: “wherein when the inference model comprises a plurality of models, and wherein the task processing apparatus is further configured to.” These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery as a tool to apply the underlying judicial exception and to generally linking the underlying judicial exception of determining the inference result to a field of use or technological environment. See MPEP 2106.05(f) and MPEP 2106.05(h). “Perform inference on the input sample using all models of the plurality of models to obtain inference results that are output by all the models of the plurality of models”: these additional elements are recited at a high level of generality, reciting results of inference on the input sample using all models of the plurality of models, but fail to provide any inventive particulars or details as to how the inference results output by all the models are obtained (e.g., whether the models run simultaneously in parallel or sequentially, and whether the inference results are raw or encoded results); thus these limitations merely amount to “apply it” or equivalent instructions applied to the abstract idea of determining the target inference result. See MPEP 2106.05(f).

Step 2B: The additional elements from Step 2A Prong 2 include mere instructions to implement an abstract idea on a computer, generally linking the underlying judicial exception to a field of use or technological environment, and “apply it” or equivalent instructions. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claims 18-20

Step 1: These claims are directed to “A task learning method”; therefore, these claims are directed to the statutory category of a process.

Step 2A Prong 1: These claims recite the same abstract ideas as claims 1-3, respectively.

Step 2A Prong 2: The judicial exceptions recited in these claims are not integrated into a practical application. The analysis at this step mirrors that of claims 1-3, respectively.

Step 2B: These claims do not contain significantly more than the judicial exception. The analysis at this step mirrors that of claims 1-3, respectively.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 8-9, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Achille et al. (hereinafter Achille), “TASK2VEC: Task Embedding for Meta-Learning” (2019).

Regarding independent Claims 1 and 18, Achille discloses a task learning system and method, comprising (Abstract: “We present a simple meta-learning framework...”): a knowledge base configured to store first task attributes and task models corresponding to the first task attributes (Section 4: “a large collection of tasks and models,” disclosing a library of 1,460 task embeddings (first task attributes) and 156 pre-trained feature extractors (task models) corresponding to the tasks); and a task processing apparatus coupled to the knowledge base and configured to (Section 4.2, “Model Selection,” discloses a system/apparatus executing the meta-learning and model selection algorithms to process tasks): obtain an input sample (Section 3: “Given an observed input image x and an unknown task variable y...,” disclosing obtaining an input sample for a novel target task); generate, when an inference task corresponding to the input sample is an unknown task, an inference model for the unknown task based on at least one task attribute of the first task attributes and a corresponding first task model of the task models, wherein the at least one task attribute corresponds to the input sample (Section 4.2: “embed the task and select the feature extractor trained on the most similar task,” disclosing that when presented with a novel/new task (an unknown task), the system computes the task embedding for the novel task and selects the pre-trained feature extractor (first task model) trained on the most similar task embedding (at least one task attribute of the first task attributes) to generate the inference model for the new task; the selected task attribute corresponds to the input sample because it is identified as the most similar match to the input sample's own task embedding); and perform inference on the input sample using the inference model to obtain a target inference result (Section 4.2: “select an expert feature extractor that maximizes the classification performance on that task,” disclosing using the selected feature extractor (the generated inference model) to perform classification (inference) on the novel task to obtain a classification performance/result).

Regarding dependent Claims 2 and 19, Achille further discloses wherein the task processing apparatus is further configured to: determine a target task attribute of the input sample based on the input sample and a subset of the first task attributes that correspond to the input sample (Section 3.1, disclosing computing the TASK2VEC embedding for the novel task (target task attribute) using a probe network based on the input data); and determine, based on the target task attribute, the at least one task attribute, and the corresponding first task model, that the inference task is the unknown task (Section 4.2, disclosing comparing the novel task embedding to the stored embeddings to determine that it is a novel/unknown task and to select the best model).

Regarding dependent Claims 3 and 20, Achille further discloses wherein the task processing apparatus is further configured to determine, based on a difference between the target task attribute and the at least one task attribute, that the inference task is the unknown task (Section 3.3, disclosing using cosine distance or asymmetric distance between embeddings to measure the difference between the target task attribute and the stored task attributes).
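As context for the mapping of Claims 1-3 above, the cited mechanism — embed the novel task, then pick the stored task model whose task attribute (embedding) is nearest, e.g. by cosine distance, treating a large nearest distance as flagging an unknown task — can be sketched as follows. This is an illustrative sketch only, not code from Achille or the application; the knowledge-base layout, the model names, and the 0.5 unknown-task threshold are assumptions for the example.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two task-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def select_task_model(target_embedding, knowledge_base, threshold=0.5):
    """Pick the stored task whose embedding is nearest the target.

    Returns (model, distance, is_unknown); a nearest distance above
    `threshold` flags the inference task as an unknown task."""
    best_model, best_dist = None, float("inf")
    for embedding, task_model in knowledge_base:
        d = cosine_distance(target_embedding, embedding)
        if d < best_dist:
            best_model, best_dist = task_model, d
    return best_model, best_dist, best_dist > threshold

# Toy knowledge base: (task embedding, task model identifier) pairs
kb = [
    ([1.0, 0.0, 0.0], "extractor_birds"),
    ([0.0, 1.0, 0.0], "extractor_flowers"),
]
model, dist, unknown = select_task_model([0.9, 0.1, 0.0], kb)
# model -> "extractor_birds" (distance ~0.006), so the task is treated as known
```

A distance threshold is one simple way to realize the claim 3 “difference” determination; confidence- or transferability-based tests of the kind cited for claim 4 would slot into the same selection loop.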
Regarding dependent Claim 4, Achille further discloses wherein the task processing apparatus is further configured to determine, based on any one or more of a confidence of performing inference on the input sample using each task model of the task models, a model migration rate, or a task model quality of the task models, that the inference task is the unknown task (Section 3.3, disclosing that the asymmetric TASK2VEC distance correlates with transferability between tasks, which reads on the model migration rate, to determine the relationship and select the model for the unknown task).

Regarding dependent Claim 5, Achille further discloses wherein the task processing apparatus is further configured to generate the inference model for the unknown task based on the target task attribute, the at least one task attribute, and the corresponding first task model (Section 4.2, disclosing selecting the feature extractor trained on the most similar task based on the comparison of the target task embedding and the stored task embeddings).

Regarding dependent Claim 6, Achille further discloses wherein the knowledge base is further configured to store a task relationship that comprises one or more of a subordinate relationship or a migration relationship (Section 2, disclosing Taxonomic distance (subordinate relationship) and Transfer distance (migration relationship)), and wherein the task processing apparatus is further configured to generate the inference model based on the target task attribute, the first task attributes, the task models, and the task relationship (Section 3.4, disclosing MODEL2VEC co-embedding models and tasks to predict the best model given the task distance and relationship).

Regarding dependent Claim 8, Achille further discloses wherein the knowledge base is further configured to store first training samples corresponding to the first task attributes (Section 4, disclosing datasets like iNaturalist, CUB-200, etc. stored in the collection), and wherein the task processing apparatus is further configured to: determine, based on the target task attribute, second training samples of the first training samples respectively corresponding to a plurality of candidate task models in the task models (Section 4.2, disclosing selecting tasks and their corresponding training samples); retrain one or more candidate task models of the plurality of candidate task models based on the second training samples; and use, as the inference model, the one or more candidate task models (Section 4.2, "If in addition to training a classifier, we fine-tune the selected expert, error decreases further", disclosing retraining/fine-tuning the selected candidate task model based on the training samples to use as the inference model).

Regarding dependent Claim 9, Achille further discloses wherein the task processing apparatus is further configured to: perform training based on the second training samples to obtain a new task model; and use the new task model as the inference model (Section 4, "we trained a linear classifier on top of the expert in order to solve the selected task using the expert", disclosing performing training to obtain a new task model (linear classifier on top of the expert) to use as the inference model).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7, 13-15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Achille, as applied in the rejection of claim 1 above, in view of Zhou et al. (hereinafter Zhou), "Learnware: On the Future of Machine Learning" (2016).

Regarding dependent Claim 7, Achille teaches all the elements of Claim 5. Achille does not expressly teach determine a plurality of candidate task models in the task models based on the target task attribute; and use the plurality of candidate task models as the inference model. However, Zhou teaches determine a plurality of candidate task models in the task models based on the target task attribute; and use the plurality of candidate task models as the inference model (Zhou, Page 2, "he may find multiple learnwares each meets a part. In such cases, ensemble methods [10] that combine multiple models to use may offer some solutions").
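The ensemble behavior the rejection maps onto Claim 7 can be illustrated with a minimal Python sketch. This is not code from Achille or Zhou; the function names (`select_candidates`, `ensemble_infer`), the scalar task attributes, and the toy knowledge base are all hypothetical stand-ins for the claimed elements.

```python
# Hedged sketch: pick several candidate task models whose stored task
# attributes are closest to the target attribute, run each on the input
# sample, and combine the outputs by majority vote. All names are
# illustrative assumptions, not drawn from either reference.
from collections import Counter

def select_candidates(target_attr, knowledge_base, k=3):
    """Return the k stored task models with attributes nearest the target."""
    ranked = sorted(knowledge_base, key=lambda entry: abs(entry["attr"] - target_attr))
    return [entry["model"] for entry in ranked[:k]]

def ensemble_infer(sample, models):
    """Run every candidate model on the sample and majority-vote the outputs."""
    votes = Counter(model(sample) for model in models)
    return votes.most_common(1)[0][0]

# Toy knowledge base: each entry pairs a task attribute with a task model.
kb = [
    {"attr": 0.1, "model": lambda s: "cat"},
    {"attr": 0.2, "model": lambda s: "cat"},
    {"attr": 0.9, "model": lambda s: "dog"},
]
models = select_candidates(0.15, kb, k=2)
print(ensemble_infer(None, models))  # → cat (both nearest models agree)
```

Majority voting is only one of the combination schemes Zhou's cited ensemble literature covers; averaging of scores would follow the same selection step.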
Because Achille and Zhou address the issue of selecting and utilizing pre-trained machine learning models for new tasks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of combining multiple models using ensemble methods as suggested by Zhou into Achille's system, with a reasonable expectation of success, to adapt Achille's system to determine a plurality of candidate task models in the task models based on the target task attribute; and use the plurality of candidate task models as the inference model. This modification would have been motivated by the desire to improve performance and coverage when a single model does not perfectly match the target task (Zhou Page 2).

Regarding dependent Claim 13, Achille teaches all the elements of Claim 1. Achille does not expressly teach update, based on a target task attribute of the input sample and the inference model, one or more task attributes that are stored in the knowledge base and one or more task models that are stored in the knowledge base. However, Zhou teaches update, based on a target task attribute of the input sample and the inference model, one or more task attributes that are stored in the knowledge base and one or more task models that are stored in the knowledge base (Zhou, Page 1, "use his own data to adapt/polish the learnware"; Page 2, "Evolvable means that the learnware should be able to get accustomed to environment change... do the adaptation by itself", and putting the updated models back into the market/knowledge base).

Because Achille and Zhou address the issue of managing and utilizing libraries of machine learning models, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of updating and adapting models as suggested by Zhou into Achille's system, with a reasonable expectation of success, to adapt Achille's system to update, based on a target task attribute of the input sample and the inference model, one or more task attributes that are stored in the knowledge base and one or more task models that are stored in the knowledge base. This modification would have been motivated by the desire to allow the system to adapt to non-stationary environments and changing data distributions (Zhou Page 2).

Regarding dependent Claim 14, Achille in view of Zhou teach the task learning system of Claim 13, wherein the task processing apparatus is further configured to add the target task attribute and the inference model to the knowledge base, and wherein the knowledge base is further configured to simultaneously store the first task attributes, the target task attribute, the task models, and the inference model (see Zhou Page 1, "The owner of a learnware can put it into a market", disclosing adding the new/adapted model and its specification (task attribute) to the market (knowledge base) alongside existing models).

Regarding dependent Claim 15, Achille in view of Zhou teach the task learning system of Claim 13, wherein the task processing apparatus is further configured to: replace a corresponding task attribute in the knowledge base with the target task attribute; and replace a corresponding task model in the knowledge base with the inference model (see Zhou, Page 2, "Evolvable means that the learnware should be able to get accustomed to environment change... do the adaptation by itself", disclosing adapting and replacing the model to fit the new environment).
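The add-versus-replace knowledge-base update recited in Claims 13-15 can be sketched briefly in Python. The function name, the dictionary fields, and the scalar attribute tolerance are hypothetical; neither reference provides an implementation.

```python
# Hedged sketch of the claimed update: if the knowledge base already holds a
# task attribute near the target, the stored attribute and model are replaced
# (Claim 15); otherwise the new pair is added alongside the existing entries
# (Claim 14). All names and the tolerance value are illustrative assumptions.
def update_knowledge_base(kb, target_attr, inference_model, tol=0.05):
    """Replace a near-duplicate entry if one exists, otherwise append a new one."""
    for entry in kb:
        if abs(entry["attr"] - target_attr) < tol:
            entry["attr"] = target_attr       # replace the stored task attribute
            entry["model"] = inference_model  # replace the stored task model
            return "replaced"
    kb.append({"attr": target_attr, "model": inference_model})
    return "added"

kb = [{"attr": 0.1, "model": "m1"}]
print(update_knowledge_base(kb, 0.8, "m2"))   # → added (no nearby entry)
print(update_knowledge_base(kb, 0.82, "m3"))  # → replaced (within tolerance)
```

Either branch leaves the first task attributes and task models co-resident with the new entry, matching the "simultaneously store" language of Claim 14.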
Regarding dependent Claim 17, Achille teaches all the elements of Claim 1. Achille does not expressly teach perform inference on the input sample using all models of the plurality of models to obtain inference results that are output by all the models of the plurality of models; and determine the target inference result from the inference results. However, Zhou teaches perform inference on the input sample using all models of the plurality of models to obtain inference results that are output by all the models of the plurality of models; and determine the target inference result from the inference results (Page 2, disclosing ensemble methods that combine multiple models to use, which inherently involves obtaining results from all models in the ensemble and determining a target result, e.g., by voting or averaging).

Because Achille and Zhou address the issue of selecting and utilizing pre-trained machine learning models for new tasks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of combining multiple models using ensemble methods as suggested by Zhou into Achille's system, with a reasonable expectation of success, to adapt Achille's system to perform inference on the input sample using all models of the plurality of models to obtain inference results that are output by all the models of the plurality of models; and determine the target inference result from the inference results. This modification would have been motivated by the desire to improve performance and coverage when a single model does not perfectly match the target task (Zhou Page 2).

Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Achille, as applied in the rejection of claim 1, in view of FISCHBACHER (hereinafter Fischbacher), US 2020/0050953 A1.

Regarding dependent Claim 10, Achille teaches all the elements of Claim 1. Achille does not expressly teach a model determiner deployed in a cloud or an edge side network and configured to generate, when the inference task corresponding to the input sample is the unknown task, the inference model based on the at least one task attribute and the corresponding first task model; and an inference performer deployed in the edge side network and configured to perform inference on the input sample using the inference model to obtain the target inference result. However, Fischbacher teaches a model determiner deployed in a cloud or an edge side network and configured to generate, when the inference task corresponding to the input sample is the unknown task, the inference model based on the at least one task attribute and the corresponding first task model; and an inference performer deployed in the edge side network and configured to perform inference on the input sample using the inference model to obtain the target inference result (Fischbacher, FIG. 1, showing client device 110A and server 130A; [0050], [0100] "The machine may operate in the capacity of a server or a client device in client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment", and performing contextual determinations on client devices (edge)).
Because Achille and Fischbacher address the issue of deploying machine learning and contextual analysis systems across computing environments, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of distributed cloud and edge architecture as suggested by Fischbacher into Achille's system, with a reasonable expectation of success, to adapt Achille's system to include a model determiner deployed in a cloud or an edge side network and configured to generate, when the inference task corresponding to the input sample is the unknown task, the inference model based on the at least one task attribute and the corresponding first task model; and an inference performer deployed in the edge side network and configured to perform inference on the input sample using the inference model to obtain the target inference result. This modification would have been motivated by the desire to distribute computational load efficiently, leveraging cloud resources for heavy tasks and edge resources for low-latency inference, which is a standard architectural choice in distributed machine learning systems to optimize performance and resource utilization (Fischbacher [0027]).

Regarding dependent Claim 11, Achille teaches all the elements of Claim 1. Achille does not expressly teach an attribute determiner deployed in an edge side network and configured to determine a target task attribute of the input sample based on the input sample and the at least one task attribute; and a task determiner deployed in the edge side network and configured to determine, based on the target task attribute, the at least one task attribute, and the corresponding first task model, that the inference task is the unknown task. However, Fischbacher teaches an attribute determiner deployed in an edge side network and configured to determine a target task attribute of the input sample based on the input sample and the at least one task attribute; and a task determiner deployed in the edge side network and configured to determine, based on the target task attribute, the at least one task attribute, and the corresponding first task model, that the inference task is the unknown task (FIG. 1, showing client device 110A and server 130A; [0050], [0100] "The machine may operate in the capacity of a server or a client device in client-server network environment", and performing contextual determinations on client devices (edge)).

Because Achille and Fischbacher address the issue of deploying machine learning and contextual analysis systems across computing environments, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of distributed cloud and edge architecture as suggested by Fischbacher into Achille's system, with a reasonable expectation of success, to adapt Achille's system to include an attribute determiner deployed in an edge side network and configured to determine a target task attribute of the input sample based on the input sample and the at least one task attribute; and a task determiner deployed in the edge side network and configured to determine, based on the target task attribute, the at least one task attribute, and the corresponding first task model, that the inference task is the unknown task. This modification would have been motivated by the desire to distribute computational load efficiently, leveraging edge resources for low-latency attribute and task determination, which is a standard architectural choice in distributed machine learning systems to optimize performance and resource utilization (Fischbacher [0027]).
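The cloud/edge division of labor the rejection reads onto Claims 10-12 can be sketched as below. The class names, the scalar task attributes, and the nearest-attribute selection are illustrative assumptions only; they are not Fischbacher's disclosed architecture or Achille's selection method.

```python
# Hedged sketch of the claimed split: a model determiner in the cloud
# generates the inference model for an unknown task from the stored task
# attributes and models, while an inference performer on the edge side runs
# the returned model on the input sample. All names are hypothetical.
class CloudModelDeterminer:
    def __init__(self, knowledge_base):
        self.kb = knowledge_base  # knowledge base deployed in the cloud (cf. Claim 12)

    def build_model(self, task_attr):
        # Heavy model selection/composition work stays server-side.
        nearest = min(self.kb, key=lambda entry: abs(entry["attr"] - task_attr))
        return nearest["model"]

class EdgeInferencePerformer:
    def run(self, model, sample):
        # Low-latency inference happens on the edge device.
        return model(sample)

kb = [{"attr": 0.2, "model": lambda s: s * 2}, {"attr": 0.9, "model": lambda s: s + 1}]
cloud = CloudModelDeterminer(kb)
edge = EdgeInferencePerformer()
model = cloud.build_model(0.25)  # model determiner in the cloud
print(edge.run(model, 21))       # → 42, inference performed on the edge side
```

The same shape accommodates Claim 11's edge-side attribute and task determiners: they would sit in front of `build_model`, deciding locally whether the task is unknown before the cloud is consulted.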
Regarding dependent Claim 12, Achille teaches all the elements of Claim 1. Achille does not expressly teach the knowledge base is deployed in a cloud. However, Fischbacher teaches the knowledge base is deployed in a cloud ([0035] data store 106 on server computers).

Because Achille and Fischbacher address the issue of deploying machine learning and contextual analysis systems across computing environments, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of distributed cloud and edge architecture as suggested by Fischbacher into Achille's system, with a reasonable expectation of success, to adapt Achille's system so that the knowledge base is deployed in a cloud. This modification would have been motivated by the desire to distribute computational load efficiently, leveraging cloud resources for heavy tasks like knowledge base storage, which is a standard architectural choice in distributed machine learning systems to optimize performance and resource utilization (Fischbacher [0027]).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Achille in view of Zhou, as applied in the rejection of Claim 14 above, and further in view of Fischbacher.

Regarding dependent Claim 16, Achille in view of Zhou teaches all the elements of Claim 14. Achille in view of Zhou does not expressly teach a knowledge base updater deployed in a cloud or an edge side network and configured to update, based on the target task attribute and the inference model, the at least one task attribute and the corresponding first task model. However, Fischbacher teaches a knowledge base updater deployed in a cloud or an edge side network and configured to update, based on the target task attribute and the inference model, the at least one task attribute and the corresponding first task model (FIG. 1, showing client device 110A and server 130A; [0050], [0100] "The machine may operate in the capacity of a server or a client device in client-server network environment", and performing contextual determinations on client devices (edge)).

Because Achille, Zhou, and Fischbacher address the issue of deploying machine learning and contextual analysis systems across computing environments, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of distributed cloud and edge architecture as suggested by Fischbacher into Achille and Zhou's system, with a reasonable expectation of success, to adapt the system to include a knowledge base updater deployed in a cloud or an edge side network and configured to update, based on the target task attribute and the inference model, the at least one task attribute and the corresponding first task model. This modification would have been motivated by the desire to distribute computational load efficiently, leveraging cloud or edge resources for updating the knowledge base, which is a standard architectural choice in distributed machine learning systems to optimize performance and resource utilization (Fischbacher [0027]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN whose telephone number is (571) 272-1393. The examiner can normally be reached M-F 9:00 am-5:30 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KC CHEN/
Primary Patent Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Apr 28, 2023: Application Filed
Mar 08, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579425: PARAMETERIZED ACTIVATION FUNCTIONS TO ADJUST MODEL LINEARITY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566994: SYSTEMS AND METHODS TO CONFIGURE DEFAULTS BASED ON A MODEL (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561593: METHOD FOR DETERMINING PRESENCE OF A SIGNATURE CONSISTENT WITH A PAIR OF MAJORANA ZERO MODES AND A QUANTUM COMPUTER (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561561: Mapping User Vectors Between Embeddings For A Machine Learning Model for Authorizing Access to Resource (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561497: AUTOMATED OPERATING MODE DETECTION FOR A MULTI-MODAL SYSTEM WITH MULTIVARIATE TIME-SERIES DATA (granted Feb 24, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 99% (+67.0% lift)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
