DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Response to Election filed on 12/29/2025. Claims 1-20 are pending in the case. Claims 8-13 have been withdrawn from consideration. Claims 1, 8, and 14 are independent claims.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7 and 14-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-7 are directed towards the statutory category of a process. Claims 14-20 are directed towards the statutory category of a machine.
With respect to claim 1:
2A Prong 1: This claim is directed to a judicial exception.
A method… for generating a task model based on meta-learning, the method comprising (mental process);
calculating a task-adaptation loss of the task model, the calculating the task-adaptation loss being based on a result of training the task model by using a training data set (mental process and/or mathematical concept); and
calculating a meta-optimization loss of the updated task model by using a validation data set (mental process and/or mathematical concept).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
performed by a computing device (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f));
updating the task model based on the task-adaptation loss (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning); and
further updating the updated task model based on the meta-optimization loss (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
performed by a computing device (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f));
updating the task model based on the task-adaptation loss (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning); and
further updating the updated task model based on the meta-optimization loss (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning).
With respect to claim 2:
2A Prong 1: This claim is directed to a judicial exception.
the task model comprises: a text classification model configured to provide classification results for texts (mental process).
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 3:
2A Prong 1: This claim is directed to a judicial exception.
the training data set and the validation data set include a plurality of domain-specific text data (mental process).
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 4:
2A Prong 1: This claim is directed to a judicial exception.
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
updating of the task model comprises: updating parameters of the task model based on the task-adaptation loss (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
updating of the task model comprises: updating parameters of the task model based on the task-adaptation loss (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning).
With respect to claim 5:
2A Prong 1: This claim is directed to a judicial exception.
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
the further updating of the updated task model further comprises: updating a meta-information dictionary including feature information for a plurality of domain-specific texts (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and
updating parameters of a few-shot text embedding generator configured to generate text embeddings corresponding to inputted few-shot text data (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
the further updating of the updated task model further comprises: updating a meta-information dictionary including feature information for a plurality of domain-specific texts (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and
updating parameters of a few-shot text embedding generator configured to generate text embeddings corresponding to inputted few-shot text data (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)).
With respect to claim 6:
2A Prong 1: This claim is directed to a judicial exception.
the task-adaptation loss and the meta-optimization loss are calculated based on a cross-entropy loss function (mental process and/or mathematical concept).
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 7:
2A Prong 1: This claim is directed to a judicial exception.
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
the updating of the task model comprises: updating parameters of the task model by using gradient descent (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
the updating of the task model comprises: updating parameters of the task model by using gradient descent (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); high-level machine learning).
The remaining claims 14-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more, for at least the same reasons as those given above with respect to claims 1-7, with only the addition of generic computer components. Under step 2A, prong 1, and under the broadest reasonable interpretation, these limitations cover mental processes, including an observation, evaluation, judgment, or opinion, that could be performed in the human mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas; a person would readily be able to perform this process either mentally or with the assistance of pen and paper. See MPEP § 2106.04(a)(2). Under step 2A, prong 2, the additional elements merely recite the words "apply it" (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), and therefore do not integrate the judicial exception into a practical application. Refer to MPEP § 2106.04(d). Under step 2B, for the same reasons, the claims do not recite any additional elements or limitations that amount to significantly more than the judicial exception. Accordingly, the claimed invention recites an abstract idea without significantly more.
Claim Rejections - 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
Claims 1, 4, 6, 7, 14, 17, 19, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Baik et al. (Baik, Sungyong, Janghoon Choi, Heewon Kim, Dohee Cho, Jaesik Min, and Kyoung Mu Lee. "Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9465-9474. 2021, hereinafter Baik) in view of Schwartz et al. (U.S. Pat. App. Pub. No. 2022/0172036, hereinafter Schwartz).
As to independent claims 1 and 14, Baik teaches:
calculating a task-adaptation loss of the task model, the calculating the task-adaptation loss being based on a result of training the task model by using a training data set (Page 9467, Algorithm 2, line 12, "Compute task-adaptive loss:…");
updating the task model based on the task-adaptation loss (Page 9467, Algorithm 2, line 13, "Perform gradient descent to adapt f to Ti:…");
calculating a meta-optimization loss of the updated task model by using a validation data set (Page 9467, Algorithm 1, title "Meta-learning with task-adaptive loss", and line 11, "Compute the loss on the query set:…"); and
further updating the updated task model based on the meta-optimization loss (Page 9467, Algorithm 1, title "Meta-learning with task-adaptive loss", and line 13, "Perform gradient descent to update weights:…").
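For context only, the inner-loop/outer-loop structure cited from Baik's Algorithms 1 and 2 follows the familiar MAML-style pattern, sketched below in Python. The task model, data batches, and learning rates are hypothetical stand-ins, and the sketch omits Baik's task-adaptive loss network; it is not Baik's implementation.

    # Illustrative MAML-style sketch only; not Baik's code. The task model,
    # batches, and learning rates are hypothetical placeholders.
    import torch
    import torch.nn as nn
    from torch.func import functional_call

    model = nn.Linear(16, 4)                         # stand-in task model
    loss_fn = nn.CrossEntropyLoss()                  # cf. claims 6 and 19
    outer_opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    inner_lr = 1e-2

    def meta_step(train_batch, val_batch):
        x_tr, y_tr = train_batch                     # training (support) data set
        x_val, y_val = val_batch                     # validation (query) data set

        # Task-adaptation loss, based on a result of training the task
        # model with the training data set (cf. Baik, Alg. 2, line 12).
        params = dict(model.named_parameters())
        adapt_loss = loss_fn(functional_call(model, params, (x_tr,)), y_tr)

        # Updating the task model based on the task-adaptation loss:
        # one inner-loop gradient-descent step (cf. Baik, Alg. 2, line 13).
        grads = torch.autograd.grad(adapt_loss, params.values(),
                                    create_graph=True)
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(params.items(), grads)}

        # Meta-optimization loss of the updated task model, computed with
        # the validation data set (cf. Baik, Alg. 1, line 11).
        meta_loss = loss_fn(functional_call(model, adapted, (x_val,)), y_val)

        # Further updating the updated task model based on the
        # meta-optimization loss: outer-loop step (cf. Baik, Alg. 1, line 13).
        outer_opt.zero_grad()
        meta_loss.backward()
        outer_opt.step()

On hypothetical tensors x of shape (batch, 16) and integer labels y, meta_step((x, y), (x, y)) performs one complete meta-iteration.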
Baik does not appear to expressly teach a method performed by a computing device for generating a task model based on meta-learning, the method comprising.
Schwartz teaches a method performed by a computing device for generating a task model based on meta-learning, the method comprising (Title and abstract).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning of Baik, to include the task-adaptive architecture for few-shot learning of Schwartz to create a more effective meta-learning-based method that enables a learned architecture to adapt itself to novel few-shot tasks (see Schwartz at paragraph 5).
As to dependent claims 4 and 17, Baik further teaches the updating of the task model comprises: updating parameters of the task model based on the task-adaptation loss (Page 9467, Algorithm 2, line 13, "Perform gradient descent to adapt f to Ti:…").
As to dependent claims 6 and 19, Baik further teaches the task-adaptation loss and the meta-optimization loss are calculated based on a cross-entropy loss function (Page 9466, right column, "a common loss function corresponding to a task (e.g. crossentropy in classification) during the inner-loop optimization").
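For reference only, the cross-entropy loss named in the quoted passage has the standard textbook form over C classes, with one-hot label y and predicted class distribution p; the quoted passage names the function but does not reproduce a formula:

    L_{CE}(y, p) = -\sum_{c=1}^{C} y_c \log p_c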
As to dependent claims 7 and 20, Baik further teaches the updating of the task model comprises: updating parameters of the task model by using gradient descent (Page 9467, Algorithm 2, line 13, "Perform gradient descent to adapt f to Ti:…").
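Likewise for reference only, the gradient-descent parameter update cited here has the standard form, where \alpha denotes a step size (a generic symbol; Baik's Algorithm 2 uses its own notation):

    \theta' = \theta - \alpha \, \nabla_{\theta} L(\theta)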
Claims 2, 3, 15, and 16 are rejected under 35 U.S.C. § 103 as being unpatentable over Baik in view of Schwartz and Tan et al. (U.S. Pat. App. Pub. No. 2020/0251100, hereinafter Tan).
As to dependent claims 2 and 15, the rejection of claim 1 is incorporated.
Baik does not appear to expressly teach the task model comprises: a text classification model configured to provide classification results for texts.
Tan teaches the task model comprises: a text classification model configured to provide classification results for texts (Paragraph 2, "text classification").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning of Baik, to include the cross-domain multi-task learning for text classification of Tan to reduce latency associated with training and/or runtime operations (see Tan at paragraph 16).
As to dependent claims 3 and 16, the rejection of claim 1 is incorporated.
Baik does not appear to expressly teach the training data set and the validation data set include a plurality of domain-specific text data.
Tan teaches the training data set and the validation data set include a plurality of domain-specific text data (Paragraph 3, "model is trained, based on text samples corresponding to a respective domain of the plurality of domains").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning of Baik, to include the cross-domain multi-task learning for text classification of Tan to reduce latency associated with training and/or runtime operations (see Tan at paragraph 16).
Claims 5 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Baik in view of Schwartz and He et al. (U.S. Pat. App. Pub. No. 2019/0156210, hereinafter He) and Rusu et al. (Rusu, Andrei A., Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. "Meta-learning with latent embedding optimization." arXiv preprint arXiv:1807.05960 (2018), hereinafter Rusu).
As to dependent claims 5 and 18, the rejection of claim 1 is incorporated.
Baik does not appear to expressly teach the further updating of the updated task model further comprises: updating a meta-information dictionary including feature information for a plurality of domain-specific texts; and updating parameters of a few-shot text embedding generator configured to generate text embeddings corresponding to inputted few-shot text data.
He teaches the further updating of the updated task model further comprises: updating a meta-information dictionary including feature information for a plurality of domain-specific texts (Paragraph 136, "a dictionary trained to map text to a vector representation may be utilized, or such a dictionary may be itself generated via training").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning of Baik, to include the machine learning techniques of He to capture long-range dependencies with deep neural networks (see He at paragraph 9).
Rusu teaches updating parameters of a few-shot text embedding generator configured to generate text embeddings corresponding to inputted few-shot text data (Algorithm 1, Latent Embedding Optimization).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning of Baik, to include the meta-learning techniques of Rusu to quickly adapt to and incorporate new and unseen information (see Rusu at Introduction).
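For illustration only, the two outer-loop updates recited in claims 5 and 18 might be rendered as the following hypothetical Python sketch. The names (embed_gen, meta_dict, outer_update) and the mean-pooling used to form dictionary features are invented for the sketch; neither He nor Rusu discloses this code.

    # Hypothetical sketch of claim 5's outer-loop updates; not He's or
    # Rusu's implementation. meta_loss is assumed to have been computed
    # through embed_gen's outputs, so gradients reach its parameters.
    import torch
    import torch.nn as nn

    embed_gen = nn.EmbeddingBag(10000, 128)   # few-shot text embedding generator
    opt = torch.optim.SGD(embed_gen.parameters(), lr=1e-3)
    meta_dict = {}                            # meta-information dictionary

    def outer_update(domain, token_ids, meta_loss):
        # Updating parameters of the few-shot text embedding generator
        # based on the meta-optimization loss (cf. Rusu, Algorithm 1).
        opt.zero_grad()
        meta_loss.backward()
        opt.step()

        # Updating the dictionary entry holding feature information for
        # the domain-specific texts (cf. He, para. 136: a dictionary
        # mapping text to a vector representation).
        with torch.no_grad():
            meta_dict[domain] = embed_gen(token_ids).mean(dim=0)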
Conclusion
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Casey R. Garner whose telephone number is 571-272-2467. The examiner can normally be reached Monday to Friday, 8am to 5pm, Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached on 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Casey R. Garner/Primary Examiner, Art Unit 2123