Prosecution Insights
Last updated: April 19, 2026
Application No. 18/159,106

STORAGE MEDIUM, MACHINE LEARNING APPARATUS, MACHINE LEARNING METHOD

Non-Final OA: §101, §103
Filed: Jan 25, 2023
Examiner: ALI, NAYMUR RAHMAN
Art Unit: 2123
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds (projected): 1-2
To Grant (projected): 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 10 (career history across all art units; 10 currently pending)

Statute-Specific Performance

§101: 30.0% (-10.0% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
TC averages are estimates • Based on career data from 0 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application and claims filed 01/25/2023. Claims 1-9 are pending and have been examined. Claims 1-9 are rejected.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/25/2023 and 11/14/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The present application claims foreign priority based on Japanese Patent Application No. 2022-051602, filed 03/28/2022. The examiner notes that a certified copy (in Japanese) of the above-noted application was retrieved on 3/14/2023. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Objections

Claims 1, 4, and 7 are objected to because of the following informalities: "...the labeled training data and the unlabeled training data being reflected the weight of each label". The phrase "being reflected the weight" is grammatically unclear and appears to be missing one or more words between "reflected" and "the weight". If supported by the original specification, the examiner suggests that one possible way to address these objections would be to amend recitations of "the unlabeled training data being reflected the weight of each label" to read "the unlabeled training data being reflected in the weight of each label". Appropriate correction is required. Claims 2-3, 5-6, and 8-9, which depend directly from claims 1, 4, and 7, respectively, are objected to based on their respective dependencies from claims 1, 4, and 7.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Step 1: Claim 1 is directed to a non-transitory computer-readable storage medium, corresponding to an article of manufacture, which is one of the statutory categories.

Step 2A, Prong 1: The claim is directed to an abstract idea. In particular, the claim recites mental processes, which are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). The limitations below, as drafted, under their broadest reasonable interpretation (BRI) in view of the specification, cover concepts performed in the human mind (evaluation, judgment, or opinion based on observed data, corresponding to mental processes which can be done mentally or with pen and paper).

"estimating a first label distribution that is a label distribution of unlabeled training data based on ... an initial value of a label distribution of a transfer target domain" (a person, mentally or with pencil and paper, can estimate a distribution of observed, unlabeled data and an observed initial value of a label distribution of a transfer target domain);

"using labeled training data which corresponds to a transfer source domain and unlabeled training data which corresponds to the transfer target domain" (a person can observe labeled training data and mentally correlate it with a source domain, and identify unlabeled training data and correlate it with the target domain);

"acquiring a second label distribution based on the labeled training data" (a person, mentally or with pen and paper, can calculate a second distribution given the labeled training data; for example, given 100 labeled training data with a mix of Cat and Dog labels, a person can count the number of Dogs and Cats in order to acquire a distribution);

"acquiring a weight of each label included in at least one training data selected from the labeled training data and the unlabeled training data based on a difference between the first label distribution and the second label distribution" (a person, mentally or with pen and paper, can acquire weights for labels in selected training data based on a difference between two observed label distributions);

"the labeled training data and the unlabeled training data being reflected the weight of each label" (a person, mentally or with pen and paper, can use evaluation, judgment, or opinion to determine weights based on observed training data, the labeled training data and unlabeled training data reflecting the weight);

"based on a classification model", "the classification model being trained by ...", and "re-training the classification model ..." (regarding the "classification model", no details of the model are recited; the model is recited at a high level of generality and can be constructed by hand with pen and paper. Aside from merely repeating the claim language and providing general examples (see, e.g., paragraph 82, which gives a very generic example of a classification model, and paragraphs 8 and 72, which merely repeat the claim language), applicant's specification does not explicitly define nor provide any details of the recited "classification model". Thus, the claimed "classification model", under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper and then manually modified/trained by hand based on a reasonable amount of observed data (i.e., the labeled training data and the unlabeled training data). The model is recited at a high level of generality and therefore is being interpreted as performing a mental process on a generic computer. See MPEP 2106.04(a)(2) § III.C, which states that "a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept" still recites a mental process.)

Therefore, these limitations fall within the "mental processes" grouping of abstract ideas.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The claim recites the additional element of "causes at least one computer to execute a process" (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Examiner's note: this additional element merely uses a computer to execute a process (an abstract idea)). The claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application. Therefore, the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to apply the mental process electronically (i.e., "a machine learning program that causes at least one computer to execute a process") do not amount to significantly more than the judicial exception. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. Accordingly, at Step 2B, the additional elements do not amount to significantly more than the judicial exception. The claim is not patent eligible.
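Purely as an illustration of the counting-and-ratio arithmetic that the examiner characterizes as performable "mentally or with pen and paper" (tallying labels to obtain a distribution, then weighting each label by a ratio of two proportions, per claims 1 and 2), the following Python sketch walks through that arithmetic. All variable names, the toy data, and the assumed target-domain distribution are invented for illustration; they do not come from the application or the cited references, and the direction of the ratio is only one possible reading of the claim language.

```python
from collections import Counter

# Hypothetical labeled training data from the transfer source domain.
labeled = ["cat", "dog", "dog", "cat", "cat", "dog", "cat", "cat"]

# "Second label distribution": the observed proportion of each label,
# obtained by simple counting (the examiner's Cat/Dog tally example).
counts = Counter(labeled)
second_dist = {label: n / len(labeled) for label, n in counts.items()}
# second_dist == {"cat": 0.625, "dog": 0.375}

# "First label distribution": an assumed estimate for the unlabeled
# target-domain data (e.g., from pseudo-labels); values are illustrative.
first_dist = {"cat": 0.25, "dog": 0.75}

# Per-label weight as the ratio between the two proportions, in the
# spirit of claim 2's "ratio between a first proportion ... and a
# second proportion" (direction of the ratio is an assumption here).
weights = {label: first_dist[label] / second_dist[label] for label in second_dist}
# weights == {"cat": 0.4, "dog": 2.0}
```

Under this toy data, the under-represented "dog" label receives a weight above 1 and the over-represented "cat" label a weight below 1, which is the kind of reweighting both the claims and the Yan reference describe.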
Regarding Claim 2: This claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 2 is directed to a non-transitory computer-readable storage medium as depending from claim 1; the patent-eligibility analysis of claim 1 is incorporated herein.

Step 2A, Prong 1: The claim recites "acquiring a weight related to a first label in the labeled training data based on a ratio between a first proportion of data with the first label in the labeled training data and a second proportion of data estimated to have the first label in the unlabeled training data" (a person, mentally or with pen and paper, can determine a weight that is a ratio between two proportions).

Step 2A, Prong 2: The claim does not recite additional elements.

Step 2B: The claim does not recite additional elements and does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional elements of this dependent claim as a combination does not add anything further than the individual elements. The claim is not patent eligible.

Regarding Claim 3: This claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 3 is directed to a non-transitory computer-readable storage medium as depending from claim 1; the patent-eligibility analysis of claim 1 is incorporated herein.

Step 2A, Prong 1: The claim recites "training the classification model so as to reduce a difference between a distribution of features of the labeled training data in which the weight has been reflected and a distribution of features of the unlabeled training data" (this falls under the mental-process grouping, where reducing a difference is the intended result of the training action. As for "training the classification model ...", this also falls under the mental-process grouping: no details of the model are recited, the model is recited at a high level of generality, and it can be constructed by hand with pen and paper. Aside from merely repeating the claim language and providing general examples (see, e.g., paragraph 82, which gives a very generic example of a classification model, and paragraphs 8 and 72, which merely repeat the claim language), applicant's specification does not explicitly define nor provide any details of the recited "classification model". Thus, the claimed "classification model", under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper and then manually modified/trained by hand based on a reasonable amount of observed data (i.e., the labeled training data and the unlabeled training data). The model is recited at a high level of generality and therefore is being interpreted as performing a mental process on a generic computer. See MPEP 2106.04(a)(2) § III.C, which states that "a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept" still recites a mental process.)

Step 2A, Prong 2: The claim does not recite additional elements.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. The claim is not patent eligible.

Regarding Claim 4:

Step 1: Claim 4 is directed to an apparatus, corresponding to a machine, which is one of the statutory categories.

Step 2A, Prong 1: The claim is directed to an abstract idea.
In particular, the claim recites mental processes, which are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). The limitations below, as drafted, under their broadest reasonable interpretation (BRI) in view of the specification, cover concepts performed in the human mind (evaluation, judgment, or opinion based on observed data, corresponding to mental processes which can be done mentally or with pen and paper).

"estimate a first label distribution that is a label distribution of unlabeled training data based on ... an initial value of a label distribution of a transfer target domain" (a person, mentally or with pencil and paper, can estimate a distribution of observed, unlabeled data and an observed initial value of a label distribution of a transfer target domain);

"using labeled training data which corresponds to a transfer source domain and unlabeled training data which corresponds to the transfer target domain" (a person can observe labeled training data and mentally correlate it with a source domain, and identify unlabeled training data and correlate it with the target domain);

"acquire a second label distribution based on the labeled training data" (a person, mentally or with pen and paper, can calculate a second distribution given the labeled training data; for example, given 100 labeled training data with a mix of Cat and Dog labels, a person can count the number of Dogs and Cats in order to acquire a distribution);

"acquire a weight of each label included in at least one training data selected from the labeled training data and the unlabeled training data based on a difference between the first label distribution and the second label distribution" (a person, mentally or with pen and paper, can acquire weights for labels in selected training data based on a difference between two observed label distributions);

"the labeled training data and the unlabeled training data being reflected the weight of each label" (a person, mentally or with pen and paper, can use evaluation, judgment, or opinion to determine weights based on observed training data, the labeled training data and unlabeled training data reflecting the weight);

"based on a classification model", "the classification model being trained by ...", and "re-train the classification model ..." (regarding the "classification model", no details of the model are recited; the model is recited at a high level of generality and can be constructed by hand with pen and paper. Aside from merely repeating the claim language and providing general examples (see, e.g., paragraph 82, which gives a very generic example of a classification model, and paragraphs 8 and 72, which merely repeat the claim language), applicant's specification does not explicitly define nor provide any details of the recited "classification model". Thus, the claimed "classification model", under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper and then manually modified/trained by hand based on a reasonable amount of observed data (i.e., the labeled training data and the unlabeled training data). The model is recited at a high level of generality and therefore is being interpreted as performing a mental process on a generic computer. See MPEP 2106.04(a)(2) § III.C, which states that "a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept" still recites a mental process.)

Therefore, these limitations fall within the "mental processes" grouping of abstract ideas.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The claim recites the additional element of "causes at least one computer to execute a process" (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Examiner's note: this additional element merely uses a computer to execute a process (an abstract idea)). The claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application. Therefore, the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to apply the mental process electronically (i.e., "a machine learning program that causes at least one computer to execute a process") do not amount to significantly more than the judicial exception. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. Accordingly, at Step 2B, the additional elements do not amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 5: This claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 5 is directed to an apparatus as depending from claim 4; the patent-eligibility analysis of claim 4 is incorporated herein.
Step 2A, Prong 1: The claim recites "acquire a weight related to a first label in the labeled training data based on a ratio between a first proportion of data with the first label in the labeled training data and a second proportion of data estimated to have the first label in the unlabeled training data" (a person, mentally or with pen and paper, can determine a weight that is a ratio between two proportions).

Step 2A, Prong 2: The claim does not recite additional elements.

Step 2B: The claim does not recite additional elements and does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional elements of this dependent claim as a combination does not add anything further than the individual elements. The claim is not patent eligible.

Regarding Claim 6: This claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 6 is directed to an apparatus as depending from claim 4; the patent-eligibility analysis of claim 4 is incorporated herein.

Step 2A, Prong 1: The claim recites "train the classification model so as to reduce a difference between a distribution of features of the labeled training data in which the weight has been reflected and a distribution of features of the unlabeled training data" (this falls under the mental-process grouping, where reducing a difference is the intended result of the training action. As for "train the classification model ...", this also falls under the mental-process grouping: no details of the model are recited, the model is recited at a high level of generality, and it can be constructed by hand with pen and paper. Aside from merely repeating the claim language and providing general examples (see, e.g., paragraph 82, which gives a very generic example of a classification model, and paragraphs 8 and 72, which merely repeat the claim language), applicant's specification does not explicitly define nor provide any details of the recited "classification model". Thus, the claimed "classification model", under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper and then manually modified/trained by hand based on a reasonable amount of observed data (i.e., the labeled training data and the unlabeled training data). The model is recited at a high level of generality and therefore is being interpreted as performing a mental process on a generic computer. See MPEP 2106.04(a)(2) § III.C, which states that "a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept" still recites a mental process.)

Step 2A, Prong 2: The claim does not recite additional elements.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. The claim is not patent eligible.

Regarding Claim 7:

Step 1: Claim 7 is directed to a method, corresponding to a process, which is one of the statutory categories.

Step 2A, Prong 1: The claim is directed to an abstract idea. In particular, the claim recites mental processes, which are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
The limitations below, as drafted, under their broadest reasonable interpretation (BRI) in view of the specification, cover concepts performed in the human mind (evaluation, judgment, or opinion based on observed data, corresponding to mental processes which can be done mentally or with pen and paper).

"estimating a first label distribution that is a label distribution of unlabeled training data based on ... an initial value of a label distribution of a transfer target domain" (a person, mentally or with pencil and paper, can estimate a distribution of observed, unlabeled data and an observed initial value of a label distribution of a transfer target domain);

"using labeled training data which corresponds to a transfer source domain and unlabeled training data which corresponds to the transfer target domain" (a person can observe labeled training data and mentally correlate it with a source domain, and identify unlabeled training data and correlate it with the target domain);

"acquiring a second label distribution based on the labeled training data" (a person, mentally or with pen and paper, can calculate a second distribution given the labeled training data; for example, given 100 labeled training data with a mix of Cat and Dog labels, a person can count the number of Dogs and Cats in order to acquire a distribution);

"acquiring a weight of each label included in at least one training data selected from the labeled training data and the unlabeled training data based on a difference between the first label distribution and the second label distribution" (a person, mentally or with pen and paper, can acquire weights for labels in selected training data based on a difference between two observed label distributions);

"the labeled training data and the unlabeled training data being reflected the weight of each label" (a person, mentally or with pen and paper, can use evaluation, judgment, or opinion to determine weights based on observed training data, the labeled training data and unlabeled training data reflecting the weight);

"based on a classification model", "the classification model being trained by ...", and "re-training the classification model ..." (regarding the "classification model", no details of the model are recited; the model is recited at a high level of generality and can be constructed by hand with pen and paper. Aside from merely repeating the claim language and providing general examples (see, e.g., paragraph 82, which gives a very generic example of a classification model, and paragraphs 8 and 72, which merely repeat the claim language), applicant's specification does not explicitly define nor provide any details of the recited "classification model". Thus, the claimed "classification model", under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper and then manually modified/trained by hand based on a reasonable amount of observed data (i.e., the labeled training data and the unlabeled training data). The model is recited at a high level of generality and therefore is being interpreted as performing a mental process on a generic computer. See MPEP 2106.04(a)(2) § III.C, which states that "a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept" still recites a mental process.)

Therefore, these limitations fall within the "mental processes" grouping of abstract ideas.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The claim recites the additional element of "causes at least one computer to execute a process" (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Examiner's note: this additional element merely uses a computer to execute a process (an abstract idea)). The claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application. Therefore, the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to apply the mental process electronically (i.e., "a machine learning program that causes at least one computer to execute a process") do not amount to significantly more than the judicial exception. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. Accordingly, at Step 2B, the additional elements do not amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 8: This claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 8 is directed to a method as depending from claim 7; the patent-eligibility analysis of claim 7 is incorporated herein.
Step 2A, Prong 1: The claim recites "acquiring a weight related to a first label in the labeled training data based on a ratio between a first proportion of data with the first label in the labeled training data and a second proportion of data estimated to have the first label in the unlabeled training data" (a person, mentally or with pen and paper, can determine a weight that is a ratio between two proportions).

Step 2A, Prong 2: The claim does not recite additional elements.

Step 2B: The claim does not recite additional elements and does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional elements of this dependent claim as a combination does not add anything further than the individual elements. The claim is not patent eligible.

Regarding Claim 9: This claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 9 is directed to a method as depending from claim 7; the patent-eligibility analysis of claim 7 is incorporated herein.

Step 2A, Prong 1: The claim recites "training the classification model so as to reduce a difference between a distribution of features of the labeled training data in which the weight has been reflected and a distribution of features of the unlabeled training data" (this falls under the mental-process grouping, where reducing a difference is the intended result of the training action. As for "training the classification model ...", this also falls under the mental-process grouping: no details of the model are recited, the model is recited at a high level of generality, and it can be constructed by hand with pen and paper. Aside from merely repeating the claim language and providing general examples (see, e.g., paragraph 82, which gives a very generic example of a classification model, and paragraphs 8 and 72, which merely repeat the claim language), applicant's specification does not explicitly define nor provide any details of the recited "classification model". Thus, the claimed "classification model", under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper and then manually modified/trained by hand based on a reasonable amount of observed data (i.e., the labeled training data and the unlabeled training data). The model is recited at a high level of generality and therefore is being interpreted as performing a mental process on a generic computer. See MPEP 2106.04(a)(2) § III.C, which states that "a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept" still recites a mental process.)

Step 2A, Prong 2: The claim does not recite additional elements.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C.
102(a)(2) prior art against the later invention. Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over non-patent literature Yan et al. (“Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation”, hereinafter “Yan”) in view of non-patent literature Saerens et al. (“Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure”, hereinafter “Saerens”). Regarding claim 1, Yan teaches: “A non-transitory computer-readable storage medium” (pg. 949, Section 5, 2nd paragraph (“All experiments are implemented… and run on a PC…”); Examiner’s Note (EN): a PC includes a non-transitory computer-readable storage medium). “storing a machine learning program” (pg. 948, section 4; EN: WDAN is the machine learning model, which exists as a program and is stored on the PC’s storage medium in order to be executed). “that causes at least one computer to execute a process,” (pg. 949, section 5, 2nd paragraph (“…run on a PC equipped with a NVIDIA GTX 1080 GPU and 32G RAM.”); EN: this denotes the use of a computer to run the system). “the process comprising” (pg. 948, Section 4; EN: describes a multi-step machine learning method, which is the “process” that the program executes). “estimating a first label distribution that is a label distribution of unlabeled training data” ((pg. 949, Section 4 C-step, particularly the portion under equation 14: “The class weight wtc can be estimated by (…), where N is the number of target samples.”) and (pg.
945, abstract: “challenge lies in the fact that the class label in target domain is unavailable”) and (section 4, third paragraph: “(i) E-step: estimating the class posterior probability… (ii) C-step: assigning pseudo-labels…”)). EN: The instant application, in paragraph 22, defines a label distribution as referring “to an appearance frequency of the labels on a class-by-class basis.” The Yan reference finds the class weights by using the equation Mc/M, where Mc is the number of labels in class c and M is the total number of labels. This is therefore a frequency of the labels on a class-by-class basis, and class weights, under the broadest reasonable interpretation (BRI), fall under the definition of a label distribution. This denotes estimating the target domain class weights, which are the “first label distribution” under the BRI. This estimation is based on the “unlabeled training data” (target domain data, whose labels are “unavailable”) by assigning pseudo-labels derived from the classifier’s posterior probability. “based on a classification model” (pg. 949, Section 4, third paragraph: “we first estimate the class posterior probability based on the output of softmax classifier. The pseudo-label to yj is assigned to xtj based on the maximum posterior probability…”; EN: this denotes the use of a softmax classifier (a classification model, part of the CNN model) to estimate the class posterior probability, which is then used as a pseudo-label. This is directly implemented in the equation estimating the label distribution of unlabeled data. Therefore, the estimation is based on a classification model.) “…” “the classification model being trained by using labeled training data which corresponds to a transfer source domain and unlabeled training data which corresponds to the transfer target domain;” (pg.
947, Section 2.1: “…we focus on… unsupervised domain adaptation (UDA), where the labels of all target samples are unknown during training”) and (pg. 948, section 3: “Denote by Ds = {(xsi, ysi )}i=1M the training set from source domain and Dt = {(xtj)}j=1M the test set from the target domain”). EN: this denotes a UDA method that explicitly uses “labeled training data” from the “transfer source domain” (Ds, which has labels ysi) and unlabeled training data (Dt) in its training process. “acquiring a second label distribution based on the labeled training data;” (Section 4, 2nd paragraph: “wsc is estimated based on the source data Dls, i.e., wsc = Mc /M, where Mc is the number of samples of the c-th class”; EN: The instant application, in paragraph 22, defines a label distribution as referring “to an appearance frequency of the labels on a class-by-class basis.” The Yan reference finds the class weights by using the equation Mc/M, where Mc is the number of labels in class c and M is the total number of labels. This is therefore a frequency of the labels on a class-by-class basis, and class weights, under the broadest reasonable interpretation (BRI), fall under the definition of a label distribution. This denotes measuring/acquiring the source label distribution from the labeled source domain data, calculating class weights as the proportion of samples in each class). “acquiring a weight of each label included in at least one training data selected from the labeled training data and the unlabeled training data based on a difference between the first label distribution and the second label distribution” (pg. 949, C-step: “Then the auxiliary weight can be updated with αc = wtc / wsc”). EN: this denotes acquiring an “auxiliary weight” (αc), which is a class-specific weight. This meets the claim limitation of acquiring a weight for “each label” because it calculates a distinct weight for every individual class c.
The weight is calculated as the ratio (a form of “difference”) between the “first label distribution” (wtc) and the “second label distribution” (wsc). “re-training the classification model by the labeled training data and the unlabeled training data, the labeled training data and the unlabeled training data being reflected the weight of each label.” (pg. 946, Figure 2: “the proposed weighted MMD removes the effect of class bias by first reweighting source data”) and (pg. 949, M-step: “Fixed α, the subproblem on W can be formulated as, (equation 15)”; EN: Yan’s M-step is the “re-training” step, where the “classification model” parameters (W) are updated. This is done by minimizing a loss function L(W) that includes the “weighted MMD” term (MMDl,w). This term explicitly uses the acquired “weight” (α, which is αc for each label) to reweight the source data, thereby “reflecting the weight” in the re-training. Although Yan substantially discloses the claimed invention, Yan does not explicitly disclose “an initial value of a label distribution of a transfer target domain”. However, in the same field, analogous art Saerens teaches “an initial value of a label distribution of a transfer target domain” (pages 8-9; EN: this denotes an initial value which serves as a starting point for the estimation. In equation 9, Saerens sets this initial value to the known label distribution. Additionally, in the subsequent paragraph, Saerens discloses that this value can be any “a priori knowledge” that we may have). Yan and Saerens are both analogous art to the claimed invention because they both involve estimating label distributions from unlabeled data (see, e.g., Yan, Abstract and page 949, and Saerens, pages 8-9).
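As an illustrative annotation (not part of the Office Action record), the class-weight computations the rejection maps onto the claimed “first label distribution” (wtc), “second label distribution” (wsc), and per-label weight (αc = wtc/wsc) can be sketched as follows. The function and variable names are hypothetical; Yan’s actual method operates on CNN softmax outputs and pseudo-labels rather than toy label lists:

```python
# Illustrative sketch of class-weight estimation in the spirit of Yan's
# C-step. All names are hypothetical and chosen for readability.

from collections import Counter

def class_weights(labels, num_classes):
    """w_c = M_c / M: appearance frequency of each label, class by class."""
    counts = Counter(labels)
    total = len(labels)
    return [counts.get(c, 0) / total for c in range(num_classes)]

def auxiliary_weights(source_labels, target_pseudo_labels, num_classes):
    """alpha_c = w_t,c / w_s,c: ratio of target to source class weights."""
    w_s = class_weights(source_labels, num_classes)         # "second label distribution"
    w_t = class_weights(target_pseudo_labels, num_classes)  # "first label distribution"
    return [wt / ws if ws > 0 else 0.0 for wt, ws in zip(w_t, w_s)]
```

For example, a balanced source set combined with target pseudo-labels concentrated in one class yields αc above 1 for the over-represented class, which is the reweighting effect the rejection attributes to Yan’s C-step.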
Yan teaches the overall process of estimating the label distribution of unlabeled data using a classification model, while Saerens teaches the specific step of utilizing an initial value for that transfer target domain’s distribution to begin the estimation process. Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Yan and Saerens in order to provide an explicit starting point (an initial value of a label distribution) for the estimation process. A person of ordinary skill in the art would be motivated to replace Yan’s rigid and implicit initialization of auxiliary weights with a more direct and flexible method. Saerens provides this method by teaching an initial value of a label distribution which is also flexible. Saerens states, “Of course, if we have some a priori knowledge about the values of the prior probabilities, we can take these starting values for the initialization of the p(0)(ωi).” (see, e.g., Saerens, page 9). This quote teaches that the starting point of the iteration is a choice that can be optimized based on available knowledge, rather than being a fixed, implicit assumption as in Yan. Therefore, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Yan and Saerens in order to provide an explicit starting point (an initial value of a label distribution) for the estimation process (see, e.g., Saerens, pages 8-9), as suggested by Saerens. Regarding claim 2, as discussed above, Yan in view of Saerens teaches all of the limitations of claim 1.
Yan further teaches: “acquiring a weight related to a first label in the labeled training data based on a ratio between a first proportion of data with the first label in the labeled training data and a second proportion of data estimated to have the first label in the unlabeled training data.” (pg. 948-949, section 4, particularly the second paragraph, and the C-step; EN: this denotes a weight (αc) which is a ratio between wtc and wsc. wtc corresponds to the second proportion (the estimated class proportion in the unlabeled target data) and wsc corresponds to the first proportion (the class proportion of the labeled source data).) Regarding claim 3, as discussed above, Yan in view of Saerens teaches all of the limitations of claim 1. Yan further teaches: “wherein the process further comprising training the classification model so as to reduce a difference” (pg. 947, section 2.1, 2nd paragraph; EN: this denotes a strategy in this field, which is to create a training method to reduce the difference between the feature distributions.) “between a distribution of features of the labeled training data in which the weight has been reflected and” (pg. 948, section 3, equation 8; EN: this denotes the use of features from the labeled training data which are multiplied by the weight, thus reflecting the weight). “a distribution of features of the unlabeled training data.” (pg. 948, section 3, equation 8; EN: this denotes the use of features from the unlabeled target data.) Regarding claim 4, Yan teaches: “A machine learning apparatus comprising:” (pg. 949, section 5, 2nd paragraph: “…run on a PC equipped with a NVIDIA GTX 1080 GPU and 32G RAM.”; EN: this denotes the “apparatus” that is used to run the machine learning experiments). “one or more memories” (pg. 949, section 5, 2nd paragraph: “…32G RAM”; EN: this denotes the memories of the apparatus.)
“one or more processors coupled to the one or more memories” (pg. 949, section 5, 2nd paragraph: “…run on a PC equipped with a NVIDIA GTX 1080 GPU and 32G RAM.”; EN: this denotes a PC, which inherently teaches a processor (CPU) and requires the processor to be coupled with memory to function.) “and the one or more processors configured to:” (pg. 949, section 5, 2nd paragraph: “All experiments are implemented by using Caffe Toolbox”; EN: this denotes that the processor of the PC is configured by the software implementation of the WDAN model.) “estimate a first label distribution that is a label distribution of unlabeled training data” ((pg. 949, Section 4 C-step, particularly the portion under equation 14: “The class weight wtc can be estimated by (…), where N is the number of target samples.”) and (pg. 945, abstract: “challenge lies in the fact that the class label in target domain is unavailable”) and (section 4, third paragraph: “(i) E-step: estimating the class posterior probability… (ii) C-step: assigning pseudo-labels…”)). EN: The instant application, in paragraph 22, defines a label distribution as referring “to an appearance frequency of the labels on a class-by-class basis.” The Yan reference finds the class weights by using the equation Mc/M, where Mc is the number of labels in class c and M is the total number of labels. This is therefore a frequency of the labels on a class-by-class basis, and class weights, under the broadest reasonable interpretation (BRI), fall under the definition of a label distribution. This denotes estimating the target domain class weights, which are the “first label distribution” under the BRI. This estimation is based on the “unlabeled training data” (target domain data, whose labels are “unavailable”) by assigning pseudo-labels derived from the classifier’s posterior probability. “based on a classification model” (pg.
949, Section 4, third paragraph: “we first estimate the class posterior probability based on the output of softmax classifier. The pseudo-label to yj is assigned to xtj based on the maximum posterior probability…”; EN: this denotes the use of a softmax classifier (a classification model, part of the CNN model) to estimate the class posterior probability, which is then used as a pseudo-label. This is directly implemented in the equation estimating the label distribution of unlabeled data. Therefore, the estimation is based on a classification model.) “…” “the classification model being trained by using labeled training data which corresponds to a transfer source domain and unlabeled training data which corresponds to the transfer target domain;” (pg. 947, Section 2.1: “…we focus on… unsupervised domain adaptation (UDA), where the labels of all target samples are unknown during training”) and (pg. 948, section 3: “Denote by Ds = {(xsi, ysi )}i=1M the training set from source domain and Dt = {(xtj)}j=1M the test set from the target domain”). EN: this denotes a UDA method that explicitly uses “labeled training data” from the “transfer source domain” (Ds, which has labels ysi) and unlabeled training data (Dt) in its training process. “acquire a second label distribution based on the labeled training data;” (Section 4, 2nd paragraph: “wsc is estimated based on the source data Dls, i.e., wsc = Mc /M, where Mc is the number of samples of the c-th class”; EN: The instant application, in paragraph 22, defines a label distribution as referring “to an appearance frequency of the labels on a class-by-class basis.” The Yan reference finds the class weights by using the equation Mc/M, where Mc is the number of labels in class c and M is the total number of labels. This is therefore a frequency of the labels on a class-by-class basis, and class weights, under the broadest reasonable interpretation (BRI), fall under the definition of a label distribution.
This denotes measuring/acquiring the source label distribution from the labeled source domain data, calculating class weights as the proportion of samples in each class). “acquire a weight of each label included in at least one training data selected from the labeled training data and the unlabeled training data based on a difference between the first label distribution and the second label distribution” (pg. 949, C-step: “Then the auxiliary weight can be updated with αc = wtc / wsc”). EN: this denotes acquiring an “auxiliary weight” (αc), which is a class-specific weight. This meets the claim limitation of acquiring a weight for “each label” because it calculates a distinct weight for every individual class c. The weight is calculated as the ratio (a form of “difference”) between the “first label distribution” (wtc) and the “second label distribution” (wsc). “re-train the classification model by the labeled training data and the unlabeled training data, the labeled training data and the unlabeled training data being reflected the weight of each label.” (pg. 946, Figure 2: “the proposed weighted MMD removes the effect of class bias by first reweighting source data”) and (pg. 949, M-step: “Fixed α, the subproblem on W can be formulated as, (equation 15)”; EN: Yan’s M-step is the “re-training” step, where the “classification model” parameters (W) are updated. This is done by minimizing a loss function L(W) that includes the “weighted MMD” term (MMDl,w). This term explicitly uses the acquired “weight” (α, which is αc for each label) to reweight the source data, thereby “reflecting the weight” in the re-training.
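As a hedged illustration of the reweighting idea the rejection attributes to Yan’s M-step, the sketch below computes a class-weighted discrepancy between source and target feature means. It assumes a simplified linear-kernel (mean-embedding) form; Yan’s weighted MMD of equation 15 uses a full kernel formulation, and all names here are hypothetical:

```python
# Hedged sketch: squared distance between the alpha-reweighted source
# feature mean and the unweighted target feature mean. A minimal stand-in
# for a class-weighted MMD penalty, not Yan's actual loss.

def weighted_mean_discrepancy(source_feats, source_labels, target_feats, alpha):
    """source_feats/target_feats: lists of feature vectors (lists of floats).
    alpha: per-class weight applied to each source sample via its label."""
    dim = len(source_feats[0])
    w = [alpha[y] for y in source_labels]          # per-sample weight from its class
    w_sum = sum(w)
    src_mean = [sum(wi * f[d] for wi, f in zip(w, source_feats)) / w_sum
                for d in range(dim)]
    tgt_mean = [sum(f[d] for f in target_feats) / len(target_feats)
                for d in range(dim)]
    return sum((s - t) ** 2 for s, t in zip(src_mean, tgt_mean))
```

With all alpha equal to 1 this reduces to an unweighted mean discrepancy; raising alpha for a class pulls the source mean toward that class’s features, which is the “reflecting the weight” effect described above.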
Although Yan substantially discloses the claimed invention, Yan does not explicitly disclose “an initial value of a label distribution of a transfer target domain”. However, in the same field, analogous art Saerens teaches “an initial value of a label distribution of a transfer target domain” (pages 8-9; EN: this denotes an initial value which serves as a starting point for the estimation. In equation 9, Saerens sets this initial value to the known label distribution. Additionally, in the subsequent paragraph, Saerens discloses that this value can be any “a priori knowledge” that we may have). Yan and Saerens are both analogous art to the claimed invention because they both involve estimating label distributions from unlabeled data (see, e.g., Yan, Abstract and page 949, and Saerens, pages 8-9). Yan teaches the overall process of estimating the label distribution of unlabeled data using a classification model, while Saerens teaches the specific step of utilizing an initial value for that transfer target domain’s distribution to begin the estimation process. Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Yan and Saerens in order to provide an explicit starting point (an initial value of a label distribution) for the estimation process. A person of ordinary skill in the art would be motivated to replace Yan’s rigid and implicit initialization of auxiliary weights with a more direct and flexible method. Saerens provides this method by teaching an initial value of a label distribution which is also flexible. Saerens states, “Of course, if we have some a priori knowledge about the values of the prior probabilities, we can take these starting values for the initialization of the p(0)(ωi).” (see, e.g., Saerens, page 9). This quote teaches that the starting point of the iteration is a choice that can be optimized based on available knowledge, rather than being a fixed, implicit assumption as in Yan.
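As a hedged sketch of the iterative prior re-estimation attributed to Saerens (an EM-style loop that starts from an initial label-distribution estimate, corrects classifier posteriors for it, and re-estimates the priors), the following is illustrative only; function and variable names are hypothetical and the notation does not reproduce Saerens’ paper:

```python
# Hedged sketch of EM prior re-estimation in the spirit of Saerens:
# starting from an initial prior (e.g., training-set priors or any
# a priori knowledge), iterate until the target priors stabilize.

def estimate_priors(posteriors, train_priors, init_priors, n_iter=50):
    """posteriors: classifier outputs p_train(c|x) for unlabeled samples.
    train_priors: class priors of the training (source) data.
    init_priors: the initial value of the target label distribution."""
    p = list(init_priors)
    for _ in range(n_iter):
        # E-step: rescale each posterior by the current prior estimate
        corrected = []
        for post in posteriors:
            scaled = [p[c] / train_priors[c] * post[c] for c in range(len(p))]
            z = sum(scaled)
            corrected.append([s / z for s in scaled])
        # M-step: re-estimate priors as the mean corrected posterior
        p = [sum(row[c] for row in corrected) / len(corrected)
             for c in range(len(p))]
    return p
```

Setting init_priors to the training-set priors mirrors the default starting point described for Saerens’ equation 9, while any available a priori knowledge can be substituted instead, which is the flexibility the combination relies on.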

Prosecution Timeline

Jan 25, 2023
Application Filed
Nov 14, 2025
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
