Prosecution Insights
Last updated: April 19, 2026
Application No. 17/959,341

STORAGE MEDIUM, MACHINE LEARNING METHOD, AND MACHINE LEARNING APPARATUS

Non-Final OA — §101, §102, §112
Filed: Oct 04, 2022
Examiner: PHAM, JESSICA THUY
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 33% (1 granted / 3 resolved; -21.7% vs TC avg)
Interview Lift: -33.3% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 41 across all art units (38 currently pending)

Statute-Specific Performance

§101: 26.8% (-13.2% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 3 resolved cases

Office Action

Rejections: §101, §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-18 are pending and examined herein. The drawings and specification are objected to. Claims 6, 7, and 12 are objected to. Claims 8-12 and 14-18 are rejected under 35 U.S.C. 112(b). Claims 1-18 are rejected under 35 U.S.C. 101. Claims 1-18 are rejected under 35 U.S.C. 102.

Information Disclosure Statement

The information disclosure statements (IDS) filed on 12/05/2023, 12/05/2023, and 7/31/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters “#1” and “#2” have been used to designate model creation data, models, model verification data, and prediction results. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: “STORAGE MEDIUM, MACHINE LEARNING METHOD, AND MACHINE LEARNING APPARATUS FOR CLUSTERING ENSEMBLES”.
Claim Objections

Claims 6, 7, and 12 are objected to because of the following informalities: Claims 6 and 12 state “wherein the verifying is verifying based on …”. This should be corrected to “wherein the verifying is based on …”. Claim 7 states “An machine learning method”. This should be corrected to “A machine learning method”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 8-12 and 14-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 8-12 state “The machine learning according to claim …” in the preamble of the claims. It is unclear whether “the machine learning” refers to the “machine learning method” of claim 7 or the “machine learning” in paragraph 3 of claim 7. For purposes of examination, the preambles will be interpreted as “The machine learning method according to claim …”.

Claims 14-18 state “The machine learning according to claim …” in the preamble of the claims. It is unclear whether “the machine learning” refers to the “machine learning apparatus” of claim 13 or the “machine learning” in paragraph 5 of claim 13. For purposes of examination, the preambles will be interpreted as “The machine learning apparatus according to claim …”.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1-18, in accordance with these steps, follows.

Step 1 Analysis: Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter). Claims 1-6 are directed to an article of manufacture, claims 7-12 are directed to a process, and claims 13-18 are directed to a machine. All claims are directed to statutory categories and the analysis proceeds.

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.

Regarding claim 1, the following are abstract ideas:

clustering a plurality of pieces of data; (Clustering data can be practically performed by the human mind. This is a mental process.)
generating a first model … that uses data classified into a first group by the clustering; and (Generating a model, interpreted as creating/designing a model, can be practically performed by the human mind. This is a mental process.)

verifying output accuracy of the generated first model by using data classified into a second group by the clustering. (Verifying output accuracy is the mental process of evaluation/judgment.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

A non-transitory computer-readable storage medium storing a machine learning program that causes at least one computer to execute a process, the process comprising: (This limitation recites generic computer components and processes, which amounts to mere instructions to apply an exception. See MPEP § 2106.05(f).)

… by machine learning … (This simply recites generic machine learning. This amounts to mere instructions to apply an exception.)

Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, the following is an abstract idea:

wherein the clustering is hierarchical clustering. (As discussed in regards to claim 1, clustering is a mental process. One could also perform hierarchical clustering in the human mind. This limitation is part of the mental process of clustering.)

Regarding claim 3, the rejection of claim 1 is incorporated herein.
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the generating includes machine learning that uses data classified into a third group by the clustering, and (This is the insignificant extra-solution activity of selecting a particular data source to be manipulated. See MPEP § 2106.05(g), ‘Selecting a particular data source or type of data to be manipulated’, ex. ii and iii.)

the verifying is using data classified into a fourth group by the clustering. (This is the insignificant extra-solution activity of selecting a particular data source to be manipulated. See MPEP § 2106.05(g), ‘Selecting a particular data source or type of data to be manipulated’, ex. ii and iii.)

Regarding claim 4, the rejection of claim 1 is incorporated herein. The following are abstract ideas:

wherein the first model is generated … that uses first data in the data classified into the first group, and (Generating a model, interpreted as creating/designing a model, can be practically performed by the human mind. This is a mental process.)

the process further comprising generating a second model … that uses second data in the data classified into the first group. (Generating a model, interpreted as creating/designing a model, can be practically performed by the human mind. This is a mental process.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

… by machine learning … (This simply recites generic machine learning. This amounts to mere instructions to apply an exception.)

Regarding claim 5, the rejection of claim 4 is incorporated herein.
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the verifying includes acquiring first output accuracy based on a first result output by the first model in response to an input of third data included in the data classified into the second group to the first model, and a second result output by the second model in response to an input of the third data to the second model. (Acquiring data is the known process of receiving data, which amounts to mere instructions to apply an exception. The remainder of the claim recites generic machine learning processes, which also amounts to mere instructions to apply an exception.)

Regarding claim 6, the rejection of claim 5 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the verifying is verifying based on the first output accuracy, and second output accuracy acquired based on a third result output by the first model in response to an input of fourth data included in the data classified into the second group to the first model, and a fourth result output by the second model in response to an input of the fourth data to the second model. (Acquiring data is the known process of receiving data, which amounts to mere instructions to apply an exception. The remainder of the claim recites generic machine learning processes, which also amounts to mere instructions to apply an exception.)
Regarding claim 7, the following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

An machine learning method for a computer to execute a process comprising: (This recites generic machine learning and a generic computer performing generic computer processes. This amounts to mere instructions to apply an exception.)

The remainder of claim 7 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 8-12 recite substantially similar subject matter to claims 2-6, respectively, and are rejected with the same rationale, mutatis mutandis.

Regarding claim 13, the following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

A machine learning apparatus comprising: (This recites a generic machine learning apparatus, which amounts to mere instructions to apply an exception.)

one or more memories; and (This recites a generic computer component, which amounts to mere instructions to apply an exception.)

one or more processors coupled to the one or more memories and the one or more processors configured to: (This recites generic computer components performing generic computer processes. This amounts to mere instructions to apply an exception.)

The remainder of claim 13 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 14-18 recite substantially similar subject matter to claims 2-6, respectively, and are rejected with the same rationale, mutatis mutandis.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tao (“Subspace Selective Ensemble Algorithm Based on Feature Clustering”, 2013). This reference was submitted by the applicant through the IDS.

Regarding claim 1, Tao teaches A non-transitory computer-readable storage medium storing a machine learning program that causes at least one computer to execute a process, the process comprising: (The abstract states "A novel feature subspace ensemble learning algorithm based on feature clustering was proposed to improve ensemble classifier performance, allowing for high dimensional data sets." One of ordinary skill in the art would realize that the algorithm that is proposed is implemented on a computer, which must have a non-transitory computer-readable storage medium storing the algorithm to execute the algorithm. As “ensemble learning” is a type of machine learning, the algorithm is a machine learning algorithm, which when executed by a computer, is a machine learning program.)
clustering a plurality of pieces of data; (Page 509 states "In general, the category number of features can’t be predetermined, so we first use hierarchical clustering methods to cluster feature." As features are clustered, data is clustered. Page 511 states "Once one feature is randomly extracted from each cluster, then a feature subspace is generated which consists of M extracted features. When we repeat k times, k feature subsets are obtain." Therefore, the subset data is from the clustering.)

generating a first model by machine learning that uses data classified into a first group by the clustering; and (Page 511 states "K-fold cross validation that after the training data are divided equally into K groups we take each subset data as validation set, other K-1 subset as train set, then K models are established, fitness function is the average of classification accuracy of K models validation set." The train set is interpreted as the “first group”. As one of ordinary skill in the art would understand, the train set of the first model would be used to train, interpreted as generate, the first model.)

verifying output accuracy of the generated first model by using data classified into a second group by the clustering. (Page 511 states "K-fold cross validation that after the training data are divided equally into K groups we take each subset data as validation set, other K-1 subset as train set, then K models are established, fitness function is the average of classification accuracy of K models validation set." The validation set is interpreted as the “second group”. As one of ordinary skill in the art would understand, the validation set of the first model would be used to validate the first model. Eq. 8 on page 511 is f(X) = (1/K) ∑_{k=1}^{K} f1(k), where f1(k) is the classification accuracy of the k-th model. Therefore, the output accuracy of the models, which includes the first model, is verified.)
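The K-fold scheme the rejection maps onto the claimed "first group" and "second group" can be sketched in a few lines. This is a minimal illustration of K-fold partitioning, not Tao's actual implementation; the data and K value are invented:

```python
def kfold_splits(data, k):
    """Yield (train, validation) pairs: each fold in turn is the
    validation set ("second group"); the remaining k-1 folds form
    the train set ("first group")."""
    folds = [data[i::k] for i in range(k)]  # round-robin partition into k groups
    for i in range(k):
        validation = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, validation

# With 10 items and K = 5 (the K the reference reportedly uses),
# each split has 8 training items and 2 validation items.
splits = list(kfold_splits(list(range(10)), 5))
```

Each of the 5 splits trains one of the K models, matching the quoted passage in which "K models are established" and the fitness function averages their validation accuracies.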
Regarding claim 2, the rejection of claim 1 is incorporated herein. Tao teaches wherein the clustering is hierarchical clustering. (Page 509 states "In general, the category number of features can’t be predetermined, so we first use hierarchical clustering methods to cluster feature.")

Regarding claim 3, the rejection of claim 1 is incorporated herein. Tao teaches wherein the generating includes machine learning that uses data classified into a third group by the clustering, and the verifying is using data classified into a fourth group by the clustering. (Page 511 states "K-fold cross validation that after the training data are divided equally into K groups we take each subset data as validation set, other K-1 subset as train set, then K models are established, fitness function is the average of classification accuracy of K models validation set." Page 511 states that "K is 5", meaning that there are 4 train subsets. The second train subset, also used to train the model, is interpreted as the third group. Eq. 8 on page 511 is f(X) = (1/K) ∑_{k=1}^{K} f1(k), where f1(k) is the classification accuracy of the k-th model. Therefore, the verifying of the first model is the verifying of all of the base models. Therefore, as one of ordinary skill in the art would understand, there is a second validation subset for the second model, which is interpreted as the fourth group.)

Regarding claim 4, the rejection of claim 1 is incorporated herein. Tao teaches wherein the first model is generated by machine learning that uses first data in the data classified into the first group, and the process further comprising generating a second model by machine learning that uses second data in the data classified into the first group. (Page 511 states "K-fold cross validation that after the training data are divided equally into K groups we take each subset data as validation set, other K-1 subset as train set, then K models are established, fitness function is the average of classification accuracy of K models validation set." The train set is interpreted as the “first group”. As one of ordinary skill in the art would understand, the train set of the first model would be used to train, interpreted as generate, the first model. Page 511 states that "K is 5", meaning that there are 5 models. The train set of the second model would therefore be used to train/generate the second model.)

Regarding claim 5, the rejection of claim 4 is incorporated herein. Tao teaches wherein the verifying includes acquiring first output accuracy based on a first result output by the first model in response to an input of third data included in the data classified into the second group to the first model, and a second result output by the second model in response to an input of the third data to the second model. (Page 511 states "The kappa statistic is defined as follows [7]. Suppose there are P classes, and let N be a P × P square array such that N_ij contains the number of test examples assigned to class i by the first classifier and to class j by the second classifier; then define Θ1 as an estimate of the probability that the pairing of classifiers agree and Θ2 as a measure of the probability that the two classifiers agree by chance.

Θ1 = ∑_{i=1}^{P} N_ii / m (9)

Θ2 = ∑_{i=1}^{P} [(∑_{j=1}^{P} N_ij / m) × (∑_{j=1}^{P} N_ji / m)] (10)

where m is the total number of test examples, ∑_{j=1}^{P} N_ij / m is the fraction of examples that the first classifier assigns to class i, and ∑_{j=1}^{P} N_ji / m is the fraction of examples the second classifier assigns to class i. With the above definitions, the kappa statistic is calculated as k = (Θ1 − Θ2) / (1 − Θ2)."
As the validation/test data examples are assigned to classes by the two models, they are the input to the two models. Therefore, the first test data point is interpreted as the third data. As explained above, the test data is interpreted as the second group. The kappa statistic is interpreted as the first output accuracy. As they are a part of the kappa statistic, they are used for verification.)

Regarding claim 6, the rejection of claim 5 is incorporated herein. Tao teaches wherein the verifying is verifying based on the first output accuracy, and second output accuracy acquired based on a third result output by the first model in response to an input of fourth data included in the data classified into the second group to the first model, and a fourth result output by the second model in response to an input of the fourth data to the second model. (Page 512 states "Compute all kappa value between pairings of base classifier and average identifying accuracy of base classifiers with validation samples." Therefore, the accuracy is based on the first and second models. Page 511 states that the accuracy is “classification accuracy”, meaning that the validation samples must have been input to the models to receive the output of classification. Therefore, the results are from the input of the validation/test data, interpreted as the fourth data.)

Regarding claim 7, Tao teaches An machine learning method for a computer to execute a process comprising: (The abstract states "A novel feature subspace ensemble learning algorithm based on feature clustering was proposed to improve ensemble classifier performance, allowing for high dimensional data sets." One of ordinary skill in the art would realize that the algorithm that is proposed is implemented on a computer, which must have a non-transitory computer-readable storage medium storing the algorithm to execute the algorithm. As “ensemble learning” is a type of machine learning, the algorithm is a machine learning algorithm, which when executed by a computer, is a machine learning program.)

The remainder of claim 7 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 8-12 recite substantially similar subject matter to claims 2-6, respectively, and are rejected with the same rationale, mutatis mutandis.

Regarding claim 13, Tao teaches A machine learning apparatus comprising: (The abstract states "A novel feature subspace ensemble learning algorithm based on feature clustering was proposed to improve ensemble classifier performance, allowing for high dimensional data sets." One of ordinary skill in the art would realize that the algorithm that is proposed is implemented on a computer, which must have a non-transitory computer-readable storage medium storing the algorithm to execute the algorithm. As “ensemble learning” is a type of machine learning, the algorithm is a machine learning algorithm, which when executed by a computer, is a machine learning program. The computer is the machine learning apparatus.)

one or more memories; and (There must be a memory to store the algorithm.)

one or more processors coupled to the one or more memories and the one or more processors configured to: (There must be a processor coupled to the memory in order to execute the algorithm stored on the memory. As the processor executes the algorithm, it is configured to execute the algorithm.)

The remainder of claim 13 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 14-18 recite substantially similar subject matter to claims 2-6, respectively, and are rejected with the same rationale, mutatis mutandis.
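The pairwise kappa statistic quoted from Tao (Eqs. 9-10) can be computed directly from the two classifiers' label assignments on the validation set. The sketch below is an illustration of that formula with invented labels, not code from the reference:

```python
def kappa(labels_a, labels_b, num_classes):
    """Pairwise kappa between two classifiers' predictions.
    N[i][j] counts test examples assigned to class i by the first
    classifier and class j by the second (Tao's P x P array)."""
    m = len(labels_a)  # total number of test examples
    N = [[0] * num_classes for _ in range(num_classes)]
    for a, b in zip(labels_a, labels_b):
        N[a][b] += 1
    # Theta1 (Eq. 9): observed probability that the classifiers agree
    theta1 = sum(N[i][i] for i in range(num_classes)) / m
    # Theta2 (Eq. 10): probability the classifiers agree by chance
    theta2 = sum(
        (sum(N[i][j] for j in range(num_classes)) / m)
        * (sum(N[j][i] for j in range(num_classes)) / m)
        for i in range(num_classes)
    )
    return (theta1 - theta2) / (1 - theta2)

a = [0, 0, 1, 1, 1, 0]  # invented predictions from a first base classifier
b = [0, 1, 1, 1, 0, 0]  # invented predictions from a second base classifier
k = kappa(a, b, 2)  # ≈ 0.333; two identical classifiers would give 1.0
```

Lower kappa between a pairing of base classifiers indicates more diversity, which is what the selective-ensemble step on page 512 exploits when choosing which base classifiers to keep.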
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA THUY PHAM, whose telephone number is (571) 272-2605. The examiner can normally be reached Monday - Thursday, 7:30 A.M. - 5:30 P.M.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.T.P./ Examiner, Art Unit 2121
/Li B. Zhen/ Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Oct 04, 2022 — Application Filed
Aug 14, 2025 — Non-Final Rejection: §101, §102, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 33%
With Interview: 0% (-33.3%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
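The projection arithmetic implied by the note above can be reproduced in a few lines. This is an assumed reconstruction of how the dashboard appears to combine the career allow rate with the interview lift; the variable names and the zero floor are illustrative, not the tool's documented formula:

```python
# Assumed derivation of the projection figures from the examiner's
# career data (1 granted of 3 resolved; -33.3% observed interview lift).
granted, resolved = 1, 3
allow_rate = granted / resolved          # base grant probability, ~0.333
interview_lift = -1 / 3                  # the dashboard's -33.3% lift

# Assumed: lift is added to the base rate and floored at zero.
with_interview = max(0.0, allow_rate + interview_lift)

base_pct = round(allow_rate * 100)       # 33 -> "33% Grant Probability"
interview_pct = round(with_interview * 100)  # 0 -> "0% With Interview"
```

With only three resolved cases, both figures are extremely small-sample estimates, which is presumably why the page flags the prediction as derived rather than measured.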
