Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
The following action is in response to the communication(s) received on 11/03/2025.
As of the claims filed 11/03/2025:
Claims 1, 9, and 10 have been amended.
Claim 6 has been canceled.
Claims 1-5 and 7-10 are now pending.
Claims 1 and 9 are independent claims.
Response to Arguments
Applicant’s arguments filed 11/03/2025 have been fully considered, but are not fully persuasive.
Regarding claim eligibility rejections under 35 USC § 101:
Applicant asserts that the present invention requires a computer to “iterate through a potentially massive set of ‘training sample set data’” and thus does not recite a mental process (Section I, p.7 last ¶). Examiner respectfully submits that processing data, as currently recited, can be performed in the human mind, as the details regarding the processing of data do not sufficiently require a computer. The broadest reasonable interpretation of the training sample set data also encompasses a set small enough to be practically processed in the human mind.
Applicant further asserts that the multi-objective optimization cannot be performed in the human mind, as it is “a complex, computationally intensive task” (Section I, p.8 ¶1). As currently recited, the optimization step is merely a calculation step comprising two different objectives. A human can practically perform calculations which happen to have multiple objectives with the aid of pen and paper.
Applicant further asserts in Section I, p.8 ¶2:
Specifically, the step of "processing the training sample set data to determine training samples conforming to one of the counterfactual rules" requires a computer to iterate through a potentially massive set of "training sample set data."
Examiner respectfully disagrees, as processing a sample data set does not, under the broadest reasonable interpretation, require a computer.
Applicant further asserts, in Section I, p.8 ¶2:
Furthermore, amended claims 1 and 9 explicitly recite "performing multi-objective optimization" on a large candidate set to simultaneously address two fundamentally conflicting objectives: minimization of distance (smallest change in features) and maximization of prediction difference (largest change in outcome). Performing an optimization that balances these two competing objectives on real-world data is a complex, non-algorithmic, computationally intensive task that a human is not equipped to, and cannot practically, perform in their mind.
Firstly, it is unclear how the multi-objective optimization would not be algorithmic when it contains multiple calculation steps of minimization and maximization. Additionally, the computational intensity is not sufficiently recited in the claims to require a computer. As currently recited, the calculation step can be performed by a human with the aid of pen and paper.
Thus, the claims continue to recite abstract ideas.
(Section II, p.9 ¶1) Applicant further asserts that the fields of transportation, medical treatment, industrial control, and finance integrate the abstract idea into a practical application. Examiner respectfully submits that these limitations merely specify fields of use in which the abstract ideas are performed, and thus cannot integrate the abstract ideas into a practical application.
(Section II, p.9 ¶2-3) Applicant further asserts that the present claims solve a technical problem by informing the user of how the model generates the prediction results and of how to change undesired prediction results. Examiner respectfully disagrees. Explanation and counterfactual explanation, as currently recited, are not a technology, but rather an abstract idea. For example, the claims do not recite a particular machine learning model which is improved upon. Thus, the improvements in the present invention merely improve upon the abstract idea of explaining a model.
Regarding the prior art rejections under 35 USC § 102 and 35 USC § 103, Applicant’s arguments have been fully considered and are persuasive. Thus, the prior art rejections have been withdrawn.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites a computer-implemented method, thus a process, one of the four statutory categories of patentable subject matter (Step 1). However, Claim 1 further recites:
determining one or more matching rules to which a sample to be predicted conforms among the plurality of rules, based on the information indicating the plurality of rules, which is an evaluation or judgement that can be performed in the human mind;
generating an explanation model for the machine learning model, wherein the explanation model provides an explanation of a prediction result generated by the machine learning model with respect to a single sample to be predicted, which is an evaluation or judgement that can be performed in the human mind;
generating information indicating one or more counterfactual rules corresponding to the one or more matching rules respectively, which is an evaluation or judgement that can be performed in the human mind;
processing the training sample set data to determine training samples conforming to one of the counterfactual rules, and forming counterfactual candidate set data comprising the determined training samples, which is an evaluation or judgement that can be performed in the human mind;
and performing multi-objective optimization on the counterfactual candidate set data based on a plurality of objective functions, to generate a counterfactual explanation, wherein the counterfactual explanation provides conditions that the sample to be predicted is required to meet to change the prediction result, which is an evaluation or judgement that can be performed in the human mind;
wherein a first objective function among the plurality of objective functions corresponds to minimization of a distance between a training sample in the counterfactual candidate set data and the sample to be predicted, and a second objective function among the plurality of objective functions corresponds to maximization of a difference between a prediction result generated by the machine learning model with respect to the training sample in the counterfactual candidate set data and the prediction result generated by the machine learning model with respect to the sample to be predicted, which is merely a detail of an abstract idea (performing multi-objective optimization on the counterfactual candidate set data).
Thus, the claim recites an abstract idea under Step 2A Prong 1.
Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of:
A computer-implemented method, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application;
that is a model for making predictions with respect to samples in at least one of transportation field, medical treatment field, industrial control field, and finance field, which merely specifies the particular field of use or particular technological environment in which the abstract idea is to be performed, which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application;
extracting information indicating a plurality of rules, based on training sample set data for training the machine learning model and corresponding known labels, which is merely an insignificant extra-solution activity of data gathering, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application;
and providing the sample to be predicted with the explanation of the prediction result provided by the explanation model and the generated counterfactual explanation, so that the sample to be predicted can change the prediction result based on the counterfactual explanation, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application;
based on training sample set data for training the machine learning model, which merely specifies the particular field of use or particular technological environment in which the abstract idea is to be performed, which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application.
Thus, the claim is directed towards an abstract idea.
Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself: limiting the claim to a particular field of use or technological environment (MPEP 2106.05(h)), implementation on a computer (MPEP 2106.05(f)), and the activity of data gathering (MPEP 2106.05(g)) cannot provide significantly more; storing and retrieving information in memory is well-understood, routine, and conventional (MPEP 2106.05(d)(II)(iv)); and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.
Claim 2, dependent upon Claim 1, further recites:
constructing a linear model based on training samples in the training sample set data and based on whether the training sample conforms to the matching rules, which is an evaluation or judgement that can be performed in the human mind;
and fitting the prediction results of the machine learning model using the linear model, to generate the explanation model, which is an evaluation or judgement that can be performed in the human mind.
Thus, the claim recites an abstract idea under Step 2A Prong 1.
Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.
Claim 3, dependent upon Claim 1, further recites:
the explanation provided by the explanation model comprises each matching rule to which the sample to be predicted conforms and a weight corresponding to the matching rule, which is merely a detail of an abstract idea (the explanation model provides an explanation of a prediction result);
filtering the matching rules to which the sample to be predicted conforms, based on the weights, which is an evaluation or judgement that can be performed in the human mind;
and generating information indicating counterfactual rules corresponding to the filtered matching rules, which is an evaluation or judgement that can be performed in the human mind.
Thus, the claim recites an abstract idea under Step 2A Prong 1.
Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.
Claim 4, dependent upon Claim 2, further recites:
in the fitting, a difference between a prediction result generated by the linear model with respect to a training sample in the training sample set data and a prediction result generated by the machine learning model with respect to the same training sample is minimized, which is an evaluation or judgement that can be performed in the human mind.
Thus, the claim recites an abstract idea under Step 2A Prong 1.
Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.
Claim 5, dependent upon Claim 1, further recites:
the matching rule and the counterfactual rule that correspond to each other comprise one or more same features, while each of the features meets opposite conditions between the matching rule and the counterfactual rule, and the matching rule and the counterfactual rule that correspond to each other further comprise prediction results different from each other, which is merely a detail of an abstract idea (the explanation model provides an explanation).
Thus, the claim recites an abstract idea under Step 2A Prong 1.
Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.
Claim 7, dependent upon Claim 1, further recites:
the multi-objective optimization comprises multi-objective Pareto optimization, and the counterfactual explanation is generated based on calculated Pareto optimal solution, which is merely a detail of an abstract idea (performing multi-objective optimization on the counterfactual candidate set data).
Thus, the claim recites an abstract idea under Step 2A Prong 1.
Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.
Claim 8, dependent upon Claim 5, further recites:
calculating correlations between features comprised in each of the matching rules and all features that the training sample set data have, which is a mathematical concept;
for each training sample in the training sample set data, deleting, among its features, a feature for which the correlation is lower than a predetermined threshold, which is an evaluation or judgement that can be performed in the human mind;
and forming the counterfactual candidate set data based on the training sample set data for which the features have been deleted, and performing the multi-objective optimization, which is an evaluation or judgement that can be performed in the human mind.
Thus, the claim recites an abstract idea under Step 2A Prong 1.
Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.
Claim 9 recites a device, thus a machine, one of the four statutory categories of patentable subject matter (Step 1). However, Claim 9 recites precisely the abstract ideas and additional elements of Claim 1, so the Step 2A Prong 1 analysis remains the same. As for Step 2A Prong 2 and Step 2B, performance on a computer cannot integrate an abstract idea into a practical application (Step 2A Prong 2) nor provide significantly more than the abstract idea itself (Step 2B) (MPEP 2106.05(f)). Claim 9 is therefore rejected as subject-matter ineligible for the reasons set forth in the rejection of Claim 1.
Claim 10, dependent upon Claim 1, further recites no additional abstract ideas. However:
Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of:
A non-transitory computer-readable storage medium storing a computer program that, when executed by a computer, causes the computer to perform the method of explaining prediction results of a machine learning model according to claim 1, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application.
Thus, the claim is directed towards an abstract idea.
Further, under Step 2B, the additional element does not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more. Thus, the claim is ineligible.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN whose telephone number is (703)756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.H./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122