Prosecution Insights
Last updated: April 18, 2026
Application No. 18/063,897

EVALUATION METHOD, EVALUATION DEVICE, AND COMPUTER PROGRAM

Final Rejection (§101, §112)
Filed: Dec 09, 2022
Examiner: WU, NICHOLAS S
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Seiko Epson Corporation
OA Round: 2 (Final)

Grant Probability: 47% (Moderate)
OA Rounds to Grant: 3-4
Time to Grant: 3y 9m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 47% (grants 47% of resolved cases; 18 granted / 38 resolved; -7.6% vs TC avg)
Interview Lift: +43.1% (strong; allowance rate of resolved cases with vs. without an interview)
Avg Prosecution: 3y 9m (typical timeline)
Currently Pending: 44
Total Applications: 82 (career history, across all art units)

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 38 resolved cases.
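The headline figures above are simple ratios, and a short sketch shows how they may be derived. The allow rate follows directly from the counts shown (18 granted of 38 resolved). The lift formula is an assumption: the dashboard does not state how it computes "+43.1%", though a percentage-point gap is consistent with the 47% baseline and the 90% with-interview figure shown above.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage: granted / resolved cases."""
    return 100.0 * granted / resolved

def interview_lift_points(rate_with: float, rate_without: float) -> float:
    """Assumed definition of 'interview lift': the percentage-point gap
    between the allowance rate with an interview and the rate without."""
    return rate_with - rate_without

# Figures shown above: 18 granted / 38 resolved.
print(round(allow_rate(18, 38), 1))        # 47.4
print(interview_lift_points(90.0, 47.0))   # 43.0
```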

Office Action

Grounds of rejection: §101, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 02/27/2026 have been fully considered but they are not fully persuasive.

Regarding the 101 rejections, on pages 19-20 of “Remarks” applicant contends that the amended claim 1 does not recite abstract ideas under Step 2A Prong 1. The examiner respectfully disagrees. The amended limitations, under the broadest reasonable interpretation, recite mental process judicial exceptions. For example, the limitations of determining a self-class spectral similarity, a different-class spectral similarity, a self-class maximum data similarity, and a different-class maximum data similarity, under the broadest reasonable interpretation, recite steps of comparing samples based on how similar or dissimilar they are, which is a step of observation, evaluation, and judgment and therefore a mental process (MPEP 2106). Please see the updated 101 rejections below for details.

On pages 20-23 of “Remarks” applicant contends that the amended claim 1 provides a practical application under Step 2A Prong 2. Applicant contends that the amended limitations provide a technical improvement by creating detailed second explanatory information used to evaluate a trained model. The examiner respectfully disagrees. First, the amended claim limitations “when at least one of (i) the plurality of pieces of training data or (ii) verification data is used as the evaluation data in the step (a)” are recited in contingent limitation format, and the resulting amended limitations, and applicant's argued improvement, are not required to be performed if the contingency is not met (MPEP 2111.04). The examiner recommends that the claims be amended to remove the contingency. Second, applicant cites on pg. 23 of “Remarks”: “According to this aspect, it is possible to generate and output the second explanatory information indicating a more detailed evaluation of the trained machine learning model using the first explanatory information including more types of information. This makes it possible to efficiently improve the machine learning model based on the evaluation of the machine learning model.” See ¶ [0146] of the Specification as originally filed (applicant emphasis added). Applicant argues that the improvement uses the second explanatory information to efficiently improve the trained model. However, the claim only recites outputting the second explanatory information at step (c) and does not give an indication that the second explanatory information is used for improving the trained model. Lastly, it appears that the proposed improvement is only realized because of the specific mental concepts (data and spectral similarities) used in the claim. The judicial exception itself cannot provide the improvement. See MPEP 2106.05(a): “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements.” The examiner recommends adding additional elements that incorporate the second explanatory information into a practical application as recited in the specification.

On pages 23-24 of “Remarks” applicant contends that the amended claim 1 recites additional elements that are not well-understood, routine, or conventional activities under Step 2B. The examiner respectfully disagrees. As discussed above, the amended limitations of determining a self-class spectral similarity, a different-class spectral similarity, a self-class maximum data similarity, and a different-class maximum data similarity, under the broadest reasonable interpretation, recite mental process abstract ideas.
Additionally, the limitations of generating first explanatory information by using the aforementioned similarities, under the broadest reasonable interpretation, recite steps of mere data outputting, which have been recognized by the courts as well-understood, routine, and conventional functions. Specifically, the courts have recognized computer functions directed to mere data outputting as well-understood, routine, and conventional functions when they are claimed in a merely generic manner or as insignificant extra-solution activity (MPEP 2106.05(g)). Therefore, applicant's arguments regarding the 101 rejections are not persuasive.

Regarding the 103 rejections, the 103 rejections are overcome as applicant has amended the independent claims with the previously identified allowable subject matter of now-canceled claim 2. Therefore, the 103 rejections are withdrawn.

Claim Rejections - 35 USC § 112: Indefiniteness

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 and 3-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 1, the claim recites the limitation “(a4) generating the first explanatory information including spectral similarity information related to the spectral similarity, and data similarity information related to the data similarity, when at least one of (i) the plurality of pieces of training data or (ii) verification data is used as the evaluation data in the step (a), the verification data being not used for the training of the machine learning model and including the input data and the prior label associated with the input data.” The claim is indefinite because it is unclear how the verification data can be used to evaluate the model while not being associated with the training of the model. The examiner also notes that the limitation is claimed in contingent limitation format and is not required to be met (MPEP 2111.04). The examiner recommends amending the claims to remove the contingency.

Claim 1 also recites the limitation “(a) inputting evaluation data to the trained machine learning model to generate first explanatory information to be used for an evaluation of the machine learning model.” There is insufficient antecedent basis for this limitation in the claim because the term “the machine learning model” lacks antecedent basis. For purposes of examination, the term “the machine learning model” is interpreted as the trained machine learning model.

Claim 1 also recites the limitation “when at least one of (i) the plurality of pieces of training data or (ii) verification data is used as the evaluation data in the step (a), the verification data being not used for the training of the machine learning model and including the input data and the prior label associated with the input data.” There is insufficient antecedent basis for this limitation in the claim because the term “the machine learning model” lacks antecedent basis. For purposes of examination, the term “the machine learning model” is interpreted as the trained machine learning model.

Regarding claim 5, the claim recites the limitation “wherein the step (b1) includes generating, as the second explanatory information, information indicating that there is a possibility that the machine learning model lacks a capability of correctly performing class discrimination of the evaluation data.” There is insufficient antecedent basis for this limitation in the claim because the term “the machine learning model” lacks antecedent basis. For purposes of examination, the term “the machine learning model” is interpreted as the trained machine learning model.

Regarding claim 10, the claim recites the limitation “step (b3) includes generating, as the second explanatory information, information indicating that there is a possibility that the machine learning model lacks a capability of correctly performing class discrimination of the evaluation data.” There is insufficient antecedent basis for this limitation in the claim because the term “the machine learning model” lacks antecedent basis. For purposes of examination, the term “the machine learning model” is interpreted as the trained machine learning model.

Regarding claim 13, the claim recites the limitation “wherein the step (b3) includes generating, as the second explanatory information, information indicating that over-training of the machine learning model occurs when the value indicated by the self-class spectral similarity information is less than the predetermined second self-class spectral similarity threshold.” There is insufficient antecedent basis for this limitation in the claim because the term “the machine learning model” lacks antecedent basis. For purposes of examination, the term “the machine learning model” is interpreted as the trained machine learning model.

Regarding claim 16, the claim recites the limitation “the step (b) includes generating, as the second explanatory information, information indicating that there is a possibility that the machine learning model lacks a capability of correctly performing class discrimination of the abnormal data.” There is insufficient antecedent basis for this limitation in the claim because the term “the machine learning model” lacks antecedent basis. For purposes of examination, the term “the machine learning model” is interpreted as the trained machine learning model.

Regarding claims 3-17, the claims are rejected for at least their dependency on claim 1. Regarding claims 18 and 19, the claims are similar to claim 1 and are rejected under the same rationales.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 3-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites “An evaluation method for a trained machine learning model.” The claim recites a method, and a method is one of the four statutory categories of invention.
In Step 2A, Prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process or mathematical concept but for the recitation of generic computer components:

(b) using a value indicated by each piece of information included in the first explanatory information, to generate second explanatory information indicating an evaluation of the trained machine learning model (i.e., the broadest reasonable interpretation includes a step of evaluation and judgment that could be performed mentally or with pen and paper, like drawing a conclusion from given data, which is a mental process of evaluation/judgment (MPEP 2106));

(a2) obtaining a spectral similarity that is a similarity between the feature spectrum and a known feature spectrum included in a known feature spectrum group obtained from an output of the specific layer by inputting the plurality of pieces of training data to the trained machine learning model again, for each known feature spectrum of a plurality of known feature spectra included in the known feature spectrum group (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgment that could be performed mentally or with pen and paper, like determining how similar data is to one another, which is a mental process of observation/evaluation/judgment (MPEP 2106));

(a3) obtaining a data similarity that is a similarity between the input data and the evaluation data (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgment that could be performed mentally or with pen and paper, like comparing two datasets, which is a mental process of observation/evaluation/judgment (MPEP 2106));

the step (a2) includes: obtaining a self-class spectral similarity that is the spectral similarity between the feature spectrum and a self-class known feature spectrum of a same class as an evaluation class indicated by the prior label associated with the evaluation data among the known feature spectrum group, for each self-class known feature spectrum of a plurality of self-class known feature spectra (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgment that could be performed mentally or with pen and paper, like determining how similar a sample is to its given class, which is a mental process of observation/evaluation/judgment (MPEP 2106)); and obtaining a different-class spectral similarity that is the spectral similarity between the feature spectrum and a different-class known feature spectrum of a class different from the evaluation class among the known feature spectrum group, for each different-class known feature spectrum of a plurality of different-class known feature spectra (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgment that could be performed mentally or with pen and paper, like determining how similar a sample is to other classes, which is a mental process of observation/evaluation/judgment (MPEP 2106));

the step (a3) includes obtaining a self-class maximum data similarity that is the similarity between the input data associated with the self-class known feature spectrum that is a calculation source of a self-class maximum spectral similarity indicating a maximum value of a plurality of self-class spectral similarities, and the evaluation data (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgment that could be performed mentally or with pen and paper, like determining a class that is most similar to a sample, which is a mental process of observation/evaluation/judgment (MPEP 2106)); and obtaining a different-class maximum data similarity that is the similarity between the input data associated with the different-class known feature spectrum that is a calculation source of a different-class maximum spectral similarity indicating a maximum value of a plurality of the different-class spectral similarities, and the evaluation data (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgment that could be performed mentally or with pen and paper, like determining a class that is most different from a sample, which is a mental process of observation/evaluation/judgment (MPEP 2106)).

These claim limitations, under their broadest reasonable interpretation, cover activities classified under mental processes (concepts performed in the human mind, including observation, evaluation, judgment, or opinion; see MPEP 2106.04(a)(2), subsection (III)) or mathematical concepts (mathematical relationships, mathematical formulas or equations, or mathematical calculations; see MPEP 2106.04(a)(2), subsection (I)). Accordingly, the claim recites an abstract idea.
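For readers less familiar with the claim language, the (a2)/(a3)-type computations that the rejection characterizes as mental steps can be sketched in a few lines. This is a hypothetical illustration only: the claim does not fix a particular similarity measure, so cosine similarity, the function names, and the data layout here are assumptions.

```python
import numpy as np

def cosine(u, v):
    """Assumed similarity measure; the claim does not specify one."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def max_class_similarities(feature_spectrum, known_spectra, known_labels, eval_class):
    """Per the (a2)-style steps: score the feature spectrum against every
    known feature spectrum, split the scores into self-class vs.
    different-class, and keep the maximum of each (the claimed 'maximum
    spectral similarity').  Also return the index of each maximizing
    spectrum, i.e. the 'calculation source' whose associated input data an
    (a3)-style step would then compare against the evaluation data."""
    scores = [cosine(feature_spectrum, k) for k in known_spectra]
    self_scores = {i: s for i, s in enumerate(scores) if known_labels[i] == eval_class}
    diff_scores = {i: s for i, s in enumerate(scores) if known_labels[i] != eval_class}
    self_i = max(self_scores, key=self_scores.get)
    diff_i = max(diff_scores, key=diff_scores.get)
    return {"self_max": self_scores[self_i], "self_source": self_i,
            "diff_max": diff_scores[diff_i], "diff_source": diff_i}
```

A caller would then compute the claimed self-class and different-class maximum data similarities by comparing the evaluation data against the input data associated with the returned `self_source` and `diff_source` indices.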
In Step 2A, Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

the trained machine learning model being a vector neural network model including a plurality of vector neuron layers and being trained by using a plurality of pieces of training data including input data and a prior label associated with the input data, the evaluation method comprising steps of (i.e., the generic computer components recited in this limitation merely add the words “apply it,” or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f)));

(a) inputting evaluation data to the trained machine learning model to generate first explanatory information to be used for an evaluation of the machine learning model (i.e., the broadest reasonable interpretation of receiving a data instance is mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)));

(c) outputting the generated second explanatory information (i.e., the broadest reasonable interpretation of outputting a data instance is mere data outputting, which is insignificant extra-solution activity (MPEP 2106.05(g)));

wherein the step (a) includes steps of: (a1) inputting the evaluation data to the trained machine learning model to obtain a feature spectrum from an output of a specific layer of the trained machine learning model (i.e., the broadest reasonable interpretation of receiving a data instance is mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)));

(a4) generating the first explanatory information including spectral similarity information related to the spectral similarity, and data similarity information related to the data similarity (i.e., the broadest reasonable interpretation of outputting a data instance is mere data outputting, which is insignificant extra-solution activity (MPEP 2106.05(g)));

when at least one of (i) the training data or (ii) verification data is used as the evaluation data in the step (a), the verification data being not used for training of the machine learning model and including the input data and the prior label associated with the input data (i.e., the broadest reasonable interpretation of receiving a data instance is mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g))). The examiner notes that this limitation recites a contingent limitation. Contingent limitations are not required to be met, as they are conditional on a prior condition (MPEP 2111.04). The examiner recommends that applicant amend the independent claim to remove the contingency;

and the step (a4) includes generating the first explanatory information including self-class spectral similarity information related to the self-class maximum spectral similarity, self-class data similarity information related to the self-class maximum data similarity, different-class spectral similarity information related to the different-class maximum spectral similarity, and different-class data similarity information related to the different-class maximum data similarity (i.e., the broadest reasonable interpretation of outputting a data instance is mere data outputting, which is insignificant extra-solution activity (MPEP 2106.05(g))).

Since the claim does not contain any other additional elements that amount to an integration into a practical application, the claim is directed to an abstract idea.
In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Limitations (IX)-(XIV), under the broadest reasonable interpretation, recite steps of mere data gathering/outputting, which have been recognized by the courts as well-understood, routine, and conventional functions. Specifically, the courts have recognized computer functions directed to mere data gathering/outputting as well-understood, routine, and conventional functions when they are claimed in a merely generic manner or as insignificant extra-solution activity, considering the evidence in view of Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018); see the USPTO Berkheimer Memorandum (April 2018). The examiner uses Berkheimer Option 2, a citation to one or more of the court decisions discussed in MPEP 2106.05(d)(II) noting the well-understood, routine, and conventional nature of the additional elements: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II).

Further, limitation (VIII), under the broadest reasonable interpretation, merely recites steps that apply a generic machine learning model, which represents merely adding the words “apply it,” or an equivalent, which is not indicative of an inventive concept (MPEP 2106.05(f)).

Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 3, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. For example, claim 3 recites “wherein when the training data is used as the evaluation data in the step (a), the step (b) includes a step of (b1) generating the second explanatory information using a first training comparison result between a value indicated by the different-class spectral similarity information and a predetermined first different-class spectral similarity threshold.” Under the broadest reasonable interpretation, the limitations recite generating data based on whether a sample meets a difference level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. Claim 3 also recites “and a second training comparison result between a value indicated by the different-class data similarity information and a predetermined first different-class data similarity threshold.” Under the broadest reasonable interpretation, the limitations recite generating data based on whether a sample meets a difference level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgment are mental processes. Therefore, claim 3 does not resolve the deficiencies of claim 1.

Regarding claim 4, it is dependent upon claim 3 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception.
For example, claim 4 recites “wherein the step (b1) includes generating, as the second explanatory information, information indicating at least one of a fact that the training data includes inappropriate incomplete data or a fact that information about the training data is insufficient as information necessary for class discrimination, when the value indicated by the different-class spectral similarity information is the predetermined first different-class spectral similarity threshold or greater, and the value indicated by the different-class data similarity information is the predetermined first different-class data similarity threshold or greater.” Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a difference level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgment are mental processes. Therefore, claim 4 does not resolve the deficiencies of claim 3.

Regarding claim 5, it is dependent upon claim 3 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. For example, claim 5 recites “wherein the step (b1) includes generating, as the second explanatory information, information indicating that there is a possibility that the machine learning model lacks a capability of correctly performing class discrimination of the evaluation data, when the value indicated by the different-class spectral similarity information is the predetermined first different-class spectral similarity threshold or greater and the value indicated by the different-class data similarity information is less than the predetermined first different-class data similarity threshold.” Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a difference level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgment are mental processes. Therefore, claim 5 does not resolve the deficiencies of claim 3.

Regarding claim 6, it is dependent upon claim 3 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. For example, claim 6 recites “wherein when the plurality of pieces of training data are used as the evaluation data in the step (a), the step (b) includes a step of (b2) generating, as the second explanatory information, at least one of first training evaluation information indicating that a variation in the input data included in the plurality of pieces of training data used as the evaluation data is large, or second training evaluation information indicating that the input data included in the plurality of pieces of training data used as the evaluation data includes outlier data, when a value indicated by the self-class spectral similarity information is less than a predetermined first self-class spectral similarity threshold.” Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a similarity level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgment are mental processes. Therefore, claim 6 does not resolve the deficiencies of claim 3.
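The (b1) branching discussed for claims 4 and 5 reduces to two threshold comparisons on the different-class similarity values. A minimal sketch, with hypothetical threshold values and paraphrased messages (the claims fix only the comparison structure, not any wording):

```python
def step_b1(diff_spec_sim, diff_data_sim, spec_threshold, data_threshold):
    """Claims 4-5 pattern: compare the different-class spectral and data
    similarity values against the first different-class thresholds and pick
    the corresponding second explanatory information."""
    if diff_spec_sim >= spec_threshold:
        if diff_data_sim >= data_threshold:
            # Claim 4 branch: both values at or above their thresholds.
            return ("training data may include inappropriate/incomplete data, "
                    "or be insufficient for class discrimination")
        # Claim 5 branch: spectral similarity high, data similarity low.
        return ("model may lack the capability to correctly perform class "
                "discrimination of the evaluation data")
    return None  # combinations not addressed by claims 4-5
```

Claims 8-10 recite the same comparison structure for verification data, with the "second" thresholds substituted for the "first" ones.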
Regarding claim 7, it is dependent upon claim 6 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. For example, claim 7 recites “wherein the step (b2) includes generating the first training evaluation information as the second explanatory information when a number of the plurality of pieces of the training data as the evaluation data satisfying that the value indicated by the self-class spectral similarity information is less than the predetermined first self-class spectral similarity threshold is a predetermined first data threshold or greater, and generating the second training evaluation information as the second explanatory information when the number of the plurality of pieces of the training data as the evaluation data satisfying that the value indicated by the self-class spectral similarity information is less than the predetermined first self-class spectral similarity threshold is less than the predetermined first data threshold.” Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a certain number of samples meet a similarity level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgment are mental processes. Therefore, claim 7 does not resolve the deficiencies of claim 6.

Regarding claim 8, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. For example, claim 8 recites “wherein when the verification data is used as the evaluation data in the step (a), the step (b) includes a step of (b3) generating the second explanatory information using a first verification comparison result between the value indicated by the different-class spectral similarity information and a predetermined second different-class spectral similarity threshold.” Under the broadest reasonable interpretation, the limitations recite generating data based on whether a sample meets a difference level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. Claim 8 also recites “and a second verification comparison result between the value indicated by the different-class data similarity information and a predetermined second different-class data similarity threshold.” Under the broadest reasonable interpretation, the limitations recite generating data based on whether a sample meets a difference level to a class, which is a step of observation, evaluation, and judgment that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgment are mental processes. Therefore, claim 8 does not resolve the deficiencies of claim 1.

Regarding claim 9, it is dependent upon claim 8 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception.
For example, claim 9 recites wherein the step (b3) includes generating, as the second explanatory information, information indicating at least one of a fact that the verification data includes inappropriate incomplete data or a fact that information about the verification data is insufficient as information necessary for class discrimination, when the value indicated by the different-class spectral similarity information is the predetermined second different-class spectral similarity threshold or greater, and the value indicated by the different-class data similarity information is the predetermined second different-class data similarity threshold or greater. Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a difference level to a class which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 9 does not solve the deficiencies of claim 8. Regarding claim 10, it is dependent upon claim 8 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 10 recites wherein the step (b3) includes generating, as the second explanatory information, information indicating that there is a possibility that the machine learning model lacks a capability of correctly performing class discrimination of the evaluation data, when the value indicated by the different-class spectral similarity information is the predetermined second different-class spectral similarity threshold or greater, and the value indicated by the different-class data similarity information is less than the predetermined second different-class data similarity threshold. 
Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a difference level to a class which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 10 does not solve the deficiencies of claim 8. Regarding claim 11, it is dependent upon claim 8 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 11 recites wherein when a plurality of pieces of the verification data are used as the evaluation data in the step (a), the step (b) includes a step of (b4) generating, as the second explanatory information, at least one of first verification evaluation information indicating that a feature difference between the input data included in the plurality of pieces of the verification data and the input data included in the training data is large, or second verification evaluation information indicating that a plurality of pieces of the input data included in the plurality of pieces of the verification data include outlier data, when the value indicated by the self-class spectral similarity information is less than a predetermined second self-class spectral similarity threshold, and the value indicated by the self-class data similarity information is less than a predetermined self-class data similarity threshold. Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a similarity level to a class which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. 
Therefore, claim 11 does not solve the deficiencies of claim 8. Regarding claim 12, it is dependent upon claim 11 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 12 recites wherein the step (b4) includes generating the first verification evaluation information as the second explanatory information when a number of the plurality of pieces of the verification data as the evaluation data satisfying that the value indicated by the self-class spectral similarity information is less than the predetermined second self-class spectral similarity threshold, and the value indicated by the self-class data similarity information is less than the predetermined self-class data similarity threshold is a predetermined second data threshold or greater, and generating the second verification evaluation information as the second explanatory information when the number of the plurality of pieces of the verification data as the evaluation data satisfying that the value indicated by the self-class spectral similarity information is less than the predetermined second self-class spectral similarity threshold, and the value indicated by the self-class data similarity information is less than the predetermined self-class data similarity threshold is less than the predetermined second data threshold. Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a certain number of samples meet a similarity level to a class which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 12 does not solve the deficiencies of claim 11. 
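The claim 12 logic the examiner characterizes as a mental process is, in essence, a count-and-compare: tally the verification samples whose self-class similarities both fall below their thresholds, then pick one of two messages. A minimal illustrative sketch (all function names and threshold values below are hypothetical, not taken from the application):

```python
# Purely illustrative sketch of the claim 12 branching; the threshold
# names and values are hypothetical stand-ins, not from the application.

SPECTRAL_THRESHOLD = 0.6  # stands in for the "second self-class spectral similarity threshold"
DATA_THRESHOLD = 0.5      # stands in for the "self-class data similarity threshold"
COUNT_THRESHOLD = 3       # stands in for the "second data threshold"

def second_explanatory_info(samples):
    """samples: (self-class spectral similarity, self-class data similarity)
    pairs, one per piece of verification data used as evaluation data."""
    low_similarity_count = sum(
        1 for spectral, data in samples
        if spectral < SPECTRAL_THRESHOLD and data < DATA_THRESHOLD
    )
    if low_similarity_count >= COUNT_THRESHOLD:
        # first branch: "first verification evaluation information"
        return "feature difference between verification and training data is large"
    # second branch: "second verification evaluation information"
    return "verification input data include outlier data"
```

The sketch makes visible why the rejection treats the step as a mental process: it is a handful of comparisons and a count, with no computation beyond what could be tabulated by hand.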
Regarding claim 13, it is dependent upon claim 11 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 13 recites wherein the step (b3) includes generating, as the second explanatory information, information indicating that over-training of the machine learning model occurs when the value indicated by the self-class spectral similarity information is less than the predetermined second self-class spectral similarity threshold, and the value indicated by the self-class data similarity information is the predetermined self-class data similarity threshold or greater. Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a similarity level to a class which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 13 does not solve the deficiencies of claim 11. Regarding claim 14, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 14 recites wherein in the step (a4), the self-class spectral similarity information includes information about at least one of a representative value of a distribution of the plurality of self-class spectral similarities or the self-class maximum spectral similarity, and the different-class spectral similarity information includes information about at least one of a representative value of a distribution of the plurality of different-class spectral similarities or the different-class maximum spectral similarity. 
Under the broadest reasonable interpretation, the limitations recite generating distributions for class similarities or class differences of multiple samples which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 14 does not solve the deficiencies of claim 1. Regarding claim 15, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 15 recites wherein when abnormal data is used as the evaluation data in the step (a), the abnormal data being not associated with the prior label and being assumed to be classified as an unknown class different from a class corresponding to the prior label, the step (a2) includes specifying a maximum spectral similarity of a maximum value of a plurality of spectral similarities obtained for each of the plurality of known feature spectra. Under the broadest reasonable interpretation, the limitations recite determining a maximum class similarity for a sample which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. Claim 15 also recites the step (a3) includes obtaining a maximum data similarity that is a similarity between the input data associated with the known feature spectrum that is a calculation source of the maximum spectral similarity specified in the step (a2) and the abnormal data. Under the broadest reasonable interpretation, the limitations recite determining a maximum class similarity for a sample which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes.
Claim 15 also recites and the step (a4) includes generating the first explanatory information including the spectral similarity information related to the spectral similarity, and the maximum data similarity. Under the broadest reasonable interpretation, the limitations recite steps of mere data outputting, which has been recognized by the courts as being well-understood, routine, and conventional functions. Specifically, the courts have recognized computer functions directed to mere data outputting as well-understood, routine, and conventional functions when they are claimed in a merely generic manner or as insignificant extra-solution activity (MPEP 2106.05(g)). Therefore, claim 15 does not solve the deficiencies of claim 1. Regarding claim 16, it is dependent upon claim 15 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 16 recites wherein when the abnormal data is used as the evaluation data in the step (a), the step (b) includes generating, as the second explanatory information, information indicating that there is a possibility that the machine learning model lacks a capability of correctly performing class discrimination of the abnormal data, when the maximum spectral similarity is a predetermined abnormal spectrum threshold or greater, and the maximum data similarity is less than a predetermined abnormal data similarity threshold. Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a similarity level to a class which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 16 does not solve the deficiencies of claim 15. 
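The abnormal-data logic characterized here reduces to two threshold comparisons that select between explanatory messages. A minimal illustrative sketch of that branching (function name, message strings, and threshold values are hypothetical, not from the application; the second branch corresponds to the complementary condition the action addresses next for claim 17):

```python
# Purely illustrative sketch of the threshold branching for abnormal
# evaluation data; names and numeric values are hypothetical.

ABNORMAL_SPECTRUM_THRESHOLD = 0.7  # "predetermined abnormal spectrum threshold"
ABNORMAL_DATA_THRESHOLD = 0.5      # "predetermined abnormal data similarity threshold"

def abnormal_explanation(max_spectral_similarity, max_data_similarity):
    if max_spectral_similarity >= ABNORMAL_SPECTRUM_THRESHOLD:
        if max_data_similarity < ABNORMAL_DATA_THRESHOLD:
            # high spectral match but low data match
            return "model may lack capability to discriminate the abnormal data"
        # high spectral match and high data match
        return "information about the abnormal data is insufficient"
    return None  # neither recited contingency is met
```

Two scalar comparisons and a message lookup is precisely the kind of observation-and-judgement step the rejection says could be performed with pen and paper.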
Regarding claim 17, it is dependent upon claim 15 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 17 recites wherein when the abnormal data is used as the evaluation data in the step (a), the step (b) includes generating, as the second explanatory information, information indicating that information about the abnormal data is insufficient as information necessary for class discrimination, when the maximum spectral similarity is a predetermined abnormal spectrum threshold or greater, and the maximum data similarity is a predetermined abnormal data similarity threshold or greater. Under the broadest reasonable interpretation, the limitations recite generating conclusions for data based on whether a sample meets a similarity level to a class which is a step of observation, evaluation, and judgement which can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 17 does not solve the deficiencies of claim 15. Regarding claim 18, in step 1 of the 101 analysis set forth in MPEP 2106, the claim recites An evaluation device for a trained machine learning model,. The claim recites a device which is interpreted as a machine. A machine is one of the four statutory categories of invention. For the Step 2A/2B analyses, since claim 18 is similar to claim 1 it is rejected under the same rationales as claim 1. The additional limitation below fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. 
the evaluation device comprising: a memory configured to store the trained machine learning model…and one or more processors, wherein the one or more processors is configured to execute: (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f))). Considering additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible. Regarding claim 19, in step 1 of the 101 analysis set forth in MPEP 2106, the claim recites A non-transitory computer-readable storage medium. The claim recites a computer storage medium which is interpreted as an article of manufacture. An article of manufacture is one of the four statutory categories of invention. For the Step 2A/2B analyses, since claim 19 is similar to claim 1 it is rejected under the same rationales as claim 1. The additional limitation below fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. A non-transitory computer-readable storage medium storing a program for causing one or more computers to execute… the program causing the one or more computers to execute functions of: (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f))). Considering additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible. 
Allowable Subject Matter

Claims 1 and 3-19 are allowable over the prior art and would be allowable if rewritten or amended to overcome the rejections under 35 U.S.C. 101 and 35 U.S.C. 112(b) set forth in this Office Action. The following is a statement of reasons for indication of allowable subject matter. Below are the closest cited references, each of which discloses various aspects of claim 1:

Pai, et al., US 20200279140 A1 discloses using prototypes to compare prior training data with new testing data to determine a spectral similarity between data points by using local explanation scores. Pai does not explicitly teach an additional similarity metric and thus is silent on the additional similarity metrics in amended claim 1.

Soleimani, et al., US 20210117863 A1 discloses using a combination of a spectral similarity with a data similarity to generate an explanatory variable. Soleimani does not explicitly teach a maximum of either the spectral or data similarities or the self-class and different-class similarities.

Fu, et al., CN 113298725 B discloses maximum inter/intra class similarities but does not explicitly teach two different similarities (spectral and data) and also does not teach an explanatory variable that is a combination of the 4 similarity metrics in amended claim 1.

Ge, et al., US 20210089824 A1 discloses calculating max intra and inter class similarities between multiple samples. Ge does not explicitly teach using two different types of similarities or an explanatory variable based on all 4 of the similarity metrics in amended claim 1.

Bin Tariq, et al., US 20220391631 A1 discloses learning explanatory variables for a machine learning model by leveraging multiple data similarities. Bin Tariq does not explicitly teach using a spectral similarity, using two different types of similarities, or an explanatory variable based on all 4 of the similarity metrics in amended claim 1.
While the above references disclose the aforementioned concepts, none of them, individually or in reasonable combination, discloses all the limitations in the manner recited in independent claim 1. Therefore, in view of the amended claim 1, applicant's response, and further search, claim 1 is allowable over the prior art since the prior art taken individually or in combination fails to particularly disclose, fairly suggest, or render obvious the following limitation: “when at least one of (i) the plurality of pieces of training data or (ii) verification data is used as the evaluation data in the step (a), the verification data being not used for the training of the machine learning model and including the input data and the prior label associated with the input data, the step (a2) includes: obtaining a self-class spectral similarity that is the spectral similarity between the feature spectrum and a self-class known feature spectrum of a same class as an evaluation class indicated by the prior label associated with the evaluation data among the known feature spectrum group, for each self-class known feature spectrum of a plurality of self-class known feature spectra; and obtaining a different-class spectral similarity that is the spectral similarity between the feature spectrum and a different-class known feature spectrum of a class different from the evaluation class among the known feature spectrum group, for each different-class known feature spectrum of a plurality of different-class known feature spectra; the step (a3) includes: obtaining a self-class maximum data similarity that is the similarity between the input data associated with the self-class known feature spectrum that is a calculation source of a self-class maximum spectral similarity indicating a maximum value of a plurality of self-class spectral similarities, and the evaluation data; and obtaining a different-class maximum data similarity that is the similarity between the input data
associated with the different-class known feature spectrum that is a calculation source of a different-class maximum spectral similarity indicating a maximum value of a plurality of different-class spectral similarities, and the evaluation data; and the step (a4) includes generating the first explanatory information including self-class spectral similarity information related to the self-class maximum spectral similarity, self-class data similarity information related to the self-class maximum data similarity, different-class spectral similarity information related to the different-class maximum spectral similarity, and different-class data similarity information related to the different-class maximum data similarity.” Therefore, the above limitation renders the above independent claim allowable over prior art. Thus, independent claim 1 is considered allowable over prior art, and the dependent claims 3-17 are also considered allowable over the prior art at least by virtue of their dependence. Regarding claims 18 and 19, these claims are similarly amended with the allowable subject matter of claim 1. Therefore, claims 18 and 19 are also allowable over the art for the same reasons as claim 1.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571)270-0939. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.S.W./
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148
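The limitation found allowable above is, at bottom, a four-way similarity computation: split the known feature spectra by class, find the best-matching spectrum inside and outside the evaluation class, then compare the evaluation data against the inputs behind those two spectra. A purely illustrative sketch of that reading (not the applicant's implementation; the data layout, function names, and the choice of cosine similarity are all assumptions):

```python
# Illustrative sketch only: one plausible reading of the allowable claim 1
# limitation. All names, the dict layout, and cosine similarity are
# assumptions, not drawn from the application's specification.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def first_explanatory_info(eval_spectrum, eval_data, known, eval_class):
    """known: records with 'class', 'spectrum', and 'input_data' fields."""
    self_group = [k for k in known if k["class"] == eval_class]
    diff_group = [k for k in known if k["class"] != eval_class]
    # step (a2): best-matching known spectrum inside / outside the class
    best_self = max(self_group, key=lambda k: cosine(eval_spectrum, k["spectrum"]))
    best_diff = max(diff_group, key=lambda k: cosine(eval_spectrum, k["spectrum"]))
    # steps (a3)/(a4): data similarities against the inputs behind those spectra
    return {
        "self_class_max_spectral_sim": cosine(eval_spectrum, best_self["spectrum"]),
        "self_class_max_data_sim": cosine(eval_data, best_self["input_data"]),
        "diff_class_max_spectral_sim": cosine(eval_spectrum, best_diff["spectrum"]),
        "diff_class_max_data_sim": cosine(eval_data, best_diff["input_data"]),
    }
```

The examiner's reasons for allowance turn on exactly this pairing: two similarity types (spectral and data), each taken at its per-class maximum, combined into one piece of explanatory information.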

Prosecution Timeline

Dec 09, 2022 — Application Filed
Nov 25, 2025 — Non-Final Rejection (§101, §112)
Feb 27, 2026 — Response Filed
Apr 01, 2026 — Final Rejection (§101, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488244 — APPARATUS AND METHOD FOR DATA GENERATION FOR USER ENGAGEMENT — granted Dec 02, 2025 (2y 5m to grant)
Patent 12423576 — METHOD AND APPARATUS FOR UPDATING PARAMETER OF MULTI-TASK MODEL, AND STORAGE MEDIUM — granted Sep 23, 2025 (2y 5m to grant)
Patent 12361280 — METHOD AND DEVICE FOR TRAINING A MACHINE LEARNING ROUTINE FOR CONTROLLING A TECHNICAL SYSTEM — granted Jul 15, 2025 (2y 5m to grant)
Patent 12354017 — ALIGNING KNOWLEDGE GRAPHS USING SUBGRAPH TYPING — granted Jul 08, 2025 (2y 5m to grant)
Patent 12333425 — HYBRID GRAPH NEURAL NETWORK — granted Jun 17, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4 — Expected OA Rounds
47% — Grant Probability
90% — With Interview (+43.1%)
3y 9m — Median Time to Grant
Moderate — PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
