Prosecution Insights
Last updated: April 19, 2026
Application No. 17/820,711

METHOD OF PERFORMING CLASSIFICATION PROCESSING USING MACHINE LEARNING MODEL, INFORMATION PROCESSING DEVICE, AND COMPUTER PROGRAM

Final Rejection: §101, §103
Filed: Aug 18, 2022
Examiner: THOMPSON, KYLE ALLMAN
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Seiko Epson Corporation
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 4y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 5 granted / 6 resolved; +28.3% vs TC avg)
Interview Lift: +33.3% (strong; resolved cases with interview vs. without)
Avg Prosecution: 4y 3m (typical timeline)
Total Applications: 28 across all art units (22 currently pending)
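The headline figures above are simple ratios over resolved cases. A minimal sketch of the arithmetic, assuming lift is the percentage-point difference in allowance between interviewed and non-interviewed cases (the with/without-interview rates below are hypothetical placeholders, chosen only to reproduce a +33.3 point difference):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: granted cases as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allowance between cases with and
    without an examiner interview."""
    return rate_with - rate_without

# 5 granted out of 6 resolved cases gives the reported 83% allow rate.
career = allow_rate(5, 6)           # ~83.3

# Hypothetical with/without rates illustrating a +33.3 point lift.
lift = interview_lift(100.0, 66.7)  # ~33.3
```

With only 6 resolved cases, one additional grant or abandonment would move these rates by double digits, which is worth keeping in mind when reading the "vs TC avg" deltas.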

Statute-Specific Performance

§101: 40.5% (+0.5% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Tech Center averages are estimates; examiner rates are based on career data from 6 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/25/2025 and 12/15/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Arguments

Applicant’s arguments with respect to the rejection of the claims under 35 U.S.C. 101 have been fully considered but they are not persuasive.

With respect to § 101, Applicant argues (1) that the amended limitations cannot be abstract ideas, as stated on pages 10 and 11 of the remarks, and (2) that the claims allegedly provide an improvement to the technological field of machine learning, as stated on pages 12 – 15 of the remarks.

Examiner’s answer: Regarding the Applicant’s statement that the amended limitations cannot be directed to an abstract idea, specifically the “accommodating a printing medium” limitation as recited on page 11 of the remarks, the Examiner agrees that this is not an abstract idea, but it is instead mere instructions to apply an exception, as it recites only the idea of a solution or outcome (MPEP § 2106.05(f)). However, regarding the Applicant’s assertion that the claim as a whole is directed to an improvement in machine learning, the Examiner respectfully disagrees. Paragraphs 18 and 85 of the specification, cited on page 14 of Applicant’s remarks, merely repeat the claimed functions without explaining how to implement the invention as required by MPEP § 2106.05(a). The recited paragraphs do not describe the invention in sufficient detail that an ordinary artisan would recognize the claimed invention as providing an improvement. Thus, the judicial exceptions are not integrated into a practical application.
With respect to § 103: Applicant’s arguments with respect to claims 1 – 8 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 – 8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis below of the claims’ subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”), and the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, 89 Fed. Reg. 58128-58138 (July 17, 2024) (“2024 AI SME Update”).

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity).
If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself.

Claim 1

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The claim recites the following limitations:

preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers; (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

computing a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the spectral data is input into the selected machine learning model; (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

determining a class for the spectral data using the similarity, wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium.
(Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

in a printer: accommodating a printing medium; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

when a plurality of pieces of training data are input into the N machine learning models (Mere data gathering, Insignificant extra solution activity in MPEP § 2106.05(g))

computing, using a selected machine learning model selected from the N machine learning models (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
in a printer: accommodating a printing medium; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

when a plurality of pieces of training data are input into the N machine learning models (receiving or transmitting data using components and functions claimed at a high level of generality has been determined by the courts to be a well-understood, routine, and conventional activity in the field of computer functions (See MPEP § 2106.05(d)(II)(i)))

computing, using a selected machine learning model selected from the N machine learning models (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

As an ordered whole, the claim is directed to a method of classification; this is nothing more than using machine learning models to classify the provided data. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 2 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. The claim further recites:
(1) selecting one machine learning model from among the N machine learning models as the selected machine learning model; (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

(3) when the spectral data is not determined to belong to a known class in step (2), returning to step (1) and selecting a next machine learning model to perform the step (2); (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

determining that the spectral data belongs to an unknown class. (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

(2) computing the similarity using the selected machine learning model to determine the class for the spectral data using the similarity; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

(4) when a result of the classification processing using all the N machine learning models indicates that the spectral data does not belong to any known class, (Mere data gathering, Insignificant extra solution activity in MPEP § 2106.05(g))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
(2) computing the similarity using the selected machine learning model to determine the class for the spectral data using the similarity; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

(4) when a result of the classification processing using all the N machine learning models indicates that the spectral data does not belong to any known class, (receiving or transmitting data using components and functions claimed at a high level of generality has been determined by the courts to be a well-understood, routine, and conventional activity in the field of computer functions (See MPEP § 2106.05(d)(II)(i)))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 3 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. The claim further recites:

of the N machine learning models, (N - 1) machine learning models include a number of classes equal to the upper limit value (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein an upper limit value is set to a number of classes into which classification is performed using any one machine learning model from among the N machine learning models (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

one specific machine learning model includes a number of classes equal to or less than the upper limit value (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

when the classification processing is performed on the spectral data using the N machine learning models and the spectral data is determined to belong to an unknown class, (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

the determining of the class includes:

(1) when the one specific machine learning model includes the number of classes less than the upper limit value (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

performing training of the one specific machine learning model using training data including the spectral data, to add a new class for the spectral data; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

(2) when the one specific machine learning model includes the number of classes equal to the upper limit value, (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

adding a new machine learning model including the class that corresponds to the spectral data. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein an upper limit value is set to a number of classes into which classification is performed using any one machine learning model from among the N machine learning models (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

one specific machine learning model includes a number of classes equal to or less than the upper limit value (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

when the classification processing is performed on the spectral data using the N machine learning models and the spectral data is determined to belong to an unknown class, (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

the determining of the class includes:

(1) when the one specific machine learning model includes the number of classes less than the upper limit value (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

performing training of the one specific machine learning model using training data including the spectral data, to add a new class for the spectral data; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

(2) when the one specific machine learning model includes the number of classes equal to the upper limit value, (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

adding a new machine learning model including the class that corresponds to the spectral data. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 4 incorporates the rejection of claim 3.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 3 are incorporated. Please see the analysis of claim 3 above. The method steps recited in claim 3 cover mental processes; therefore, claim 4 is directed to an abstract idea (mental processes, i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

performing training of the new machine learning model using the training data including the spectral data, (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

wherein the training data further includes existing training data used to perform training concerning at least one class included in the N machine learning models. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
performing training of the new machine learning model using the training data including the spectral data, (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

wherein the training data further includes existing training data used to perform training concerning at least one class included in the N machine learning models. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 5 incorporates the rejection of claim 3.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 3 are incorporated. Please see the analysis of claim 3 above. The method steps recited in claim 3 cover mental processes; therefore, claim 5 is directed to an abstract idea (mental processes, i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the specific layer is configured such that a vector neuron disposed at a plane defined by two axes of a first axis and a second axis is disposed across a plurality of channels along a third axis extending in a direction differing from the two axes (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

the feature vector is a feature spectrum in which an activation value, corresponding to a vector length of an output vector of a vector neuron, at a planar position of the specific layer is arrayed across the plurality of channels along the third axis. (Mere data gathering, Insignificant extra solution activity in MPEP § 2106.05(g))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the specific layer is configured such that a vector neuron disposed at a plane defined by two axes of a first axis and a second axis is disposed across a plurality of channels along a third axis extending in a direction differing from the two axes (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

the feature vector is a feature spectrum in which an activation value, corresponding to a vector length of an output vector of a vector neuron, at a planar position of the specific layer is arrayed across the plurality of channels along the third axis. (receiving or transmitting data using components and functions claimed at a high level of generality has been determined by the courts to be a well-understood, routine, and conventional activity in the field of computer functions (See MPEP § 2106.05(d)(II)(i)))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 6 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. The claim further recites:
receiving an instruction indicating that one known class of the plurality of classes is set to a delete target class (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

in a specific machine learning model of the N machine learning models including the delete target class, changing an output name of the delete target class into a name indicating that the delete target class is deleted or unknown, or deleting one channel from an output layer of the machine learning model including the delete target class to restructure the machine learning model, and performing training of the restructured machine learning model. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

in a specific machine learning model of the N machine learning models including the delete target class, changing an output name of the delete target class into a name indicating that the delete target class is deleted or unknown, or deleting one channel from an output layer of the machine learning model including the delete target class to restructure the machine learning model, and performing training of the restructured machine learning model. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 7

Step 1: The claim recites a printer, which is a machine and one of the four statutory categories of eligible matter.

Step 2A Prong 1: The claim recites the following limitations:

preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

processing…a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the spectral data is input into the selected machine learning model (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

determining a class for the spectral data using the similarity, wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium. (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

a memory configured to store the machine learning model; (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

a printing medium holder configured to accommodate a printing medium; (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

a spectrum analyzer configured to perform spectrum analysis of the printing medium to acquire the spectral data; (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

one or more processors configured to execute computation using the machine learning model, wherein the one or more processors perform: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

processing of preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

processing of, when a plurality of pieces of training data are input into the N machine learning models (Mere data gathering, Insignificant extra solution activity in MPEP § 2106.05(g))

processing of computing, using a selected machine learning model selected from the N machine learning models (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

a memory configured to store the machine learning model; (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

a printing medium holder configured to accommodate a printing medium; (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

a spectrum analyzer configured to perform spectrum analysis of the printing medium to acquire the spectral data; (Field of use and technological environment: it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

one or more processors configured to execute computation using the machine learning model, wherein the one or more processors perform: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

processing of preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

processing of, when a plurality of pieces of training data are input into the N machine learning models, (receiving or transmitting data using components and functions claimed at a high level of generality has been determined by the courts to be a well-understood, routine, and conventional activity in the field of computer functions (See MPEP § 2106.05(d)(II)(i)))

processing of computing, using a selected machine learning model selected from the N machine learning models (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

As an ordered whole, the claim is directed to software per se for classification; this is nothing more than using machine learning models to classify the provided data. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 8

Step 1: The claim recites a non-transitory computer-readable storage medium, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The claim recites the following limitations:

preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

processing…a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the spectral data is input into the selected machine learning model (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

determining a class for the spectral data using the similarity, wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium. (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application.
In particular, the claim recites these additional elements: processing of preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) processing of, when a plurality of pieces of training data are input into the N machine learning models, (Mere data gathering, Insignificant extra solution activity in MPEP § 2106.05(g)) processing of computing, using a selected machine learning model selected from the N machine learning models (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) wherein the spectral data is acquired based on spectrum analysis on a printing medium (Mere data gathering, Insignificant extra solution activity in MPEP § 2106.05(g)) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
processing of preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (Mere instructions to apply an exception, as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

processing of, when a plurality of pieces of training data are input into the N machine learning models, (receiving or transmitting data using components and functions claimed at a high level of generality has been determined by the courts to be a well-understood, routine, and conventional activity in the field of computer functions (See MPEP § 2106.05(d)(II)(i)))

processing of computing, using a selected machine learning model selected from the N machine learning models (Mere instructions to apply an exception, as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

wherein the spectral data is acquired based on spectrum analysis on a printing medium (receiving or transmitting data using components and functions claimed at a high level of generality has been determined by the courts to be a well-understood, routine, and conventional activity in the field of computer functions (See MPEP § 2106.05(d)(II)(i)))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) As an ordered whole, the claim is directed to a method of classification; this is nothing more than using machine learning models to classify the provided data. Nothing in the claim provides significantly more than this. 
As such, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. 
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki (US 20210280169 A1) in view of Yang (US 10387740 B2), and further in view of Eiyama (US 20190260902 A1).

Regarding claim 1, Suzuki teaches preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (See e.g. [0050], In embodiments, the supplying unit 425 supplies the selected utterance to the embedding model 430 as the training input of the machine learning model…The supplying unit 425 supplies the selected identification of the speaker as the training output data if the classification model 435 [classify input data] performs classification by speaker (e.g., the machine learning model 100a).) (See e.g. [0065], In embodiments, the structures of the classification models 620 are different if the number of speakers or acoustic conditions to be identified (N1, . . . , N2, . . . in FIG. 6) is different because the number of the output nodes is determined based on the number of speakers or the number of acoustic conditions. [at least one class differing from other machine learning models]) (See e.g. [0059], the apparatus, such as the apparatus 400, prepares two or more machine learning models [where N is an integer equal to or more than 2] for performing classification by speaker.) 
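For context on the arrangement this limitation recites (N ≥ 2 classifiers whose class sets differ by at least one class, with one model selected for computation), a minimal illustrative sketch follows. All model names, class labels, and data here are hypothetical and are not drawn from the claims, Suzuki, or the application:

```python
# Illustrative sketch: prepare N >= 2 classifiers, each classifying into its
# own set of classes that differs from the others by at least one class,
# then select one model for computation. Hypothetical names throughout.
import numpy as np

def train_centroid_model(samples, labels):
    """Train a trivial nearest-centroid classifier: one mean vector per class."""
    classes = sorted(set(labels))
    centroids = {c: np.mean([s for s, l in zip(samples, labels) if l == c], axis=0)
                 for c in classes}
    return {"classes": classes, "centroids": centroids}

def predict(model, x):
    """Classify x into one of the model's own classes (nearest centroid)."""
    return min(model["classes"],
               key=lambda c: float(np.linalg.norm(x - model["centroids"][c])))

# Synthetic data: three well-separated clusters, one per hypothetical class.
rng = np.random.default_rng(0)
data = {c: rng.normal(loc=i, scale=0.1, size=(5, 3))
        for i, c in enumerate(["plain", "glossy", "matte"])}

# N = 2 models whose class sets differ by at least one class, as claimed.
model_a = train_centroid_model(
    np.vstack([data["plain"], data["glossy"]]), ["plain"] * 5 + ["glossy"] * 5)
model_b = train_centroid_model(
    np.vstack([data["glossy"], data["matte"]]), ["glossy"] * 5 + ["matte"] * 5)

# Select one of the N models, then compute with the selected model.
selected = model_a
print(predict(selected, data["plain"][0]))  # prints: plain
```

The nearest-centroid stand-in is chosen only to keep the sketch self-contained; the structural point is that each model covers a class set differing from the others by at least one class, and inference runs on a selected model.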
when a plurality of pieces of training data are input into the N machine learning models, (See e.g. [0038], For each task generated by the task generator 420, the supplying unit 425 supplies each utterance included in the corresponding subset of training data as training input data [training data are input into] of the embedding model 430. [machine learning models]) computing, using a selected machine learning model selected from the N machine learning models (See e.g. [0003], According to an embodiment of the present invention, a computer-implemented method includes obtaining training data including a plurality of utterances of a plurality of speakers in a plurality of acoustic conditions) (See e.g. [0048], The classification model in the machine learning model of the selected task is selected as the classification model 435. In this manner, the classification model 435 works as the classification model 120a if the first task is selected and the classification model 435 works as the classification model 120b if the second task is selected.)

Suzuki does not teach preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers; computing, a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the spectral data is input into the [selected] machine learning model.

Yang teaches preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers; (See e.g. [C12:L59 – 65], Input data (e.g., sub-region of the 2-D imagery data with three basic color channels) 2210 is processed through a number of ordered convolutional layers 2225 in a CNN based integrated circuit (e.g., integrated circuit 100 of FIG. 1) 2220. 
The output 2230 of the last layer of the ordered convolutional layers contains P channels or feature maps with F×F pixels of data per feature map.) (See e.g. [C13:L23 – 32], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. [obtained from output of at least one specific layer of the plurality of vector neuron layers] A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known objects [preparing a known feature vector group] in a local database 2460 in an object recognition task…For those sub-region with confirmed detection of an object, the CNN based IC is then loaded with reduced convolutional output channels to extract features for comparison with a local database for object recognition.) computing, a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the spectral data is input into the [selected] machine learning model (See e.g. [C13:L23 – 35], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. 
If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known [between the known feature vector group and a feature vector obtained from output of a specific layer] objects in a local database 2460 in an object recognition task.) determining a class for the [spectral] data using the similarity, wherein the determining of the class for the [spectral] data corresponds to performing the classification processing [on the spectral data to determine a type of the printing medium] (See e.g. [C13:L23 – 35], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known objects in a local database 2460 in an object recognition task. [determining a class for the [spectral] data using the similarity]) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki and Yang before them, to include Yang’s feature vectors which would allow Suzuki’s multiple machine learning models to process and manipulate feature vectors. 
One would have been motivated to make such a combination in order to reduce software slowness and the computation resources required, as suggested by Yang (US 10387740 B2) (C1:L52 – 60).

Suzuki and Yang do not teach accommodating a printing medium; performing, by a spectral analyzer in the printer, spectrum analysis of the printing medium to acquire the spectral data; wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium.

Eiyama teaches accommodating a printing medium; (See e.g. [0002], Printing on a printing medium by an inkjet printing method is widely used because the apparatus arrangement is relatively inexpensive, and color stability is high.)

performing, by a spectral analyzer in the printer, spectrum analysis of the printing medium to acquire the spectral data; (See e.g. [0002], Printing on a printing medium by an inkjet printing method is widely used because the apparatus arrangement is relatively inexpensive, and color stability is high.) (See e.g. [0024], A sensor 107 [spectral analyzer] is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [acquire the spectral data]. To detect a printing medium 105 and the colors of patches printed on the printing medium 105 [spectrum analysis of the printing medium], the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.)

[determining a class for the spectral data using the similarity], wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium. (See e.g. 
[0024], A sensor 107 is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [the spectral data]. To detect a printing medium 105 [determine a type of the printing medium] and the colors of patches printed on the printing medium 105, the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang and Eiyama before them, to include Eiyama’s spectral analyzer which would allow Suzuki and Yang’s model to incorporate collected spectral data. One would have been motivated to make such a combination in order to classify the spectral data on a printed medium, as suggested by Eiyama (US 20190260902 A1) (0007).

Regarding claim 7, Suzuki teaches a memory configured to store the machine learning model; (See e.g. [0076], The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device…A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM))

one or more processors configured to execute computation using the machine learning model, (See e.g. 
[0080], These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine) wherein the one or more processors perform: processing of preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (See e.g. [0050], In embodiments, the supplying unit 425 supplies the selected utterance to the embedding model 430 as the training input of the machine learning model…The supplying unit 425 supplies the selected identification of the speaker as the training output data if the classification model 435 [to classify input data] performs classification by speaker (e.g., the machine learning model 100a)) (See e.g. [0065], In embodiments, the structures of the classification models 620 are different if the number of speakers or acoustic conditions to be identified (N1, . . . , N2, . . . in FIG. 6) is different because the number of the output nodes is determined based on the number of speakers or the number of acoustic conditions. [at least one class differing from other machine learning models]) (See e.g. [0059], the apparatus, such as the apparatus 400, prepares two or more machine learning models [where N is an integer equal to or more than 2] for performing classification by speaker.) processing of, when a plurality of pieces of training data are input into the N machine learning models, (See e.g. [0038], For each task generated by the task generator 420, the supplying unit 425 supplies each utterance included in the corresponding subset of training data as training input data [training data are input into] of the embedding model 430. 
[machine learning models]) processing of computing, using a selected machine learning model selected from the N machine learning models (See e.g. [0003], According to an embodiment of the present invention, a computer-implemented method includes obtaining training data including a plurality of utterances of a plurality of speakers in a plurality of acoustic conditions) (See e.g. [0048], The classification model in the machine learning model of the selected task is selected as the classification model 435. In this manner, the classification model 435 works as the classification model 120a if the first task is selected and the classification model 435 works as the classification model 120b if the second task is selected.)

Suzuki does not teach preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers; processing of computing, a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the [spectral] data is input into the selected machine learning model.

Yang teaches preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers; (See e.g. [C12:L59 – 65], Input data (e.g., sub-region of the 2-D imagery data with three basic color channels) 2210 is processed through a number of ordered convolutional layers 2225 in a CNN based integrated circuit (e.g., integrated circuit 100 of FIG. 1) 2220. The output 2230 of the last layer of the ordered convolutional layers contains P channels or feature maps with F×F pixels of data per feature map.) (See e.g. [C13:L23 – 32], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. 
[obtained from output of at least one specific layer of the plurality of vector neuron layers] A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known objects [preparing a known feature vector group] in a local database 2460 in an object recognition task…For those sub-region with confirmed detection of an object, the CNN based IC is then loaded with reduced convolutional output channels to extract features for comparison with a local database for object recognition.) processing of computing, a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the [spectral] data is input into the selected machine learning model (See e.g. [C13:L23 – 32], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. [obtained from output of at least one specific layer of the plurality of vector neuron layers] A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known [between the known feature vector group and a feature vector obtained from output of a specific layer] objects in a local database 2460 in an object recognition task. Once a match has been found, the object 2480 is recognized. Local database 2460 is located or included within the deep learning objection detection and recognition system 2400.) 
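The similarity-based classification this limitation and Yang's database comparison describe can be sketched concretely: compare a feature vector (standing in for the output of a specific layer) against a stored "known feature vector group" per class, and pick the class with the best score. This is an illustrative sketch only; the vectors, class names, and cosine-similarity choice are hypothetical, not the applicant's or Yang's actual implementation:

```python
# Illustrative sketch: per-class similarity between a feature vector and a
# stored known feature vector group, with the class decided by the maximum
# similarity. All names and values are hypothetical.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify_by_similarity(feature, known_groups):
    """known_groups maps class name -> list of known feature vectors.
    The per-class similarity is the best match within that class's group."""
    scores = {cls: max(cosine(feature, k) for k in group)
              for cls, group in known_groups.items()}
    return max(scores, key=scores.get), scores

known_groups = {
    "plain":  [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])],
    "glossy": [np.array([0.0, 1.0, 0.0])],
}
feature = np.array([0.95, 0.05, 0.0])  # e.g. a layer output for one input
cls, scores = classify_by_similarity(feature, known_groups)
print(cls)  # prints: plain
```

Taking the best match within each class's group is one plausible reading of "a similarity, for each class"; an average over the group would be an equally simple alternative.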
determining a class for the [spectral] data using the similarity, wherein the determining of the class for the [spectral] data corresponds to performing the classification processing [on the spectral data to determine a type of the printing medium] (See e.g. [C13:L23 – 35], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known objects in a local database 2460 in an object recognition task. [determining a class for the [spectral] data using the similarity])

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki and Yang before them, to include Yang’s feature vectors which would allow Suzuki’s multiple machine learning models to process and manipulate feature vectors. One would have been motivated to make such a combination in order to reduce software slowness and the computation resources required, as suggested by Yang (US 10387740 B2) (C1:L52 – 60).

Suzuki and Yang do not teach a printing medium holder configured to accommodate a printing medium; wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium.

Eiyama teaches a printing medium holder configured to accommodate a printing medium; (See e.g. [0002], Printing on a printing medium by an inkjet printing method is widely used because the apparatus arrangement is relatively inexpensive, and color stability is high.) (See e.g. 
[0025], The image of a patch or the like printed by a printhead 212 can optically be read by the sensor 107 while conveying the printing medium [a printing medium holder] in a reverse direction.)

a spectrum analyzer configured to perform spectrum analysis of the printing medium to acquire the spectral data; (See e.g. [0002], Printing on a printing medium by an inkjet printing method is widely used because the apparatus arrangement is relatively inexpensive, and color stability is high.) (See e.g. [0024], A sensor 107 [spectral analyzer] is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [acquire the spectral data]. To detect a printing medium 105 and the colors of patches printed on the printing medium 105 [spectrum analysis of the printing medium], the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.)

[determining a class for the spectral data using the similarity], wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium. (See e.g. [0024], A sensor 107 is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [the spectral data]. To detect a printing medium 105 [determine a type of the printing medium] and the colors of patches printed on the printing medium 105, the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.) 
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang and Eiyama before them, to include Eiyama’s spectral analyzer which would allow Suzuki and Yang’s model to incorporate collected spectral data. One would have been motivated to make such a combination in order to classify the spectral data on a printed medium, as suggested by Eiyama (US 20190260902 A1) (0007).

Regarding claim 8, Suzuki teaches A non-transitory computer-readable storage medium storing a computer program, the computer program being configured to cause one or more processors to perform [classification processing on classification target spectral data] using a machine learning model [including a vector neural network including a plurality of vector neuron layers], the computer program being configured to cause the one or more processors to perform: (See e.g. [0076], A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se [A non-transitory computer-readable storage medium storing a computer program]) (See e.g. 
[0004], a computer program product including one or more computer-readable storage mediums collectively storing program instructions that are executable by a processor or programmable circuitry [the computer program being configured to cause the one or more processors to perform] to cause the processor or programmable circuitry to perform operations includes obtaining training data including a plurality of utterances of a plurality of speakers in a plurality of acoustic conditions) processing of preparing N machine learning models, each of the N machine learning models being configured to classify input data into one of a plurality of classes, each of the N machine learning models being configured to include at least one class differing from other machine learning models of the N machine learning models, where N is an integer equal to or more than 2; (See e.g. [0050], In embodiments, the supplying unit 425 supplies the selected utterance to the embedding model 430 as the training input of the machine learning model…The supplying unit 425 supplies the selected identification of the speaker as the training output data if the classification model 435 [classify input data] performs classification by speaker (e.g., the machine learning model 100a).) (See e.g. [0065], In embodiments, the structures of the classification models 620 are different if the number of speakers or acoustic conditions to be identified (N1, . . . , N2, . . . in FIG. 6) is different because the number of the output nodes is determined based on the number of speakers or the number of acoustic conditions. [at least one class differing from other machine learning models]) (See e.g. [0059], the apparatus, such as the apparatus 400, prepares two or more machine learning models [where N is an integer equal to or more than 2] for performing classification by speaker.) processing of, when a plurality of pieces of training data are input into the N machine learning models, (See e.g. 
[0038], For each task generated by the task generator 420, the supplying unit 425 supplies each utterance included in the corresponding subset of training data as training input data [training data are input into] of the embedding model 430. [machine learning models]) processing of computing, using a selected machine learning model selected from the N machine learning models (See e.g. [0003], According to an embodiment of the present invention, a computer-implemented method includes obtaining training data including a plurality of utterances of a plurality of speakers in a plurality of acoustic conditions) (See e.g. [0048], The classification model in the machine learning model of the selected task is selected as the classification model 435. In this manner, the classification model 435 works as the classification model 120a if the first task is selected and the classification model 435 works as the classification model 120b if the second task is selected.)

Suzuki does not teach preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers; a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the [spectral] data is input into the [selected] machine learning model.

Yang teaches preparing a known feature vector group obtained from output of at least one specific layer of the plurality of vector neuron layers; (See e.g. [C12:L59 – 65], Input data (e.g., sub-region of the 2-D imagery data with three basic color channels) 2210 is processed through a number of ordered convolutional layers 2225 in a CNN based integrated circuit (e.g., integrated circuit 100 of FIG. 1) 2220. The output 2230 of the last layer of the ordered convolutional layers contains P channels or feature maps with F×F pixels of data per feature map.) (See e.g. 
[C13:L23 – 32], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. [obtained from output of at least one specific layer of the plurality of vector neuron layers] A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known objects [preparing a known feature vector group] in a local database 2460 in an object recognition task…For those sub-region with confirmed detection of an object, the CNN based IC is then loaded with reduced convolutional output channels to extract features for comparison with a local database for object recognition.) a similarity, for each class, between the known feature vector group and a feature vector obtained from output of a specific layer of the plurality of vector neuron layers when the [spectral] data is input into the [selected] machine learning model (See e.g. [C13:L23 – 35], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known [between the known feature vector group and a feature vector obtained from output of a specific layer] objects in a local database 2460 in an object recognition task.) 
determining a class for the [spectral] data using the similarity, wherein the determining of the class for the [spectral] data corresponds to performing the classification processing [on the spectral data to determine a type of the printing medium] (See e.g. [C13:L23 – 35], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. A two-category classification task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known objects in a local database 2460 in an object recognition task. [determining a class for the [spectral] data using the similarity]) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki and Yang before them, to include Yang’s feature vectors, which would allow Suzuki’s multiple machine learning models to process and manipulate feature vectors. One would have been motivated to make such a combination in order to reduce software slowness and conserve computation resources, as suggested by Yang (US 10387740 B2) (C1:L52 – 60). Suzuki and Yang do not teach wherein the spectral data is acquired based on spectrum analysis on a printing medium, and wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium. Eiyama teaches wherein the spectral data is acquired based on spectrum analysis on a printing medium, and [determining a class for the spectral data using the similarity]; (See e.g. 
[0024], A sensor 107 [spectral analyzer] is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [acquire the spectral data]. To detect a printing medium 105 and the colors of patches printed on the printing medium 105 [spectrum analysis of the printing medium], the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.) wherein the determining of the class for the spectral data corresponds to performing the classification processing on the spectral data to determine a type of the printing medium. (See e.g. [0024], A sensor 107 is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [the spectral data]. To detect a printing medium 105 [determine a type of the printing medium.] and the colors of patches printed on the printing medium 105, the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang and Eiyama before them, to include Eiyama’s spectral analyzer, which would allow Suzuki and Yang’s model to incorporate collected spectral data. 
One would have been motivated to make such a combination in order to classify the spectral data on a printed medium, as suggested by Eiyama (US 20190260902 A1) (0007). Claims 2 – 4 are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki (US 20210280169 A1) in view of Yang (US 10387740 B2) further in view of Eiyama (US 20190260902 A1) further in view of KANO (US 20190360942 A1). Regarding claim 2, Suzuki, Yang and Eiyama teach the method of claim 1. Suzuki further teaches (1) selecting one machine learning model from among the N machine learning models as the selected machine learning model; (See e.g. [0048], The classification model in the machine learning model of the selected task is selected as the classification model 435. In this manner, the classification model 435 works as the classification model 120a if the first task is selected and the classification model 435 works as the classification model 120b if the second task is selected.) Suzuki does not teach (2) computing the similarity using the selected machine learning model to determine the class for the spectral data using the similarity; Yang teaches (2) computing the similarity using the selected machine learning model to determine the class for the [spectral] data using the similarity; (See e.g. [C13:L23 – 35], The output of the last layer of the multiple ordered convolutional layers is a feature map with F×F pixels of data 2410, which can be represented as a feature vector 2420 with F×F entries or numbers. A two-category classification [determine the class for the data] task 2430 is then performed using the feature vector 2420 to determine whether an object is detected in the corresponding sub-region. If an object is detected (shown as “YES”), the feature vector 2420 is compared with stored feature vectors of known objects in a local database 2460 in an object recognition task. Once a match has been found, the object 2480 is recognized. 
[computing the similarity] Local database 2460 is located or included within the deep learning object detection and recognition system 2400.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki and Yang before them, to include Yang’s feature vectors, which would allow Suzuki’s multiple machine learning models to process and manipulate feature vectors. One would have been motivated to make such a combination in order to reduce software slowness and conserve computation resources, as suggested by Yang (US 10387740 B2) (C1:L52 – 60). Suzuki and Yang do not teach when the classification target spectral data is not determined to belong to a known class in step (2), returning to step (1) and selecting a next machine learning model to perform the step (2); (4) when a result of the classification processing using all the N machine learning models indicates that the [spectral] data does not belong to any known class, determining that the [spectral] data belongs to an unknown class. KANO teaches when the classification target spectral data is not determined to belong to a known class in step (2), returning to step (1) and selecting a next machine learning model to perform the step (2); (See e.g. [0037], When no data is stored in the array a [ ] (i.e., when the answer is “NO” in step S27), the procedure goes to the next step (i.e., step S29). 
In step S29, the calculator 22 inputs the product data Da (which is to be processed) to an arithmetic algorithm that uses the non-defective product learning model generated by the learner 21 [selecting a next machine learning model to perform the step (2)]…When the result of the comparison indicates that the likelihood p0 of a non-defective product is equal to or greater than the threshold value t0 (i.e., when the answer is “YES” in step S30), the determiner 23 determines in step S31 that the target product data Da is non-defective product data. [target spectral data is not determined to belong to a known class in step]) (4) when a result of the classification processing using all the N machine learning models indicates that the [spectral] data does not belong to any known class, determining that the [spectral] data belongs to an unknown class. (See e.g. [0054], The invention makes it possible to, when target product data includes data on a defective product having an unknown defect, determine that this data is data on a defective product having an unknown defect.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and Kano before them, to include Kano’s classifying unknown data with machine learning models which would allow Suzuki, Yang and Eiyama’s model to process data that is unknown to the current machine learning model. One would have been motivated to make such a combination in order to be able to determine the likelihood of defective data or unknown data from input data, as suggested by Kano (US 20190360942 A1) (0051 - 0054). Suzuki, Yang and KANO do not teach spectral data. Eiyama teaches spectral data (See e.g. 
[0024], A sensor 107 is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [the spectral data]. To detect a printing medium 105 and the colors of patches printed on the printing medium 105, the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and KANO before them, to include Eiyama’s spectral analyzer which would allow Suzuki, Yang and KANO’s model to incorporate collected spectral data. One would have been motivated to make such a combination in order to classify the spectral data on a printed medium, as suggested by Eiyama (US 20190260902 A1) (0007). Regarding claim 3, Suzuki, Yang and Eiyama teach the method of claim 1. 
Suzuki and Yang do not teach wherein an upper limit value is set to a number of classes into which classification is performed using any one machine learning model from among the N machine learning models, of the N machine learning models, (N - 1) machine learning models include a number of classes equal to the upper limit value, one specific machine learning model includes a number of classes equal to or less than the upper limit value when the classification processing is performed on the [spectral] data using the N machine learning models and the [spectral] data is determined to belong to an unknown class, the determining of the class includes: (1) when the one specific machine learning model includes the number of classes less than the upper limit value, performing training of the one specific machine learning model using training data including the [spectral] data, to add a new class for the [spectral] data; (2) when the one specific machine learning model includes the number of classes equal to the upper limit value, adding a new machine learning model including the class that corresponds to the spectral data. KANO teaches wherein an upper limit value is set to a number of classes into which classification is performed using any one machine learning model from among the N machine learning models, (See e.g. [0030], As used herein, the term “predetermined requirement” refers to a requirement that the likelihoods of a defective product for all defects must each be lower than the associated defective product threshold value as a result of the comparison between the likelihood and the threshold value for each defect type, and the likelihood of a non-defective product must be lower than the non-defective product threshold value. [upper limit value is set to a number of classes]) of the N machine learning models, (N - 1) machine learning models include a number of classes equal to the upper limit value, (See e.g. 
[0030], As used herein, the term “predetermined requirement” refers to a requirement that the likelihoods of a defective product for all defects must each be lower than the associated defective product threshold value [the upper limit value] as a result of the comparison between the likelihood and the threshold value for each defect type [number of classes]) one specific machine learning model includes a number of classes equal to or less than the upper limit value (See e.g. [0035], The calculator 22 inputs the product data Da (which is to be processed) to an arithmetic algorithm that uses the defective product learning model Mn for the defect n generated by the learner 21 [one specific machine learning model]) (See e.g. [0030], As used herein, the term “predetermined requirement” refers to a requirement that the likelihoods of a defective product for all defects must each be lower than the associated defective product threshold value [the upper limit value] as a result of the comparison between the likelihood and the threshold value for each defect type) when the classification processing is performed on the [spectral] data using the N machine learning models and the [spectral] data is determined to belong to an unknown class, the determining of the class includes: (See e.g. [0022], For example, suppose that the number of known defects, is three (i.e., suppose that the known defects are the defect 1, the defect 2, and the defect 3). In this case, the learner 21 generates three defective product learning models (i.e., a defective product learning model M1, a defective product learning model M2, and a defective product learning model M3). [classification processing is performed on the data using the N machine learning models] When n is an integer equal to or greater than one, the defective product learning model for the defect n will be referred to as a “defective product learning model Mn”.) (See e.g. 
[0026], In accordance with the results of the individual determining process, the determiner 23 further performs the process of determining whether the target product data corresponds to a non-defective product, corresponds to any one of the known defect 1, . . . , and defect n, or corresponds to an unknown defect. [the data is determined to belong to an unknown class]) (1) when the one specific machine learning model includes the number of classes less than the upper limit value, performing training of the one specific machine learning model using training data including the [spectral] data, to add a new class for the [spectral] data; (See e.g. [0034], When the result of the comparison indicates that the likelihood p1 of the defect 1 is equal to or greater than the threshold value [the number of classes less than the upper limit value] t1 (i.e., when the answer is “YES” in step S22), the determiner 23 stores the difference between the likelihood p1 and the threshold value t1 (which is represented as “p1−t1”) in the array a [ ] in step S23. [training of the one specific machine learning model using training data including the data, to add a new class for the data]) (2) when the one specific machine learning model includes the number of classes equal to the upper limit value, adding a new machine learning model including the class that corresponds to the spectral data. (See e.g. [0034], When the result of the comparison indicates that the likelihood p1 of the defect 1 is equal to [number of classes equal to the upper limit value] or greater than the threshold value t1 (i.e., when the answer is “YES” in step S22), the determiner 23 stores the difference between the likelihood p1 and the threshold value t1 (which is represented as “p1−t1”) in the array a [ ] in step S23. 
When the result of the comparison indicates that the likelihood p1 of the defect 1 is lower than the threshold value t1 (i.e., when the answer is “NO” in step S22), no data is stored in the array a [ ]. The procedure then goes to the next step (i.e., step S24). [adding a new machine learning model including the class that corresponds to the spectral data.]) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and Kano before them, to include Kano’s classifying unknown data with machine learning models which would allow Suzuki, Yang and Eiyama’s model to process data that is unknown to the current machine learning model. One would have been motivated to make such a combination in order to be able to determine the likelihood of defective data or unknown data from input data, as suggested by Kano (US 20190360942 A1) (0051 - 0054). Suzuki, Yang and KANO do not teach spectral data. Eiyama teaches spectral data (See e.g. [0024], A sensor 107 is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [the spectral data]. To detect a printing medium 105 and the colors of patches printed on the printing medium 105, the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and KANO before them, to include Eiyama’s spectral analyzer which would allow Suzuki, Yang and KANO’s model to incorporate collected spectral data. 
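The claim 3 behavior discussed above — capping the number of classes per model, retraining the one model that still has room when an unknown class appears, and adding a new model once every model is full — can be sketched as follows. This is an illustrative sketch only: the cap value and function name are assumptions, and the actual retraining step is elided.

```python
UPPER_LIMIT = 10  # illustrative cap on classes per machine learning model

def add_unknown_class(models, new_class):
    """`models` is a list of per-model class lists.  When data is found
    to belong to an unknown class, either (1) add the class to the one
    model still below the cap (retraining elided here), or (2) append a
    new model holding just the new class once every model is full."""
    for classes in models:
        if len(classes) < UPPER_LIMIT:
            classes.append(new_class)  # case (1): room left in this model
            return models
    models.append([new_class])         # case (2): all models at the cap
    return models
```

In this sketch, at most one model is ever below the cap, matching the claim's recitation that (N - 1) models hold the upper-limit number of classes.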
One would have been motivated to make such a combination in order to classify the spectral data on a printed medium, as suggested by Eiyama (US 20190260902 A1) (0007). Regarding claim 4, Suzuki, Yang, Eiyama and KANO teach the method of claim 3. Suzuki and Yang do not teach performing training of the new machine learning model using the training data including the [spectral] data, wherein the training data further includes existing training data used to perform training concerning at least one class included in the N machine learning models. KANO teaches performing training of the new machine learning model using the training data including the [spectral] data, (See e.g. [0035], The calculator 22 inputs the product data Da (which is to be processed) to an arithmetic algorithm that uses the defective product learning model Mn for the defect n generated by the learner 21 [training of the new machine learning model], so that the calculator 22 acquires output data from the arithmetic algorithm. In step S24, the calculator 22 calculates, for the product data Da, a likelihood pn of the defect n in accordance with the output data. The calculator 22 further calculates a difference between the input product data Da [model using the training data including the data] and the output data so as to define this difference as a determination value. The determination value is converted into the likelihood pn of the defect n.) wherein the training data further includes existing training data used to perform training concerning at least one class included in the N machine learning models. (See e.g. 
[0035], The calculator 22 inputs the product data Da (which is to be processed) to an arithmetic algorithm that uses the defective product learning model Mn for the defect n [concerning at least one class included in the N machine learning models] generated by the learner 21…The calculator 22 further calculates a difference between the input product data Da [existing training data used to perform training] and the output data so as to define this difference as a determination value. The determination value is converted into the likelihood pn of the defect n.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and Kano before them, to include Kano’s classifying unknown data with machine learning models which would allow Suzuki, Yang and Eiyama’s model to process data that is unknown to the current machine learning model. One would have been motivated to make such a combination in order to be able to determine the likelihood of defective data or unknown data from input data, as suggested by Kano (US 20190360942 A1) (0051 - 0054). Suzuki, Yang and KANO do not teach spectral data. Eiyama teaches spectral data (See e.g. [0024], A sensor 107 is a read sensor including a three-color LED 101 with three, R (red), G (green), and B (blue) light emission colors and one photodiode 102 having a spectral sensitivity characteristic in the visible light range [the spectral data]. To detect a printing medium 105 and the colors of patches printed on the printing medium 105, the sensor 107 detects diffused reflection of light emitted by the light source on the printing medium 105.) 
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and KANO before them, to include Eiyama’s spectral analyzer which would allow Suzuki, Yang and KANO’s model to incorporate collected spectral data. One would have been motivated to make such a combination in order to classify the spectral data on a printed medium, as suggested by Eiyama (US 20190260902 A1) (0007). Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Suzuki (US 20210280169 A1) in view of Yang (US 10387740 B2) further in view of Eiyama (US 20190260902 A1) further in view of Rassool (US 11275928 B2). Regarding claim 5, Suzuki, Yang and Eiyama teach the method of claim 1. Suzuki, Yang and Eiyama do not teach wherein the specific layer is configured such that a vector neuron disposed at a plane defined by two axes of a first axis and a second axis is disposed across a plurality of channels along a third axis extending in a direction differing from the two axes; the feature vector is a feature spectrum in which an activation value, corresponding to a vector length of an output vector of vector neuron, at a planar position of the specific layer is arrayed across the plurality of channels along the third axis. Rassool teaches wherein the specific layer is configured such that a vector neuron disposed at a plane defined by two axes of a first axis and a second axis is disposed across a plurality of channels along a third axis extending in a direction differing from the two axes (See e.g. [C19:L17 – 20], the input thereto may include an additional 150 (or fewer) inputs for motion vectors, which will not cause a large increase in the number of layers or the nodes of the neural network 112. [specific layer is configured such that a vector neuron]) (See e.g. 
[Claim 1], obtaining first motion information encoding motion of face pixels of a face in video data including: obtaining a first set of motion information indicating motion of the face pixels in a first direction from a first frame to a second frame of the video data [a plurality of channels]; obtaining a second set of motion information indicating motion of the face pixels [a plurality of channels] in a second direction from the first frame to the second frame, wherein the second direction is different from the first direction; and obtaining a third set of motion information indicating motion of the face pixels in a third direction from the first frame to the second frame, wherein the third direction is different from the first direction and the second direction; [along a third axis extending in a direction differing from the two axes]) the feature vector is a feature spectrum in which an activation value, corresponding to a vector length of an output vector of vector neuron, at a planar position of the specific layer is arrayed across the plurality of channels along the third axis. (See e.g. [C6:L65 – 1], a plurality of blocks or areas and motion vectors [corresponding to a vector length of an output vector of vector neuron] associated with at least some of the blocks that indicate motion of pixels in one or more frames proximate to the provided frame (e.g., an adjacent frame, a frame within three frames of the provided frame) (See e.g. [C7:L9 – 13], a set of motion information regarding motion in a first direction (e.g., horizontal direction), a set of motion information regarding motion in a second direction (e.g., vertical direction), and/or a set of motion information regarding motion in a third direction (e.g., diagonal direction, rotational direction about an axis). 
[at a planar position of the specific layer is arrayed across the plurality of channels along the third axis] Motion vectors may include a set of affine transformations that may describe the motion scaling of the block or macroblock.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and Rassool before them, to include Rassool’s vectors configured with channels across multiple axes which would allow Suzuki, Yang and Eiyama’s machine learning model to have specified layers for vector neurons. One would have been motivated to make such a combination in order to improve the confidence in the authentication of the provided data, as suggested by Rassool (US 11275928 B2) (C2:L24 – 26). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Suzuki (US 20210280169 A1) in view of Yang (US 10387740 B2) further in view of Eiyama (US 20190260902 A1) further in view of Mosko (US 20220100477 A1). Regarding claim 6, Suzuki, Yang and Eiyama teach the method of claim 1. Suzuki teaches in a specific machine learning model of the N machine learning models (See e.g. [0059], the apparatus, such as the apparatus 400, prepares two or more machine learning [the N machine learning models] for performing classification by speaker.) Suzuki, Yang and Eiyama do not teach receiving an instruction indicating that one known class of the plurality of classes is set to a delete target class; in a [specific machine learning model of the N machine learning] models including the delete target class, changing an output name of the delete target class into a name indicating that the delete target class is deleted or unknown, or deleting one channel from an output layer of the [machine learning] model including the delete target class to restructure the [machine learning] model, and performing training of the restructured [machine learning] model. 
Mosko teaches receiving an instruction indicating that one known class of the plurality of classes is set to a delete target class; (See e.g. [0050], For example, if developer 106 modifies model 204, the modification may require manual changes to code 202. Under such circumstances, system 110 can display an annotation in model [receiving an instruction indicating] 204, indicating that a change in code 202 is pending.) [that one known class of the plurality of classes is set to a delete target class;] in a [specific machine learning model of the N machine learning] models including the delete target class, changing an output name of the delete target class into a name indicating that the delete target class is deleted or unknown, or deleting one channel from an output layer of the [machine learning] model including the delete target class to restructure the [machine learning] model, and performing training of the restructured [machine learning] model. (See e.g. [0050], For example, if developer 106 modifies model 204, the modification may require manual changes to code 202. Under such circumstances, system 110 can display an annotation in model 204, indicating that a change in code 202 is pending. Once developer 106 changes code 202 to correspond to the changes in model 204, system 110 may clear the annotation. It should be noted that if partial changes in code 202 reflect the critical changes in model 204, system 110 may clear the annotation. [changing an output name of the delete target class into a name indicating that the delete target class is deleted or unknown] Examples of changes in code 202 can include, but are not limited to, renaming a namespace, renaming a class, renaming a method, renaming or redefining a variable, redefining a namespace, and adding or removing a class. 
[models including the delete target class]) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Suzuki, Yang, Eiyama and Mosko before them, to include Mosko’s removing classes from a model, which would allow Suzuki, Yang and Eiyama’s machine learning model to prune its size and reduce wasted computation. One would have been motivated to make such a combination in order to allow for streamlined modification of the model by suggesting modifications and displaying them, as suggested by Mosko (US 20220100477 A1) (0003).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ALLMAN THOMPSON whose telephone number is (571)272-3671. The examiner can normally be reached Monday - Thursday, 6 a.m. - 3 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.A.T./Examiner, Art Unit 2125 /KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125
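For reference, the overall flow recited in claims 1 and 2 of this application — select one of the N machine learning models, compute a per-class similarity, fall through to the next model when no known class matches, and conclude an unknown class only after all N models fail — can be sketched as below. The threshold value and the callable model interface are assumptions made for illustration, not features of the record.

```python
def classify_spectral_data(models, spectral_data, threshold=0.8):
    """Try each of the N models in turn.  Each model is a callable
    returning (class_label, similarity) for the input data.  The first
    class whose similarity clears the threshold wins; if no model
    produces a match, the data is assigned to an unknown class."""
    for model in models:                          # step (1): select a model
        label, similarity = model(spectral_data)  # step (2): compute similarity
        if similarity >= threshold:
            return label                          # known class determined
    return "unknown"                              # step (4): no known class
```

In practice each callable would wrap a trained classifier whose similarity score comes from comparing layer outputs against stored known feature vectors, as in the limitations mapped above.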

Prosecution Timeline

Aug 18, 2022
Application Filed
Sep 09, 2025
Non-Final Rejection — §101, §103
Dec 15, 2025
Response Filed
Feb 27, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547932
MACHINE LEARNING-ASSISTED MULTI-DOMAIN PLANNING
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner, based on the most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+33.3%)
4y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
