DETAILED ACTION
This non-final Office action is responsive to applicant's amendments and request for reconsideration filed 27 Jan. 2026, and to the subsequent RCE filed 18 Feb. 2026.
Claims 1-2 and 4-11 are pending; claim 3 is canceled. Claims 1-2 and 11 have been amended. Claims 1, 5 and 11 are independent. Claims 5-7 were previously indicated as allowable.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/27/2026 has been entered.
Response to Remarks
Applicant's remarks filed 01/27/2026 have been fully considered on all issues, as addressed below.
The rejection under 35 U.S.C. 112(a) for lack of written description is hereby withdrawn, as necessitated by applicant's amendments to the claims.
The rejection under 35 U.S.C. 101 (eligibility) is hereby withdrawn in view of the amendments. In particular, the final limitation of the amendment positively recites a printer, an additional element sufficient to integrate the claim into a practical application. The examiner agrees with the remarks filed 01/27/2026 [pp. 9-10 of 14] pointing to the amendment in support of eligibility.
Applicant's remarks regarding the prior art combination are considered together with the amendments. While the examiner does not concede with respect to the prior art of Ho and Lee given the breadth of the claims, in the interest of advancing prosecution an updated search revealed additional prior art, newly applied here, particularly Ogawa (Canon). Ogawa's technique uses convolutional neural networks with a softmax output, similar to the instant application. In view of this, an updated rejection is made of record, as detailed below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over:
Lin et al., US PG Pub. No. 2020/0269573A1, hereinafter Lin, in view of
Yokouchi, Kenichi, US PG Pub. No. 2020/0314290A1, hereinafter Yokouchi, in view of
Ogawa et al., US PG Pub. No. 2021/0357674A1 (Canon), hereinafter Ogawa, as evidenced by provisional application 2020-085350 (attached; see PTO-892).
With respect to claim 1, Lin teaches:
A method for executing a discrimination process of a printing medium using a machine learning model, {Lin discloses methods, particularly Fig 7:750 “Classify Paper Type” where classification uses machine learning Fig 2A:218 for a print controller and printer Fig 1:160, introduced e.g. per [0006] “classify the print medium… based on machine learning”} the method comprising:
preparing N machine learning models where N is an integer of 2 or more, in which each of the N machine learning models is configured to discriminate a type of the printing medium {Lin discloses [0037], [0042]-[0047] a machine learning engine with algorithms that receive parameters and input features for classifying print media/papers. The classifying ML engine/algorithm is a model, which can be k-nearest neighbor or Random Forest [0039], [0042]; note that Random Forest is an ensemble technique, meaning N = a plurality}
discriminating a type of the target printing medium {Lin Fig 7:730 [0027] “classifies paper according to one of three print medium categories: Plain (or Untreated); Ink Jet Coated, or Ink Jet Treated” similarly [0047] and/or [0042] “class y is predicted for the test sample x” y is target}
performing printing by a printer using the target printing medium according to print setting selected according to the type of the target printing medium {Lin Fig 1 printing system with printer 160 and printing medium introduced [0019], for [0047-49] “print jobs” comprises “select a category for the paper type” and “setting indicating the category …category selected during the medium classification is applied as a medium setting at the printing system” similarly at [0036] “machine learning engine 218, which searches and selects a category of paper” see also Fig 6}.
Lin suggests [0021] “module 190 (e.g., spectrodensitometer, spectrophotometer, etc.) is implemented as part of a medium classification system”, but Lin does not disclose “reflectance,” which is taught by Yokouchi:
by classifying input spectral data, which is a spectral reflectance of the printing medium, into any one of a plurality of classes {Yokouchi [0132-133] “classification using the neural network 60, the spectral reflectances 61(1) to 61(36) of the solid patch PA2 for the prediction target color are given to the input layer. Then, by performing forward propagation processing in the neural network 60… classified into respective sample colors are obtain by giving the spectral reflectances” shown Figs 22, 25:S130, and 2. See also Fig 8:300 Printer, and [0129] “classified into each sample color” each color is thus classified as plurality of classes, the color having spectral reflectance as described. In other words, wavelength range in 10-nm increments Fig 12, [0073]};
acquiring target spectral data which is a spectral reflectance of a target printing medium {Yokouchi Fig 12 “target” column Re is Reflectance i.e. spectral reflectance Fig 2-bottom right, implemented e.g. [0108-09] target spectral reflectance entails subtractive difference Re-Rs (s-sample), and/or [0116,142] variable y is target of the spectral reflectance. Further, [0135-136] “target color is selected in consideration of the characteristics of the base material (printing paper) used for printing” conveys a target printing medium/paper, e.g. “paper white patch”}; and
by executing a class classification process of the target spectral data using the N machine learning models {Yokouchi [0128] “In the classification stage, the spectral reflectances of the solid patch PA2 for the prediction target color are given to the learned neural network” again at [0133] “target color should be classified into respective sample colors are obtained by giving the spectral reflectances of the solid patch PA2 of the prediction target color as input to the neural network” Figs 2, 12 and 26 plotted y-axis}.
Yokouchi is directed to machine learning techniques for printing mediums and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to classify spectral reflectance per Yokouchi in combination with Lin's classification of papers and spectrophotometer, arriving at the invention as claimed, with the motivation of “enhancing color expression” or to “enable highly accurate prediction of a color… predicted at a lower cost and with fewer man-hours” (Yokouchi [0003], [0022]). Doing so would also benefit Lin by helping to “optimize (e.g., improve) print quality” (Lin [0027]).
However, the combination of Lin and Yokouchi does not disclose the following limitation, which is disclosed by Ogawa:
each of the N machine learning models is configured to have a number of classes different from that of other machine learning models among the N machine learning models {Ogawa [0046-47] “plurality of neural networks… two neural networks” as N=2 or plurality of networks/models Figs 11-12 where [0049] “number of channels=number of classes required for classification… softmax function” describes convolutional networks for channel-wise classification of pixels with encoder and decoder, softmax classification is multi-class. Corresponding provisional support at [0026-30], Figs 11-12};
Ogawa is directed to inkjet printer devices with machine learning models and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to employ a plurality of neural networks for classification per Ogawa in combination, arriving at the invention as claimed, with the motivation of “discriminating classes” [0005] and/or “to increase the estimation accuracy of the class probability values” [0050]. Corresponding provisional support comprises [0005] and [0029].
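For illustration of the softmax mechanism described in the Ogawa citation at [0049], where the size of a model's output layer fixes its number of classes, the following is a minimal Python sketch; the two models, their scores, and their class counts are hypothetical and are not drawn from Ogawa.

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities (one per class)."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Two hypothetical models whose output layers have different numbers of
# classes (3 vs. 5), mirroring "number of channels = number of classes".
scores_model_a = [2.0, 0.5, 0.1]            # 3-class model
scores_model_b = [1.0, 0.2, 0.2, 3.0, 0.1]  # 5-class model

probs_a = softmax(scores_model_a)
probs_b = softmax(scores_model_b)

print(len(probs_a), len(probs_b))   # each model keeps its own class count
print(probs_a.index(max(probs_a)))  # predicted class = argmax of softmax output
```

The key point is only that each model's probability vector has its own length, so N models can each discriminate a different number of classes.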
With respect to claim 2, the combination of Lin, Yokouchi and Ogawa teaches the method according to claim 1, wherein the discriminating of the type of the target printing medium includes
discriminating a medium identifier indicating the type of the target printing medium according to a result of the class classification process {Lin [0046-48] “categorizing a print medium… paper type is classified as being associated with a particular category” via “algorithm to search and select a category for the paper type… suggested paper type” where algorithm uses [0042-43] “class y… assigns y the majority label” thus label, suggestion/recommendation, category may all be used as an identifier}, and
the method further comprises:
selecting the print setting according to the medium identifier {Lin discloses [0047-48] “select a category for the paper type …The setting provided represents a suggested paper type”}; and
However, Lin does not disclose “target spectral data” which is taught by Yokouchi:
class classification process of the target spectral data {Yokouchi [0128-29] “classification stage, the spectral reflectances of the solid patch PA2 for the prediction target color are given to the learned neural network… classification number” Fig 12 target column of spectral reflectance (Re), plotted Fig 26 y-axis, also [0116] or [0142] variable y}
The motivation for the combination applies equally as in claim 1.
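The majority-label prediction quoted from Lin at [0042]-[0043] (“class y… assigns y the majority label”) can be sketched as a k-nearest-neighbor vote. This is an illustrative toy only; the feature vectors, paper categories, and function names below are hypothetical, not taken from Lin.

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """Predict class y for test sample x by the majority label of the
    k nearest training samples (squared Euclidean distance)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda sample: dist(sample[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]  # the majority label becomes y

# Toy "spectral" feature vectors tagged with paper categories.
train = [([0.90, 0.80], "plain"),  ([0.88, 0.82], "plain"),
         ([0.30, 0.20], "coated"), ([0.32, 0.25], "coated")]

print(knn_predict(train, [0.31, 0.22]))  # two of the three nearest are "coated"
```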
With respect to claim 11, the rejection of claim 1 is incorporated. The difference in scope is that claim 11 recites a system comprising a memory storing the models and a processor executing the process, similar to method claim 1. Lin Fig 8 shows a computer system with memory and processor, described at [0050]-[0054]. The memory stores executable instructions for performing the processes described, including machine learning [0037], [0042]-[0047], as shown in Fig 2A. The remainder of this claim is rejected under the same rationale as claim 1.
Claim 4 is rejected under 35 U.S.C. 103 as unpatentable over Lin, Yokouchi and Ogawa, in view of Chen et al., US PG Pub. No. 2013/0129143A1 (Epson), hereinafter Chen.
With respect to claim 4, the combination Lin, Yokouchi and Ogawa teaches the method according to claim 1. Chen teaches wherein,
learning of the N machine learning models is performed using corresponding N training data groups {Chen [0054]-[0056] “cluster the training set” and “training data is randomly partitioned into M subsets” are groups of training data employed in Alg. 2 [0054] with “local classifiers F’t on each cluster”; clustering models include SVM support vector machines [0084]}, and
N spectral data groups constituting the N training data groups are in a state equivalent to a state in which the N spectral data groups are grouped into N groups by a clustering process {Chen see Figs 1:115 and 3:320 “spectral clustering” in training loop for classifiers local/global, Alg.1 [0051] details spectral clustering, Alg.1 being referenced in Alg.2 [0054] with defined training set, clusters are groups and state is subject to parameterization e.g. weighted average [0057], Fig 3:335}.
Chen is directed to model training with spectral data and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to specify spectral clustering per Chen in support of Lin in combination, arriving at the invention as claimed, with the motivation that the “spectral clustering algorithm helps effectively avoid exhaustive search for optimal model complexity” and that a “fast approximation of spectral clustering preferably may be applied” (Chen [0053]-[0054]), with the further benefits of “three advantages… automatically adjust the model complexity according to the distribution of training data; 3) the approach of local adaptation from global classifier avoids the common under-training problem” (Chen [0038]).
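As a rough illustration of grouping training data into N clusters, each of which would then train its own local classifier as in Chen's Alg. 2, the following hypothetical sketch substitutes plain 1-D k-means for Chen's actual spectral clustering (which instead operates on a graph Laplacian of pairwise affinities); all data and names are invented for illustration.

```python
def kmeans_1d(values, n_groups=2, iters=10):
    """Partition scalar features into n_groups clusters with plain k-means.
    (A stand-in for spectral clustering: same outcome here, N disjoint groups.)"""
    # Spread the initial centers across the sorted value range.
    centers = sorted(values)[::max(1, len(values) // n_groups)][:n_groups]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)  # assign each value to its nearest center
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return groups

# Mean reflectances of training samples fall into two natural clusters;
# each resulting group would then train its own local classifier.
samples = [0.91, 0.88, 0.90, 0.30, 0.28, 0.33]
groups = kmeans_1d(samples, n_groups=2)
print([sorted(g) for g in groups])
```

The point mirrored from the claim language is only that the N training data groups end up in a state equivalent to having been produced by a clustering process.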
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Lin, Yokouchi and Ogawa in view of Ferreira et al., “Data-Driven Feature Characterization Techniques for Laser Printer Attribution” hereinafter Ferreira, and further in view of Saragadam et al., “Programmable Spectrometry—Per-pixel Classification of Materials using Learned Spectral Filters” hereinafter Saragadam.
With respect to claim 8, the combination Lin, Yokouchi and Ogawa teaches the method according to claim 1, further comprising:
a medium exclusion step of excluding one printing medium to be excluded from the object to be subjected to the class classification process {Lin [0036] “machine learning engine 218, which searches and selects a category of the paper”; selecting logically excludes that which is not selected, e.g. the corollary of choosing heads in a coin toss is effectively excluding tails. See [0047], [0049], as a category is a class of the classification algorithm}
However, Lin in combination does not fairly disclose model selection which is taught by Ferreira:
by one target machine learning model selected from the N machine learning models {Ferreira Fig 8 shows multi-classifier majority voting, described e.g. [P.1868 Last2¶, ¶3] “selected CNNs… The model generated at the epoch with the smallest validation loss is selected as the best candidate for each CNN” teaches model selection, target model is best CNN w/ smallest loss},
Ferreira is directed to neural networks for printers and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to select a model per Ferreira in combination, with the motivation of using the best model having the smallest loss or highest accuracy (Ferreira [P.1868 Last2¶], [P.1869 Last¶]). Additionally, a “third motivation comes from that fact that by using a discriminative classifier at the end of the DNN-based feature extraction, we could simplify the fusion of different methods, thus creating a lightweight integrated solution” (Ferreira [P.1866 ¶3]).
However, Ferreira in combination does not disclose the following limitations which are taught by Saragadam:
wherein the medium exclusion step includes
a step (i) of updating the training data group by deleting spectral data about the printing medium to be excluded from a training data group used for learning of the target machine learning model {Saragadam [P.3-4] Figs 2, 4 show spectral filters for the classifier; [P.2] Eq. 3 introduces a “set of spectral filters” for [P.7 ¶2] “Training classifiers… learned spectral filters”, where the set is a group, filtering is deleting, learning is updating, and the classifiers classify materials including “printed paper” Figs 13, 8. Classifiers include a DNN (deep neural network) and an SVM (support vector machine); model selection (target) is [P.9 Sect.B] “we picked the model with the best accuracy on validation”}, and
a step (ii) of performing relearning on the target machine learning model using the updated training data group {Saragadam [P.9 Sect.B] “training our neural network… trained for a total of 60 epochs” epochs of training is retraining/relearning, the model as target is selected based on best accuracy validation as above, classification network with learned filters see Fig 4}.
Saragadam is directed to machine learning models for classifying materials by spectral data and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to use spectral filters for training and relearning per Saragadam in combination, arriving at the invention as claimed, to address the stated issue: “we pursue two questions: one, how many filters are required for classifying K classes, and two, what spectral filters maximize classification accuracy” (Saragadam [P.4 ¶4]).
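Steps (i) and (ii) above amount to a delete-then-relearn operation on the target model's training data group. The following hypothetical toy uses a nearest-centroid stand-in for the target machine learning model; none of the data, labels, or function names come from Saragadam or the instant application.

```python
def train_centroids(data):
    """'Learn' a nearest-centroid model: one mean spectrum per medium type."""
    sums, counts = {}, {}
    for spectrum, label in data:
        acc = sums.setdefault(label, [0.0] * len(spectrum))
        sums[label] = [a + s for a, s in zip(acc, spectrum)]
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in vec] for lab, vec in sums.items()}

training = [([0.9, 0.8], "plain"), ([0.3, 0.2], "coated"),
            ([0.5, 0.6], "glossy")]

# Step (i): update the group by deleting the excluded medium's spectral data.
updated = [(s, lab) for s, lab in training if lab != "glossy"]
# Step (ii): perform relearning on the target model with the updated group.
model = train_centroids(updated)
print(sorted(model))  # the excluded medium no longer exists as a class
```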
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Lee et al., US PG Pub No 2021/0223031A1 Fig 6 shows different classes among N models
Ho et al., “A multi-one-class dynamic classifier for adaptive digitization of document streams” Fig 1 shows ensemble of one-class classifiers over color e.g. RGB
Eiyama et al., US PG Pub No 2019/0260902A1 Canon, paper classifier spectral Figs 6, 11B and [0046] “target printing medium”, [0006] “discriminating the type of printing medium” also Canon-Igarashi US 2020/0314276A1 Figs 5, 7, 11 paper type selection, classified [0036], specular reflection (specular not spectral, substitutes wavelength for angular diffusion)
Morovic et al., US PG Pub No 2022/0131998A1 HP, Fig 3 training, spectral reflectance 38-40
Yokouchi et al., US PG Pub No 2022/0092369A1 Japan provisional, similar to relied upon
Examiner's note: the term “class” is effectively unsearchable in the patent databases because every patent document has a classification (i.e., USPC, CPC).
Allowable Subject Matter
Claims 5-7 are allowed. The previously indicated allowable subject matter of claim 5 has been rewritten in independent form, similar to claim 1, with claims 6-7 depending from claim 5.
Claims 9-10 are objected to as being dependent upon a rejected base claim, but would otherwise distinguish from the prior art of record and would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided that all other objections and rejections of record are also overcome.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Chase P Hinckley whose telephone number is (571)272-7935. The examiner can normally be reached M-F 9:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda M. Huang can be reached at 571-270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHASE P. HINCKLEY/Examiner, Art Unit 2124