Prosecution Insights
Last updated: April 19, 2026
Application No. 17/680,928

METHOD FOR DISCRIMINATING CLASS OF DATA TO BE DISCRIMINATED USING MACHINE LEARNING MODEL, INFORMATION PROCESSING DEVICE, AND COMPUTER PROGRAM

Non-Final Office Action (§103, §112)

Filed: Feb 25, 2022
Examiner: SUSSMAN MOSS, JACOB ZACHARY
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Seiko Epson Corporation
OA Round: 3 (Non-Final)

Grant Probability: 14% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability Change With Interview: -6%

Examiner Intelligence

Career Allow Rate: 14% (1 granted / 7 resolved; -40.7% vs Tech Center average)
Interview Lift: -20.0% (minimal; based on resolved cases with interview)
Average Prosecution: 3y 3m
Total Applications: 33 across all art units (26 currently pending)
Statute-Specific Performance

§101: 37.3% (-2.7% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center averages are estimates; based on career data from 7 resolved cases.
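The examiner figures above are internally consistent; a quick sanity check (values transcribed from the dashboard, with each Tech Center average back-computed from the stated delta — this back-computation is our assumption, not a figure shown on the page):

```python
# Career allow rate: 1 granted out of 7 resolved cases, displayed as 14%.
granted, resolved = 1, 7
career_allow_rate = granted / resolved  # ~0.1429, rounds to 14%

# Statute-specific rates: (examiner allowance rate %, delta vs TC avg %).
stats = {
    "101": (37.3, -2.7),
    "103": (35.2, -4.8),
    "102": (11.9, -28.1),
    "112": (15.5, -24.5),
}

# Back-compute the implied Tech Center average for each statute:
# examiner rate minus delta. Every statute implies the same ~40.0% baseline.
tc_avgs = {s: rate - delta for s, (rate, delta) in stats.items()}
```

Under this reading, all four statute deltas are measured against a single ~40.0% Tech Center baseline, which suggests the dashboard compares each statute to one overall TC average rather than per-statute averages.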

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the Request for Continued Examination filed January 28, 2026, in which claims 1, 6, and 8-10 have been amended. No claims have been cancelled or added. The amendments have been entered, and claims 1-10 are currently pending in the case. Claims 1, 8, and 9 are independent claims.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 28, 2026 has been entered.

Claim Objections

Claim 8 is objected to because of the following informality: “a processor configured to perform calculation using the machine learning model…” should read “a processor configured to perform calculations using the machine learning model…”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 9 is rejected under 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 9 recites the limitation “(a1) read, from the memory”. There is insufficient antecedent basis for this limitation in the claim. For examination purposes this limitation has been interpreted as “(a1) read, from the non-transitory computer-readable storage medium”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Waldo (US 10,140,515 B1) in view of Nakakimura et al. (US 2017/0356889 A1) (hereinafter “Nakakimura”), in further view of CONTEXT (“CONTEXT Version 4: Neural Networks for Text Categorization”, 18 July 2017).

Regarding claim 1: Waldo teaches [a] method for discriminating a class of an object… a vector neural network type machine learning model (Waldo, col.
14, lines 50-52, “The classifier can be trained using a convolutional neural network (CNN), or other examples of machine learning as described above”), the machine learning model including, from an input data side: a convolutional layer that receives the input data (Waldo, col 12, lines 8-18, “In each convolutional layer, the convolutional network uses a shared weight, and each layer will compute the output of neurons that are connected to local regions (i.e., receptive fields) in the input, where each neuron computes a dot product between their weights and the region (i.e., receptive field) they are connected to in the input. In this way, each neuron looks at a specific region (i.e., receptive field) of the image and outputs one number: the dot product between its weights and the pixel values of in its region (i.e., receptive field).”), a plurality of vector neuron layers that are consecutively arranged to receive a vector input from a preceding layer and output a vector output to a subsequent layer (Waldo, col 17, lines 4-8 “As further described, CNNs include several learning layers in their architecture. A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”), and a classification vector neuron layer that receives a vector input from a last layer of the plurality of vector neuron layers and outputs a classification result of the input data (Waldo, col 17, lines 4-8 “As further described, CNNs include several learning layers in their architecture. 
A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”), the method performed by one or more processors and comprising (Waldo, Claim 13 “at least one processor;”): (a1) obtaining from a non-transitory computer readable medium for each class of one or more classes discriminable by the machine learning model, a known feature… (Waldo, col 8, lines 28-38 “FIG. 8 illustrates an example system 800 for classification of image data (e.g., identifying regions of interest and objects of interest within the regions of interest, scene recognition, applying image descriptors appropriate for the region of interest, etc.) in accordance with an embodiment. It should be understood that classification of image data includes, for example, recognizing items represented in image data, determining a region or portion of the image that includes the representation of the item(s) (e.g., a “region of interest”), and generating a label that includes a descriptor and/or category for the items and/or regions recognized.”) obtained based on an output of a specific layer among the plurality of vector neuron layers arranged between the convolutional layer and the classification vector neuron layer of the machine learning model (Waldo, col 17, lines 5-8 “A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”) when a plurality of pieces of training data are input to the convolutional layer of the machine learning model (Waldo, col 15, lines 10-18 “CNN is trained on a similar data set (which includes people, faces, cars, boats, airplanes, buildings, landscapes, fruits, vases, birds, animals, furniture, clothing etc.), so it learns the best feature representation of a desired object represented for this type of image. 
The trained CNN is used as a feature extractor: input image is passed through the network and intermediate outputs of layers can be used as feature descriptors of the input image.”); (b) executing a class discrimination processing of the object (Waldo, col 9, lines 37-42 “The classification module 808, for example, can be configured to analyze the patches proposed by the region proposal module 806 and can generate a classification vector or other categorization value that indicates the probability that a respective patch includes an instance of a certain category and/or descriptor.”) by inputting the…data of the object as the input data to the convolutional layer of the machine learning model (Waldo, col 17, lines 44-47 “The query image can also be analyzed using the CNN 922 to extract a feature vector from the network before the classification layer.”), wherein: the (b) includes: (b0) obtaining a class discrimination result, which is an output from the classification vector neuron layer upon inputting the… data of the object to the convolutional layer of the machine learning model, the class discrimination result indicating a class amount the one or more classes determined for the object (Waldo, col 9, lines 37-42 “The classification module 808, for example, can be configured to analyze the patches proposed by the region proposal module 806 and can generate a classification vector or other categorization value that indicates the probability that a respective patch includes an instance of a certain category and/or descriptor.”), (b1) calculating a feature….of an output of the specific layer among the plurality of vector neuron layers arranged between the convolutional layer and the classification vector neuron layer of the machine learning model when the…data of the object is input to the convolutional layer of the machine learning model (Waldo, col 15, lines 15-18 “The trained CNN is used as a feature extractor: input image is passed through the network and 
intermediate outputs of layers can be used as feature descriptors of the input image.”); class discrimination result…output from the convolutional layer of the machine learning model…(Waldo, col 15, lines 5-10 “The bottom layer of the convolution layer along with a lower layer and an output layer make up the fully connected portion of the network. From the input layer, a number of output values can be determined from the output layer, which can include several items determined to be related to an input item, among other such options.”) Waldo does not teach “…using a spectrometer…; …spectrum group; (a2) measuring the object with the spectrometer and obtaining spectral data of the object from the spectrometer; …spectral data… …spectral data… …spectral data… …calculating a feature spectrum; (b2) calculating a similarity between the feature spectrum and the known feature spectrum group; …based on the calculated similarity between the feature spectrum and the known feature spectrum;” However, Nakakimura teaches …using a spectrometer… (Nakakimura, ¶3 “The present invention is preferably used to process three-dimensional spectral data obtained by, for example, a Liquid Chromatograph Mass Spectrometer (LC-MS), a Gas Chromatograph Mass Spectrometer (GC-MS), a liquid chromatograph using a multichannel type detector such as, e.g., a photodiode array (PDA) detector, a liquid chromatograph or a gas chromatograph using an ultraviolet-visible spectrophotometer or an infrared spectrophotometer capable of wavelength scanning as a detector, or an imaging mass spectrometer, etc.”) …spectrum group (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate 
analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) (a2) measuring the object with the spectrometer and obtaining spectral data of the object from the spectrometer (Nakakimura, ¶3 “The present invention is preferably used to process three-dimensional spectral data obtained by, for example, a Liquid Chromatograph Mass Spectrometer (LC-MS), a Gas Chromatograph Mass Spectrometer (GC-MS), a liquid chromatograph using a multichannel type detector such as, e.g., a photodiode array (PDA) detector, a liquid chromatograph or a gas chromatograph using an ultraviolet-visible spectrophotometer or an infrared spectrophotometer capable of wavelength scanning as a detector, or an imaging mass spectrometer, etc.”) and …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single 
three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …calculating a feature spectrum (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”) (b2) calculating a similarity between the feature spectrum and the known feature spectrum group (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”) …based on the calculated similarity between the feature spectrum and the known feature spectrum (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”); Waldo and Nakakimura are analogous art because both references concern methods for data processing. 
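[Editor's aside, not part of the record: as the rejection characterizes them, steps (b1)-(b2) reduce to extracting an intermediate-layer feature vector (the "feature spectrum") and scoring it against a stored known feature spectrum group, one entry per discriminable class. A minimal sketch of that flow, assuming cosine similarity as the metric; all names, shapes, and values below are illustrative assumptions and are not taken from Waldo, Nakakimura, or CONTEXT:]

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def discriminate(feature_spectrum: np.ndarray,
                 known_spectra: dict[str, np.ndarray]) -> tuple[str, float]:
    """Step (b2), sketched: score the feature spectrum from step (b1)
    against each class's known spectrum and return the best match."""
    scores = {cls: cosine_similarity(feature_spectrum, spec)
              for cls, spec in known_spectra.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Hypothetical known feature spectrum group for two classes (step (a1)).
known = {"class_A": np.array([1.0, 0.0, 0.5]),
         "class_B": np.array([0.0, 1.0, 0.5])}

# Hypothetical feature spectrum for one measured object (step (b1)).
label, score = discriminate(np.array([0.9, 0.1, 0.4]), known)
```

[Under these assumptions, the highest-similarity class is reported as the discrimination result, which is the quantity the claimed steps (b3)-(b4) would then explain and display.]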
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Waldo’s image analysis method to incorporate the spectrums taught by Nakakimura. The motivation for doing so would have been to improve the throughput of the whole difference analysis, as stated in Nakakimura, ¶38 “Further, in the three-dimensional spectral data processing device and processing method according to the present invention, since a second parameter such as a retention time, etc., is not taken into account when determining the characteristic spectrum, no alignment processing is required for aligning the retention time among a plurality of samples which are normally required when obtaining a two-dimensional characteristic data table including characteristic data for a plurality of samples, and the time and effort required for such processing can be saved. As a result, the throughput of the whole difference analysis can be improved.” Waldo in view of Nakakimura does not teach “(b3) creating an explanatory text for the class discrimination result… (b4) outputting, for display on a user device, the class discrimination result with the explanatory text indicating a reason for the class determined for the object.” However, CONTEXT teaches (b3) creating an explanatory text for the class discrimination result (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line.” The prediction values in text format can be considered an explanatory text)… (b4) outputting, for display on a user device, the class discrimination result (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line.” The text output of CONTEXT can be considered outputting the explanatory text and as a data file will be read on a user device) with the explanatory text 
indicating a reason for the class determined for the object (CONTEXT, page 16, section 3.2 “reNet predict applies a model saved during training to new data and writes prediction values to a file.” Here, the prediction values can be considered a reason). Waldo in view of Nakakimura and CONTEXT are analogous art because both references concern classification using convolutional neural networks. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the text output of CONTEXT with the teachings of Waldo in view of Nakakimura. The motivation for doing so would have been to have a structured method of outputting the discrimination result, as stated in Waldo, col 22, lines 41-48, “The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML or another appropriate structured language in this example.”

Regarding claim 7: Waldo in view of Nakakimura in view of CONTEXT teaches [t]he method according to claim 1, wherein the (b4) includes: displaying a discrimination result list in which the class discrimination result and the explanatory text are arranged for two or more classes among a plurality of classes that are discriminable by the machine learning model (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line. However, this could be inefficient in space/time when the number of data points or the number of classes is large. [image omitted]” Here, the WriteText format of the prediction can be considered the explanatory text for the classes discriminated).
It would have been obvious to combine the teachings of Waldo in view of Nakakimura and CONTEXT for the reasons set forth in connection with claim 1 above.

Regarding claim 8: Waldo teaches ….an information processing device that executes a class discrimination processing of discriminating a class of an object using…a vector neural network type machine learning model, (Waldo, col. 14, lines 50-52, “The classifier can be trained using a convolutional neural network (CNN), or other examples of machine learning as described above”), the machine learning model including, from an input data side: a convolutional layer that receives the input data (Waldo, col 12, lines 8-18, “In each convolutional layer, the convolutional network uses a shared weight, and each layer will compute the output of neurons that are connected to local regions (i.e., receptive fields) in the input, where each neuron computes a dot product between their weights and the region (i.e., receptive field) they are connected to in the input. In this way, each neuron looks at a specific region (i.e., receptive field) of the image and outputs one number: the dot product between its weights and the pixel values of in its region (i.e., receptive field).”), a plurality of vector neuron layers that are consecutively arranged to receive a vector input from a preceding layer and output a vector output to a subsequent layer (Waldo, col 17, lines 4-8 “As further described, CNNs include several learning layers in their architecture. A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”), and a classification vector neuron layer that receives a vector input from a last layer of the plurality of vector neuron layers and outputs a classification result of the input data (Waldo, col 17, lines 4-8 “As further described, CNNs include several learning layers in their architecture.
A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”), the information processing device comprising: a memory configured to store the machine learning model; and a processor configured to perform calculation using the machine learning model, wherein the processor is configured to execute (Waldo, Claim 13 “at least one processor; and memory including instructions that, when executed by the at least one processor, enable the system to:”): (a1) obtaining from a non-transitory computer readable medium for each class of one or more classes discriminable by the machine learning model, a known feature… (Waldo, col 8, lines 28-38 “FIG. 8 illustrates an example system 800 for classification of image data (e.g., identifying regions of interest and objects of interest within the regions of interest, scene recognition, applying image descriptors appropriate for the region of interest, etc.) in accordance with an embodiment.
It should be understood that classification of image data includes, for example, recognizing items represented in image data, determining a region or portion of the image that includes the representation of the item(s) (e.g., a “region of interest”), and generating a label that includes a descriptor and/or category for the items and/or regions recognized.”) obtained based on an output of a specific layer among the plurality of vector neuron layers arranged between the convolutional layer and the classification vector neuron layer of the machine learning model (Waldo, col 17, lines 5-8 “A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”) when a plurality of pieces of training data are input to the convolutional layer of the machine learning model (Waldo, col 15, lines 10-18 “CNN is trained on a similar data set (which includes people, faces, cars, boats, airplanes, buildings, landscapes, fruits, vases, birds, animals, furniture, clothing etc.), so it learns the best feature representation of a desired object represented for this type of image. 
The trained CNN is used as a feature extractor: input image is passed through the network and intermediate outputs of layers can be used as feature descriptors of the input image.”); (b) executing the class discrimination processing of the object (Waldo, col 9, lines 37-42 “The classification module 808, for example, can be configured to analyze the patches proposed by the region proposal module 806 and can generate a classification vector or other categorization value that indicates the probability that a respective patch includes an instance of a certain category and/or descriptor.”) by inputting the…data of the object as the input data to the convolutional layer of the machine learning model (Waldo, col 17, lines 44-47 “The query image can also be analyzed using the CNN 922 to extract a feature vector from the network before the classification layer.”), wherein: the (b) includes: (b0) obtaining a class discrimination result, which is an output from the classification vector neuron layer upon inputting the… data of the object to the convolutional layer of the machine learning model, the class discrimination result indicating a class among the one or more classes determined for the object (Waldo, col 9, lines 37-42 “The classification module 808, for example, can be configured to analyze the patches proposed by the region proposal module 806 and can generate a classification vector or other categorization value that indicates the probability that a respective patch includes an instance of a certain category and/or descriptor.”), (b1) calculating a feature….of an output of the specific layer among the plurality of vector neuron layers arranged between the convolutional layer and the classification vector neuron layer of the machine learning model when the…data of the object is input to the convolutional layer of the machine learning model (Waldo, col 15, lines 15-18 “The trained CNN is used as a feature extractor: input image is passed through the network and
intermediate outputs of layers can be used as feature descriptors of the input image.”); class discrimination result…output from the convolutional layer of the machine learning model…(Waldo, col 15, lines 5-10 “The bottom layer of the convolution layer along with a lower layer and an output layer make up the fully connected portion of the network. From the input layer, a number of output values can be determined from the output layer, which can include several items determined to be related to an input item, among other such options.”) Waldo does not teach “A spectrometer system including a spectrometer… …using a spectrometer…; …spectrum group; (a2) measuring the object with the spectrometer and obtaining spectral data of the object from the spectrometer; …spectral data… …spectral data… …spectral data… …calculating a feature spectrum; (b2) calculating a similarity between the feature spectrum and the known feature spectrum group; …based on the calculated similarity between the feature spectrum and the known feature spectrum;” However, Nakakimura teaches A spectrometer system including a spectrometer… (Nakakimura, ¶3 “The present invention is preferably used to process three-dimensional spectral data obtained by, for example, a Liquid Chromatograph Mass Spectrometer (LC-MS), a Gas Chromatograph Mass Spectrometer (GC-MS), a liquid chromatograph using a multichannel type detector such as, e.g., a photodiode array (PDA) detector, a liquid chromatograph or a gas chromatograph using an ultraviolet-visible spectrophotometer or an infrared spectrophotometer capable of wavelength scanning as a detector, or an imaging mass spectrometer, etc.”) …using a spectrometer… (Nakakimura, ¶3 “The present invention is preferably used to process three-dimensional spectral data obtained by, for example, a Liquid Chromatograph Mass Spectrometer (LC-MS), a Gas Chromatograph Mass Spectrometer (GC-MS), a liquid chromatograph using a multichannel type detector such as, e.g., a photodiode 
array (PDA) detector, a liquid chromatograph or a gas chromatograph using an ultraviolet-visible spectrophotometer or an infrared spectrophotometer capable of wavelength scanning as a detector, or an imaging mass spectrometer, etc.”) …spectrum group (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) (a2) measuring the object with the spectrometer and obtaining spectral data of the object from the spectrometer (Nakakimura, ¶3 “The present invention is preferably used to process three-dimensional spectral data obtained by, for example, a Liquid Chromatograph Mass Spectrometer (LC-MS), a Gas Chromatograph Mass Spectrometer (GC-MS), a liquid chromatograph using a multichannel type detector such as, e.g., a photodiode array (PDA) detector, a liquid chromatograph or a gas chromatograph using an ultraviolet-visible spectrophotometer or an infrared spectrophotometer capable of wavelength scanning as a detector, or an imaging mass spectrometer, etc.”) and …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum 
acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …calculating a feature spectrum (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”) (b2) calculating a similarity between the feature spectrum and the known feature spectrum group (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”) …based on the calculated similarity between the feature 
spectrum and the known feature spectrum (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”); Waldo and Nakakimura are analogous art because both references concern methods for data processing. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Waldo’s image analysis method to incorporate the spectrums taught by Nakakimura. The motivation for doing so would have been to improve the throughput of the whole difference analysis, as stated in Nakakimura, ¶38 “Further, in the three-dimensional spectral data processing device and processing method according to the present invention, since a second parameter such as a retention time, etc., is not taken into account when determining the characteristic spectrum, no alignment processing is required for aligning the retention time among a plurality of samples which are normally required when obtaining a two-dimensional characteristic data table including characteristic data for a plurality of samples, and the time and effort required for such processing can be saved. 
As a result, the throughput of the whole difference analysis can be improved.” Waldo in view of Nakakimura does not teach “(b3) creating an explanatory text for the class discrimination result… (b4) outputting, for display on a user device, the class discrimination result with the explanatory text indicating a reason for the class determined for the object.” However, CONTEXT teaches (b3) creating an explanatory text for the class discrimination result (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line.” The prediction values in text format can be considered an explanatory text)… (b4) outputting, for display on a user device, the class discrimination result (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line.” The text output of CONTEXT can be considered outputting the explanatory text and as a data file will be read on a user device) with the explanatory text indicating a reason for the class determined for the object (CONTEXT, page 16, section 3.2 “reNet predict applies a model saved during training to new data and write prediction values to a file.” Here, the prediction values can be considered a reason). Waldo in view of Nakakimura and CONTEXT are analogous art because both references concern classification using convolution neural networks. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the text output of CONTEXT with the teachings of Waldo in view of Nakakimura.
The motivation for doing so would have been to have a structured method of outputting the discrimination result, as stated in Waldo, col 22, lines 41-48, “The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML or another appropriate structured language in this example.”. Regarding claim 9: Waldo teaches [a] non-transitory computer-readable storage medium storing a computer program causing a processor (Waldo, col 20, lines 55-61 “As would be apparent to one of ordinary skill in the art, the device can include many types of memory, data storage, or non-transitory computer-readable storage media, such as a first data storage for program instructions for execution by the processor 1102, a separate storage for images or data, a removable memory for sharing information with other devices, etc.”) to execute a class discrimination processing of discriminating a class of an object using the spectrometer and a vector neural network type machine learning model (Waldo, col. 14, lines 50-52, “The classifier can be trained using a convolutional neural network (CNN), or other examples of machine learning as described above”), the machine learning model including, from an input data side: a convolutional layer that receives the input data (Waldo, col 12, lines 8-18, “In each convolutional layer, the convolutional network uses a shared weight, and each layer will compute the output of neurons that are connected to local regions (i.e., receptive fields) in the input, where each neuron computes a dot product between their weights and the region (i.e., receptive field) they are connected to in the input. 
In this way, each neuron looks at a specific region (i.e., receptive field) of the image and outputs one number: the dot product between its weights and the pixel values of in its region (i.e., receptive field).”), a plurality of vector neuron layers that are consecutively arranged to receive a vector input from a preceding layer and output a vector output to a subsequent layer (Waldo, col 17, lines 4-8 “As further described, CNNs include several learning layers in their architecture. A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”), and a classification vector neuron layer that receives a vector input from a last layer of the plurality of vector neuron layers and outputs a classification result of the input data (Waldo, col 17, lines 4-8 “As further described, CNNs include several learning layers in their architecture. A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”), the method performed by one or more processors and comprising (Waldo, Claim 13 “at least one processor;”): (a1) read, from the memory, a known feature… (Waldo, col 8, lines 28-38 “FIG. 8 illustrates an example system 800 for classification of image data (e.g., identifying regions of interest and objects of interest within the regions of interest, scene recognition, applying image descriptors appropriate for the region of interest, etc.) in accordance with an embodiment. 
It should be understood that classification of image data includes, for example, recognizing items represented in image data, determining a region or portion of the image that includes the representation of the item(s) (e.g., a “region of interest”), and generating a label that includes a descriptor and/or category for the items and/or regions recognized.”) obtained based on an output of a specific layer among the plurality of vector neuron layers arranged between the convolutional layer and the classification vector neuron layer of the machine learning model (Waldo, col 17, lines 5-8 “A query image from the training data set is analyzed using the CNN to extract a feature vector from the network before the classification layer.”) when a plurality of pieces of training data are input to the convolutional layer of the machine learning model (Waldo, col 15, lines 10-18 “CNN is trained on a similar data set (which includes people, faces, cars, boats, airplanes, buildings, landscapes, fruits, vases, birds, animals, furniture, clothing etc.), so it learns the best feature representation of a desired object represented for this type of image. 
The trained CNN is used as a feature extractor: input image is passed through the network and intermediate outputs of layers can be used as feature descriptors of the input image.”); (b) execute a class discrimination processing of the object (Waldo, col 9, lines 37-42 “The classification module 808, for example, can be configured to analyze the patches proposed by the region proposal module 806 and can generate a classification vector or other categorization value that indicates the probability that a respective patch includes an instance of a certain category and/or descriptor.”) by inputting the…data of the object as the input data to the convolutional layer of the machine learning model (Waldo, col 17, lines 44-47 “The query image can also be analyzed using the CNN 922 to extract a feature vector from the network before the classification layer.”), wherein: the (b) includes: (b0) obtain a class discrimination result, which is an output from the classification vector neuron layer upon inputting the… data of the object to the convolutional layer of the machine learning model, the class discrimination result indicating a class among the one or more classes determined for the object (Waldo, col 9, lines 37-42 “The classification module 808, for example, can be configured to analyze the patches proposed by the region proposal module 806 and can generate a classification vector or other categorization value that indicates the probability that a respective patch includes an instance of a certain category and/or descriptor.”), (b1) calculate a feature….of an output of the specific layer among the plurality of vector neuron layers arranged between the convolutional layer and the classification vector neuron layer of the machine learning model when the…data of the object is input to the convolutional layer of the machine learning model (Waldo, col 15, lines 15-18 “The trained CNN is used as a feature extractor: input image is passed through the network and intermediate
outputs of layers can be used as feature descriptors of the input image.”); class discrimination result…output from the convolutional layer of the machine learning model…(Waldo, col 15, lines 5-10 “The bottom layer of the convolution layer along with a lower layer and an output layer make up the fully connected portion of the network. From the input layer, a number of output values can be determined from the output layer, which can include several items determined to be related to an input item, among other such options.”) Waldo does not teach “…using a spectrometer…; …spectrum group; (a2) measure the object with the spectrometer and obtaining spectral data of the object from the spectrometer; …spectral data… …spectral data… …spectral data… …calculating a feature spectrum; (b2) calculate a similarity between the feature spectrum and the known feature spectrum group; …based on the calculated similarity between the feature spectrum and the known feature spectrum;” However, Nakakimura teaches …using a spectrometer… (Nakakimura, ¶3 “The present invention is preferably used to process three-dimensional spectral data obtained by, for example, a Liquid Chromatograph Mass Spectrometer (LC-MS), a Gas Chromatograph Mass Spectrometer (GC-MS), a liquid chromatograph using a multichannel type detector such as, e.g., a photodiode array (PDA) detector, a liquid chromatograph or a gas chromatograph using an ultraviolet-visible spectrophotometer or an infrared spectrophotometer capable of wavelength scanning as a detector, or an imaging mass spectrometer, etc.”) …spectrum group (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a 
plurality of characteristic spectrums that characterize the specific sample is obtained;”) (a2) measure the object with the spectrometer and obtaining spectral data of the object from the spectrometer (Nakakimura, ¶3 “The present invention is preferably used to process three-dimensional spectral data obtained by, for example, a Liquid Chromatograph Mass Spectrometer (LC-MS), a Gas Chromatograph Mass Spectrometer (GC-MS), a liquid chromatograph using a multichannel type detector such as, e.g., a photodiode array (PDA) detector, a liquid chromatograph or a gas chromatograph using an ultraviolet-visible spectrophotometer or an infrared spectrophotometer capable of wavelength scanning as a detector, or an imaging mass spectrometer, etc.”) and …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …spectral data… (Nakakimura, ¶20 “a) a characteristic spectrum acquisition unit configured to perform multivariate analysis by considering a plurality of spectrums constituting a single three-dimensional spectral 
data obtained from a specific sample among a plurality of samples as a collection of a single spectrum not depending on a value of the second parameter, and based on a result of the multivariate analysis, one or a plurality of characteristic spectrums that characterize the specific sample is obtained;”) …calculating a feature spectrum (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”) (b2) calculate a similarity between the feature spectrum and the known feature spectrum group (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”) …based on the calculated similarity between the feature spectrum and the known feature spectrum (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum.”); Waldo and Nakakimura are analogous art because both references concern methods for data processing. 
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Waldo’s image analysis method to incorporate the spectrums taught by Nakakimura. The motivation for doing so would have been to improve the throughput of the whole difference analysis, as stated in Nakakimura, ¶38 “Further, in the three-dimensional spectral data processing device and processing method according to the present invention, since a second parameter such as a retention time, etc., is not taken into account when determining the characteristic spectrum, no alignment processing is required for aligning the retention time among a plurality of samples which are normally required when obtaining a two-dimensional characteristic data table including characteristic data for a plurality of samples, and the time and effort required for such processing can be saved. As a result, the throughput of the whole difference analysis can be improved.” Waldo in view of Nakakimura does not teach “(b3) creating an explanatory text for the class discrimination result… (b4) output, for display on a user device, the class discrimination result with the explanatory text indicating a reason for the class determined for the object.” However, CONTEXT teaches (b3) create an explanatory text for the class discrimination result (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line.” The prediction values in text format can be considered an explanatory text)… (b4) output, for display on a user device, the class discrimination result (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line.” The text output of CONTEXT can be considered outputting the explanatory text and as a data file will be read on a user device) with the explanatory text indicating 
a reason for the class determined for the object (CONTEXT, page 16, section 3.2 “reNet predict applies a model saved during training to new data and write prediction values to a file.” Here, the prediction values can be considered a reason). Waldo in view of Nakakimura and CONTEXT are analogous art because both references concern classification using convolution neural networks. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the text output of CONTEXT with the teachings of Waldo in view of Nakakimura. The motivation for doing so would have been to have a structured method of outputting the discrimination result, as stated in Waldo, col 22, lines 41-48, “The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML or another appropriate structured language in this example.”. Regarding claim 10: Waldo in view of Nakakimura in view of CONTEXT teaches [t]he method according to claim 1, wherein the at least one component is at least one preset wavelength band of the spectral reflectance data (Nakakimura, ¶3 “Further, in a liquid chromatograph using a PDA detector as a detector, it is possible to obtain an absorption spectrum indicating a relationship between a wave number, a wavelength, etc., and a signal intensity (absorbance) from moment to moment.”). It would have been obvious to combine the teachings of Waldo in view of Nakakimura in view of CONTEXT for the reasons set forth in connection with claim 1 above. Claims 2, 3 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Waldo in view of Nakakimura in view of CONTEXT in further view of Ma et al.
(“Fine-Grained Vehicle Classification With Channel Max Pooling Modified CNNs”, Ma et al., 4 April 2019) (hereinafter “Ma”). Regarding claim 2: Waldo in view of Nakakimura in further view of CONTEXT teaches [t]he method according to claim 1. Waldo in view of Nakakimura in further view of CONTEXT does not teach “wherein the specific layer has a configuration in which vector neurons arranged on a plane defined by two axes including a first axis and a second axis are arranged as a plurality of channels along a third axis that is in a direction different from those of the two axes, in the specific layer, when a region which is specified by a plane position defined by a position in the first axis and a position in the second axis and which includes the plurality of channels along the third axis is referred to as a partial region, for each partial region of a plurality of partial regions included in the specific layer, the feature spectrum is obtained as any one of: (i) a feature spectrum of a first type in which a plurality of element values of an output vector of each of the vector neurons included in the partial region are arranged over the plurality of channels along the third axis”. However, Ma teaches wherein the specific layer has a configuration in which vector neurons arranged on a plane defined by two axes including a first axis and a second axis are arranged as a plurality of channels along a third axis that is in a direction different from those of the two axes (Ma, section III.
B, ¶2, “Similar as the MP operation, we denote the input feature maps and output feature maps of a CMP layer as F ∈ R^(C×M×N) and C ∈ R^(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, M and N are the widths and the height of the feature maps, respectively”), in the specific layer, when a region which is specified by a plane position defined by a position in the first axis and a position in the second axis and which includes the plurality of channels along the third axis (Ma, Fig. 2, lower part of (a). Here, a partial region is shown on each plane, defined along a first and second axis and repeated along a third axis.) is referred to as a partial region, for each partial region of a plurality of partial regions included in the specific layer, the feature spectrum is obtained as any one of: (i) a feature spectrum of a first type in which a plurality of element values of an output vector of each of the vector neurons included in the partial region are arranged over the plurality of channels along the third axis (Ma, Section III. B, ¶1 “A CMP operation layer conducts the MP operation along the channel side among the corresponding positions of the consecutive feature maps for the purpose of redundant information elimination. The CMP makes the significant features gathering together within less channels, which is important for fine-grained image classification that needs more discriminative features. Meanwhile, another advantage of the CMP operation is to make the channel number of feature maps smaller, before it connects to the first fully connected (FC) layer. To this end, we propose the CMP operation, which is illustrated in Fig. 2.”). It is noted the claim recites alternative language, and Ma teaches at least one of the alternatives.
Waldo in view of Nakakimura in further view of CONTEXT and Ma are analogous art because both references concern classification using convolution neural networks. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the cross-channel analysis of Ma to the teachings of Waldo/Nakakimura/CONTEXT. The motivation for doing so would have been to improve classification accuracy, as stated in Ma, abstract, “Experimental results on two fine-grained vehicle datasets demonstrate that the CMP modified CNNs can improve the classification accuracies on the task of fine-grained vehicle classification while a massive amount of parameters are reduced.”. Regarding claim 3: Waldo in view of Nakakimura in view of CONTEXT in further view of Ma teaches [t]he method according to claim 2, wherein the similarity obtained in the (b2) is a local similarity obtained for each of the partial regions (Nakakimura, ¶30 “Normally, since a plurality of characteristic spectrums are obtained, the spectrum similarity calculation unit calculates the similarity between each spectrum at each measurement time extracted from the three-dimensional spectral data to a single sample and a characteristic spectrum for each of three-dimensional spectral data with respect to a plurality of samples for each characteristic spectrum. Therefore, in one sample, the similarity for a single characteristic spectrum is obtained by the number of spectrums. Therefore, from the plurality of similarities, a representative value of similarity related to a single characteristic spectrum is calculated in one sample.”). 
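For orientation, the channel max pooling (CMP) operation that Ma describes in the claim 2 mapping above reduces to taking a per-position maximum over groups of consecutive channels of a C x M x N feature map. A minimal NumPy sketch, assuming non-overlapping channel groups (an illustrative reconstruction for this summary, not code from Ma):

```python
import numpy as np

def channel_max_pool(f, group):
    """Max-pool a C x M x N feature map along the channel axis,
    producing c = C // group output channels (a sketch of the CMP
    operation described in Ma, with non-overlapping channel groups)."""
    C, M, N = f.shape
    assert C % group == 0, "channel count must divide evenly into groups"
    # Reshape so consecutive channels sit in one group, then take the
    # per-position maximum within each group.
    return f.reshape(C // group, group, M, N).max(axis=1)

# Example: an 8-channel 4x4 feature map pooled down to 2 channels.
feat = np.random.rand(8, 4, 4)
out = channel_max_pool(feat, group=4)
print(out.shape)  # (2, 4, 4)
```

Under the rejection's reading, the stack of channels at each spatial position corresponds to the claimed partial region along the third axis.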
Regarding claim 6: Waldo in view of Nakakimura in view of CONTEXT in further view of Ma teaches [t]he method according to claim 3, wherein the local similarity for each of the partial regions is calculated as any one of: a local similarity of a first type which is a similarity between the feature spectrum obtained based on the output of the partial region of the specific layer according to the spectral data of the object and all of the known feature spectra (Ma, Section III. B, ¶1, “A CMP operation layer conducts the MP operation along the channel side among the corresponding positions of the consecutive feature maps for the purpose of redundant information elimination. The CMP makes the significant features gathering together within less channels, which is important for fine-grained image classification that needs more discriminative features. Meanwhile, another advantage of the CMP operation is to make the channel number of feature maps smaller, before it connects to the first fully connected (FC) layer. To this end, we propose the CMP operation, which is illustrated in Fig. 2.”) associated with the specific layer and each class of the one or more classes (Waldo, col. 16, lines 9-19 “Embodiments of the present invention can use a classification score (in some embodiments, a ‘similarity score’) generated by the classification layer of the CNN to generate a local feature weight and an object recognition weight. The classification score generated by the CNN indicates how close the object in the query image (e.g., an object in a region of interest such as a tree, etc.) is to an object the CNN has been trained to identify. 
As such, high scores correspond to a high likelihood that the object in the query image is one or more specific objects, whereas low scores indicate that the object in the query image is likely not an object or is an object that the CNN has not been trained to identify.”); It is noted the claim recites alternative language, and Waldo in view of Nakakimura in view of CONTEXT in further view of Ma teaches at least one of the alternatives. It would have been obvious to combine the teachings of Waldo in view of Nakakimura in view of CONTEXT and Ma for the reasons set forth in connection with claim 2 above. Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Waldo in view of Nakakimura in view of CONTEXT in view of Ma in further view of Aguilera et al. (“Learning cross-spectral similarity measures with deep convolutional neural networks”, Aguilera et al., 2016) (hereinafter “Aguilera”). Regarding claim 4: Waldo in view of Nakakimura in view of CONTEXT in view of Ma teaches [t]he method according to claim 3, wherein when Ns and Nd are integers of 2 or more, Nd ≤ Ns, and Nc is an integer of 1 or more (The broadest reasonable interpretation of Ns, Nd and Nc includes when all three are the same integer, 2 or greater. Wherein the number of pieces of Nd input data equals the number of Ns local similarities of the partial regions which equals Nc the number of explanations), the (b3) includes: obtaining Nc character strings output from a character string lookup table prepared in advance by inputting the Nd pieces of table input data into the character string lookup table (CONTEXT, page 8, section 2.2 “Label dictionary file (input) The labels used in the label file above must be declared in a label dictionary file. The label dictionary file should contain one label per line. See examples/data/s-cat.dic for example.” The Label dictionary file which contains one label per line can be considered the character lookup table. 
The inputting of the Nd input data returns Nc character strings in the form of labels); and creating the explanatory text by applying the Nc character strings to an explanatory text template including Nc character string frames (CONTEXT, page 16, section 3.2 “Prediction file (text output) Optionally, predict writes prediction values in the text format, one data point per line.” The Nc character strings are the labels and the text output of the prediction values in a specific format can be considered a template form). Waldo in view of Nakakimura in view of CONTEXT in view of Ma does not teach “creating Nd pieces of table input data, in which a number of gradations thereof is smaller than that of the local similarity, based on Ns local similarities for at least Ns partial regions which is a part of the plurality of partial regions included in the specific layer” However, Aguilera teaches creating Nd pieces of table input data, in which a number of gradations thereof is smaller than that of the local similarity, based on Ns local similarities for at least Ns partial regions which is a part of the plurality of partial regions included in the specific layer (Aguilera, page 4, section 3.2, ¶1, “Essentially, siamese networks are quite similar to traditional feature matching approaches, i.e., the network firstly computes feature descriptors for each patch and then evaluates the similarity between the descriptions using some trained metric.” The number of similarities is the Ns local similarities for each patch being the Ns partial regions and the evaluated similarity being the Nd pieces of table input data); Waldo in view of Nakakimura and CONTEXT are analogous art because both references concern classification using convolution neural networks. 
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the text output and dictionary file of CONTEXT with the teachings of Waldo in view of Nakakimura. The motivation for doing so would have been to have a structured method of outputting the discrimination result, as stated in Waldo, col 22, lines 41-48, “The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML or another appropriate structured language in this example.”. Waldo in view of Nakakimura in view of CONTEXT in further view of Ma and Aguilera are analogous art because both references concern similarity using convolutional neural networks. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Waldo/CONTEXT/Nakakimura/Ma to incorporate the local similarity taught by Aguilera. The motivation for doing so would have been to better process spectral data, as stated in Aguilera, section 6, ¶11, “Our results show that using CNNs to determine the similarity between two patches from different spectra is feasible, and more important it outperforms other alternatives.”.
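The claim 4 mapping above pairs CONTEXT's label dictionary file (read as the claimed character string lookup table) with a fixed text output format (read as the explanatory text template). A minimal sketch of that pattern; the table contents, template wording, and function name are hypothetical illustrations, not taken from CONTEXT or the application:

```python
# Hypothetical character-string lookup table: maps quantized
# table-input values (the Nd pieces of table input data) to labels.
LOOKUP = {0: "low similarity", 1: "moderate similarity", 2: "high similarity"}

# Explanatory-text template with Nc character-string frames.
TEMPLATE = ("Class '{cls}' was determined because region 1 showed {s0} "
            "and region 2 showed {s1} to the known feature spectra.")

def make_explanation(cls, table_inputs):
    """Look up one character string per table input (Nc strings) and
    apply them to the template's string frames."""
    strings = [LOOKUP[v] for v in table_inputs]
    return TEMPLATE.format(cls=cls, s0=strings[0], s1=strings[1])

print(make_explanation("class A", [2, 1]))
```

This is only one way to realize a lookup-table-plus-template pipeline; the rejection does not specify the table contents or template form.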
Regarding claim 5: Waldo in view of Nakakimura in view of CONTEXT in view of Ma in further view of Aguilera teaches [t]he method according to claim 4, wherein the integer Nd is smaller than the integer Ns, and the creating Nd pieces includes: obtaining Nd representative similarities by grouping the Ns local similarities into Nd groups and obtaining a representative value of the local similarities of each of the groups (Aguilera, section 3, paragraph 1 “Each one of these networks takes as input two image patches of size 64x64, where each patch belongs to a different spectra. The output is a scalar value that indicates the distance between the input patches.” Here, the two image patches for each network are the Nd groups, and the number of patches is the Ns local similarities. The distance between patches is the representative value of the local similarities of the groups, meaning there are Nd representative similarities); and creating the Nd pieces of table input data by reducing the number of gradations of the Nd representative similarities (Aguilera, page 4, section 3.2, ¶1, “Essentially, siamese networks are quite similar to traditional feature matching approaches, i.e., the network firstly computes feature descriptors for each patch and then evaluates the similarity between the descriptions using some trained metric.” The evaluated similarity is the Nd pieces of table input data, and this is a reduced gradation of the distance which is the Nd representative similarities). It would have been obvious to combine the teachings of Waldo in view of Nakakimura in view of CONTEXT in further view of Ma and Aguilera for the reasons set forth in connection with claim 4 above. Response to Arguments Applicant's arguments filed January 28th, 2026 have been fully considered but they are not persuasive. Regarding the rejection of claims as judicial exceptions to 35 U.S.C. 101, amendments to claims have overcome the previous rejections, which are withdrawn.
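The claim 5 limitation addressed above (grouping Ns local similarities into Nd groups, taking a representative value per group, and reducing the number of gradations) can be sketched briefly. Using the mean as the representative value and integer binning as the gradation reduction are assumptions for illustration, not details drawn from the cited art:

```python
import numpy as np

def table_inputs(local_sims, nd, gradations=4):
    """Group Ns local similarities into nd groups, take each group's
    mean as its representative similarity, then quantize the Nd
    representatives down to a small number of integer gradations."""
    groups = np.array_split(np.asarray(local_sims, dtype=float), nd)
    reps = np.array([g.mean() for g in groups])  # Nd representative values
    # Gradation reduction: map similarities in [0, 1] onto integer bins.
    bins = (reps * gradations).astype(int)
    return np.minimum(bins, gradations - 1)

sims = [0.9, 0.8, 0.2, 0.1, 0.55, 0.6]   # Ns = 6 local similarities
print(table_inputs(sims, nd=3))          # Nd = 3 quantized table inputs
```

The quantized outputs have fewer gradations than the raw similarities, matching the claim's requirement that the table input data be coarser than the local similarities.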
Regarding the rejection of claims under 35 U.S.C. 103, Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chen et al. (“Similarity-based Classification: Concepts and Algorithms”, Chen et al., March 2009) discloses the generalizability of using similarities as features, design goals and methods for weighting nearest-neighbors for similarity-based learning, and different methods for consistently converting similarities into kernels. Hu et al. (“Deep Convolutional Neural Networks for Hyperspectral Image Classification”, Hu et al., 30 July 2015) discloses a classifier which contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB Z SUSSMAN MOSS whose telephone number is (571) 272-1579. The examiner can normally be reached Monday - Friday, 9 a.m. - 5 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.S.M./Examiner, Art Unit 2122 /KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Feb 25, 2022: Application Filed
Jun 12, 2025: Non-Final Rejection — §103, §112
Sep 08, 2025: Response Filed
Oct 17, 2025: Final Rejection — §103, §112
Jan 28, 2026: Request for Continued Examination
Jan 31, 2026: Response after Non-Final Action
Mar 10, 2026: Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 14%
With Interview: -6% (interview lift -20.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
