DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a Judicial Exception in the form of an Abstract Idea, without significantly more:
Beginning with independent claim 1, a process claim, which recites:
A fast medical hyperspectral image (MHSI) classification method based on similarity tangent mapping, comprising: preprocessing a to-be-classified MHSI; extracting a sample set from the to-be-classified MHSI; dividing the sample set into a training sample set and a test sample set; constructing a cosine similarity tangent mapping (CSTM) model based on the training sample set; and inputting the test sample set into the CSTM model to obtain a classification result.
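For orientation only, the recited steps can be read as the following high-level pipeline (a schematic Python sketch by the editor; every function name is a placeholder, not a disclosed implementation of the claimed method):

```python
# Schematic of the recited steps of claim 1. Each helper passed in is a
# placeholder standing for one claimed step, not a disclosed implementation.
def classify_mhsi(mhsi, preprocess, extract_samples, split, build_cstm):
    cube = preprocess(mhsi)           # "preprocessing" step
    samples = extract_samples(cube)   # "extracting" step
    train, test = split(samples)      # "dividing" step
    model = build_cstm(train)         # "constructing" step (CSTM model)
    return model(test)                # "inputting" step -> classification result
```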
The claim recites abstract ideas:
A process that encompasses a human performing the steps mentally, with or without a physical aid, in the form of the “inputting” step, with the “preprocessing,” “extracting,” “dividing,” and “constructing” steps being pre-solution acts of processing information that could be performed visually and/or mentally; and
A method of organizing human activity in the form of a social activity of following rules or instructions that inform a person to perform the “preprocessing,” “extracting,” “dividing,” “constructing,” and “inputting” steps.
These two abstract ideas will be considered together for analysis as a single abstract idea per MPEP 2106:
[Image: media_image1.png — MPEP 2106 chart (greyscale)]
This judicial exception is not integrated into a practical application because there are no recited additional elements that amount to a practical application, such as, but not limited to, the following as noted in MPEP 2106:
[Image: media_image2.png — MPEP 2106 chart (greyscale)]
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception for the same reason: there are no additional elements other than the abstract idea.
Independent claim 1 is merely a generic computer implementation of the abstract ideas and likewise does not amount to significantly more. See MPEP 2106:
[Image: media_image3.png — MPEP 2106 chart (greyscale)]
Likewise, the following dependent claims have been analyzed and do not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more, and they remain rejected under 35 U.S.C. 101: claims 2-8.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network in view of Gen-yun’137 (CN 113298137).
With respect to claim 1, Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network teaches a fast medical hyperspectral image (MHSI) classification method based on similarity tangent mapping (Fig.1 and page 4483), comprising:
preprocessing a to-be-classified MHSI (Fig.1 and page 4483);
extracting a sample set from the to-be-classified MHSI [The Two-Channel feature extraction is considered to extract a sample set from the to-be-classified MHSI and feed it to the feature fusion module when the system shown in Fig.1 is being trained (Fig.1 and page 4483)];
constructing a cosine similarity tangent mapping (CSTM) model based on the training sample set [the mathematical representation is denoted as α = f (z) = f (Wx + b), where W represents the weight matrix trained in the process of learning and b is a bias vector, f is a function to transform data into nonlinear space, such as tanh function and rectified linear function, z denotes the summation of the products of x and weight W, along with the bias vector b (page 4483). In addition, it was well known in the art that the tanh (hyperbolic tangent) function is closely related to the cosh (hyperbolic cosine) function as its denominator: tanh(x) = sinh(x) / cosh(x) (pages 4483 and 4484)];
inputting the test sample set into the CSTM model to obtain a classification result (Fig.1).
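The hyperbolic identity relied on in the “constructing” mapping above, tanh(x) = sinh(x) / cosh(x), can be checked numerically; a minimal sketch using Python's standard math module (an editor's illustration, not text from the cited reference):

```python
import math

# The hyperbolic tangent equals sinh divided by cosh for any real x.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert math.isclose(math.tanh(x), math.sinh(x) / math.cosh(x))
```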
Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network does not teach dividing the sample set into a training sample set and a test sample set.
Gen-yun’137 teaches dividing the sample set into a training sample set and a test sample set [the sample data is randomly divided into k equal parts, with k-1 parts used in turn for training and 1 part as a test set (page 5)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network according to the teaching of Gen-yun’137 to divide the sample set into a training sample set and a test sample set because this will allow the system of two-channel architecture for MHSI classification to be trained more effectively.
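Gen-yun’137's random division into k equal parts, with k-1 parts for training and 1 part for testing, can be sketched as follows (an illustrative Python sketch of the general k-fold idea; the function and variable names are the editor's, not the reference's):

```python
import random

def kfold_split(samples, k, seed=0):
    """Randomly divide samples into k (near-)equal parts; return
    (train, test) with k-1 parts for training and 1 part held out."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]        # k near-equal parts
    test = folds[0]                                   # 1 part as the test set
    train = [s for fold in folds[1:] for s in fold]   # k-1 parts for training
    return train, test

train, test = kfold_split(list(range(10)), k=5)
# train has 8 samples and test has 2; together they cover all 10
```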
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137 (CN 113298137) and further in view of Huan-huan’682 (CN 110472682) and Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery.
With respect to claim 2, which further limits claim 1, the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network and Gen-yun’137 does not teach wherein the preprocessing a to-be-classified MHSI comprises: obtaining a pixel quantity, a spectral dimension, and a tissue category of the to-be-classified MHSI; and normalizing, band by band, a spectral value corresponding to each pixel.
Huan-huan’682 teaches obtaining a pixel quantity (page 4), a category of the to-be-classified MHSI (pages 7 and 8); and
normalizing, band by band, a spectral value corresponding to each pixel (page 9).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network and Gen-yun’137 according to the teaching of Huan-huan’682 to classify an image according to a tissue category of the to-be-classified MHSI because this will allow the medical hyperspectral image to be classified more effectively.
The combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137 and Huan-huan’682 does not teach obtaining a spectral dimension.
Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery teaches obtaining a spectral dimension (page 503).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137 and Huan-huan’682 according to the teaching of Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery to obtain the spectral dimension of the medical hyperspectral image because this will allow the medical hyperspectral image to be classified more effectively.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137 (CN 113298137) and further in view of Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery.
With respect to claim 3, which further limits claim 1, the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network and Gen-yun’137 (CN 113298137) does not teach wherein the extracting a sample set from the to-be-classified MHSI comprises: extracting a sample quantity and a tissue category label of each sample.
Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery teaches wherein the extracting a sample set from the to-be-classified MHSI comprises: extracting a sample quantity and a tissue category label of each sample (Fig.1 and page 502).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network and Gen-yun’137 according to the teaching of Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery to extract a sample quantity and a tissue category label of each sample because this will allow the medical hyperspectral image to be classified more effectively.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Gen-yun’137 (CN 113298137), Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network and further in view of HUI’566 (CN 108256566) and Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery.
With respect to claim 4, which further limits claim 1, the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network and Gen-yun’137 does not teach wherein the constructing a CSTM model based on the training sample set comprises: calculating a cosine similarity between a to-be-classified pixel and each training sample in feature space to constitute a cosine similarity matrix; performing tangent mapping for the cosine similarity matrix; calculating a similarity between the to-be-classified pixel and a training sample of each different tissue category by combining a spatial neighborhood; and allocating a label to the to-be-classified pixel based on a highest similarity.
HUI’566 teaches wherein the constructing a CSTM model based on the training sample set comprises: calculating a cosine similarity between a to-be-classified pixel and each training sample in feature space to constitute a cosine similarity matrix [calculating the to-be-identified image to matching similarity value and stores the template image match target from the matching starting point, the cosine similarity function according to pixel point (page 8)];
performing tangent mapping for the cosine similarity matrix [calculating the to-be-identified image to matching similarity value and stores the template image match target from the matching starting point, the cosine similarity function according to pixel point (page 8). The tangent mapping for the cosine similarity matrix is considered being performed when the similarity value between the identified image and stores the template image is being determined since it was well known in the art that the tanh (hyperbolic tangent) function is closely related to the cosh (hyperbolic cosine) function as its denominator: tanh(x) = sinh(x) / cosh(x)];
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network and Gen-yun’137 according to the teaching of HUI’566 to calculate a cosine similarity between the medical hyperspectral image and the sample set to constitute a cosine similarity matrix because this will allow the medical hyperspectral image to be classified more effectively.
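For reference, the two limitations mapped above — a cosine similarity matrix between to-be-classified pixels and training samples, followed by tangent (tanh) mapping of that matrix — can be sketched in a few lines (an editor's illustration using NumPy; it is not drawn from HUI’566):

```python
import numpy as np

def cosine_similarity_matrix(X, T):
    """Cosine similarity between each row of X (to-be-classified pixels)
    and each row of T (training samples) in feature space."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    return Xn @ Tn.T

rng = np.random.default_rng(0)
X = rng.random((4, 8))             # 4 pixels, 8 spectral features each
T = rng.random((5, 8))             # 5 training samples
S = cosine_similarity_matrix(X, T) # 4 x 5 cosine similarity matrix
M = np.tanh(S)                     # tangent mapping applied to the matrix
```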
The combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137 and HUI’566 does not teach calculating a similarity between the to-be-classified pixel and a training sample of each different tissue category by combining a spatial neighborhood; and allocating a label to the to-be-classified pixel based on a highest similarity.
Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery teaches calculating a similarity between the to-be-classified pixel and a training sample of each different tissue category by combining a spatial neighborhood [neighboring pixels in MHSI tend to belong to the same class, and a large number of small regions (a central pixel with its neighbors) are extracted (page 502)]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine a spatial neighborhood to calculate a similarity between the to-be-classified pixel and a training sample of each different tissue category because this will allow the to-be-classified pixel to be identified more effectively; and
allocating a label to the to-be-classified pixel based on a highest similarity [the pixel to be classified is being classified according to the deep CNN-based method based on a highest similarity (abstract and page 504)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137 and HUI’566 according to the teaching of Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery to classify the pixel according to the deep CNN-based method based on a highest similarity because this will allow the pixel to be identified more effectively.
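The last two limitations — combining a spatial neighborhood and allocating the label of highest similarity — can be sketched as follows (an editor's illustration using simple averaging; the cited reference itself uses a deep CNN rather than this explicit scheme):

```python
import numpy as np

def allocate_label(pixel_sims, neighbor_sims):
    """Average a pixel's per-class similarities with those of its spatial
    neighbors, then allocate the label of the highest combined similarity."""
    combined = np.mean(np.vstack([pixel_sims] + neighbor_sims), axis=0)
    return int(np.argmax(combined))

pixel = np.array([0.2, 0.9, 0.4])            # similarity to classes 0, 1, 2
neighbors = [np.array([0.3, 0.8, 0.1]),      # similarities of two
             np.array([0.1, 0.7, 0.2])]      # neighboring pixels
label = allocate_label(pixel, neighbors)     # class 1: highest combined similarity
```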
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137 (CN 113298137), Huan-huan’682 (CN 110472682), Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery and further in view of Thamm’966 (US 10,760,966).
With respect to claim 7, which further limits claim 2, the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137, Huan-huan’682 and Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery does not teach wherein the normalizing, band by band, a spectral value corresponding to each pixel comprises: obtaining a first difference between the pixel and a minimum spectral value of a corresponding band; obtaining a second difference between a maximum spectral value of the corresponding band of the pixel and the minimum spectral value; and normalizing the spectral value of the pixel into a ratio of the first difference to the second difference.
Thamm’966 teaches wherein the normalizing, band by band, a spectral value corresponding to each pixel comprises: obtaining a first difference between the pixel and a minimum spectral value of a corresponding band [calculating function values according to at least three functions of said signal values, wherein said at least three functions are selected from: (A) mean; (B) variance; (C) skewness; (D) kurtosis; (E) moving average with a window width of 3 points; (F) second derivative; (G) median; (H) maximum; (I) minimum; (J) difference between maximum and minimum; (K) quotient of the central signal value and the mean value; (L) quotient (maximum-mean value)/(maximum-minimum) (claim 1)];
obtaining a second difference between a maximum spectral value of the corresponding band of the pixel and the minimum spectral value [calculating function values according to at least three functions of said signal values, wherein said at least three functions are selected from: (A) mean; (B) variance; (C) skewness; (D) kurtosis; (E) moving average with a window width of 3 points; (F) second derivative; (G) median; (H) maximum; (I) minimum; (J) difference between maximum and minimum; (K) quotient of the central signal value and the mean value; (L) quotient (maximum-mean value)/(maximum-minimum) (claim 1)]; and
normalizing the spectral value of the pixel into a ratio of the first difference to the second difference [normalizing said calculated function values by division by the mean value of the corresponding function values over all of said spectral points (claim 1)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Medical Hyperspectral Image Classification Based on End-to-End Fusion Deep Neural Network, Gen-yun’137, Huan-huan’682 and Cell Classification Using Convolutional Neural Networks in Medical Hyperspectral Imagery according to the teaching of Thamm’966 to include the calculated functions for calculating the difference between the pixel and a minimum spectral value of a corresponding band because this will allow the medical hyperspectral image to be classified more effectively.
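The claimed band-by-band normalization — the ratio of the first difference (pixel value minus band minimum) to the second difference (band maximum minus band minimum) — is ordinary per-band min-max normalization, sketched here for reference (an editor's illustration, not text from Thamm’966):

```python
import numpy as np

def normalize_bands(cube):
    """Min-max normalize a hyperspectral array of shape (pixels, bands),
    band by band: (value - band_min) / (band_max - band_min)."""
    band_min = cube.min(axis=0)        # minimum spectral value per band
    band_max = cube.max(axis=0)        # maximum spectral value per band
    first_diff = cube - band_min       # pixel value minus band minimum
    second_diff = band_max - band_min  # band maximum minus band minimum
    return first_diff / second_diff

cube = np.array([[10.0, 200.0],
                 [20.0, 400.0],
                 [30.0, 300.0]])
norm = normalize_bands(cube)  # each band now spans [0, 1]
```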
Examiner Note
Claims 5 and 6 would be allowable if the associated 35 U.S.C. 101 rejections were overcome and the claims were rewritten in independent form, including all of the limitations of the base claim and any intervening claims.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUO LONG CHEN, whose telephone number is (571) 270-3759. The examiner can normally be reached M-F, 9am - 5pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Tieu, can be reached at (571) 272-7490. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUO LONG CHEN/Primary Examiner, Art Unit 2682