Prosecution Insights
Last updated: April 19, 2026
Application No. 18/634,210

OBJECT-ORIENTED METHOD FOR IDENTIFYING AND CLASSIFYING SURFACE LITHOLOGY IN HYPERSPECTRAL REMOTE SENSING IMAGE

Non-Final OA: §103, §112
Filed: Apr 12, 2024
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Chengdu University Of Technology
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (47 granted / 69 resolved; +6.1% vs TC avg), above average
Interview Lift: +35.7%, a strong lift, measured over resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 98 total applications across all art units
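The headline figures above reduce to simple arithmetic over the examiner's resolved cases. Below is a minimal Python sketch; the function names are illustrative only, and since this page does not publish the separate with/without-interview allow rates, the pair used for the lift is hypothetical, chosen merely to reproduce the +35.7% figure shown above.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted cases / resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap in allow rate between interviewed and
    non-interviewed resolved cases."""
    return rate_with - rate_without

# Counts taken from this page: 47 granted out of 69 resolved.
print(f"Career allow rate: {allow_rate(47, 69):.1%}")   # ~68.1%

# Hypothetical with/without rates chosen to reproduce the published lift.
print(f"Interview lift: {interview_lift(0.99, 0.633):.1%}")  # 35.7%
```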

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center average is an estimate. Based on career data from 69 resolved cases.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-7 are pending for examination in the Application No. 18/634,210 filed April 12, 2024.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed as foreign Patent Application No. CN202311811056.0, filed on December 26, 2023.

Claim Interpretation

Additional Claim Interpretations:

Regarding claim 1, the first two lines of the claim recite the phrase “…for identifying and classifying surface lithology in a hyperspectral remote sensing image…”. This phrase is merely an intended use/result limitation and not a functional or structural requirement of the claim (see MPEP § 2114, subsection II). Thus, this limitation in this claim will be interpreted as reciting intended use/result and will not be interpreted as a functional or structural requirement of the claim.

Regarding claim 1, the last two lines of the claim recite the limitation “…the fused feature is used to represent a surface lithology type of the hyperspectral remote sensing image to be tested”. The phrase “to be tested” is merely an intended use/result and not a functional or structural requirement of the claim. Therefore, “the hyperspectral remote sensing image to be tested” in this claim limitation will be interpreted as merely a “hyperspectral remote sensing image” in the claim(s).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 3-5 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 3, the claim limitations regarding a “test parameter” and a “validation parameter” (i.e., “the test parameter comprises a ratio of a pixel participating in testing to all the pixels in the original image, and a quantity of pixels with annotations in each test block” and “the validation parameter comprises a ratio of pixels participating in validation to all the pixels in the original image, and a quantity of pixels with annotations in each validation block”) fail to comply with the written description requirement. It is unclear to the examiner where these claim limitations regarding a “test parameter” and a “validation parameter” are supported in Applicant’s instant Specification. The examiner respectfully notes that there does not appear to be a written description of “the test parameter comprises a ratio of a pixel participating in testing to all the pixels in the original image, and a quantity of pixels with annotations in each test block” and “the validation parameter comprises a ratio of pixels participating in validation to all the pixels in the original image, and a quantity of pixels with annotations in each validation block” in the application as filed (see, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007); and MPEP § 2163.04). The examiner respectfully notes that the specification merely recites “parameters” regarding a “ratio” with respect to “pixels participating in training” (i.e., the “training parameter” as recited in claim 1) as disclosed in paragraph [0046] of Applicant’s instant specification and does not mention a ratio of pixels regarding the pixels participating in “testing” and/or “validation” (i.e., the “test parameter” and a “validation parameter” as recited in claim 3). Applicant is respectfully encouraged to specifically point out support for the claim limitations regarding the “test parameter” and “validation parameter” recited in the claims.

Claims 4-5 do not resolve or clarify these issues and thus are similarly rejected under 35 U.S.C. 112(a) for the same reasons as above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (Zhang; CN 115115742 A) in view of Li et al. (Li; “Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network,” 2020), and further in view of Zou et al. (Zou; “DA-IMRN: Dual-Attention-Guided Interactive Multi-Scale Residual Network for Hyperspectral Image Classification,” 2022).

Regarding claim 1, Zhang discloses an object-oriented method for identifying and classifying surface lithology in a hyperspectral remote sensing image (description, para(s). [n0006], recite(s) [n0006] “The purpose of this invention is to provide a rapid mineral mapping method for aerial hyperspectral remote sensing images. …”), comprising: determining a hyperspectral remote sensing image in a research area and a lithology type label corresponding to each hyperspectral remote sensing image, and preparing a hyperspectral remote sensing dataset (description, para(s). [n0004] and [n0086], recite(s) [n0004] “In terms of data acquisition, the SASI airborne hyperspectral measurement system introduced by the State Key Laboratory of Remote Sensing Information and Image Analysis Technology of Beijing Research Institute of Geology, Nuclear Industry, serves as an example. …” [n0086] “Python was used to read the model and the preprocessed SASI aerial hyperspectral image of the Baiyanghe mining area to obtain the mapping results. The results were then exported into a map using the gdal library, generating vector files for each type of mineral. The images were then visualized on the ArcGIS platform, as shown in Figure 7. The vector codes extracted from each type of mineral are consistent with the category codes of each mineral in the sample. Different colors were assigned to different vector codes, which were then overlaid on the aerial hyperspectral image base map to complete the mineral mapping.”, where a “mining area” is a research area and the “aerial hyperspectral image[s]” are a hyperspectral remote sensing dataset); dividing pixels of the hyperspectral remote sensing image in the hyperspectral remote sensing dataset into a training set and a test set through a division strategy (description, para(s). [n0017], recite(s) [n0017] “The specific steps (1) are as follows: using Python to read in the binary image files of the measured spectral samples and image spectral samples together with the preprocessed airborne hyperspectral image reflectance, extract the samples, generate a matrix dataset, i.e., a sample library file, and divide it into a training set and a test set in an 8:2 ratio, generating training set files and test set files.”, where “divid[ing]” the “airborne hyperspectral image[s]” into “a training set and a test set” is a dividing strategy dividing pixels (e.g., “image spectral samples”) of a hyperspectral remote sensing dataset into a training set and a test set); and based on a deep learning method, extracting and fusing, through a double-branch (description, para(s). [n0020-n0024], recite(s) [n0020] “Step (3) includes: ” [n0021] “Step (3.1): Use a one-dimensional convolutional neural network to perform convolution and pooling operations on the spectral information to extract spectral features; ” [n0022] “Step (3.2): Use a two-dimensional convolutional neural network to perform convolution and pooling operations on spatial information to extract spatial features; ” [n0023] “Step (3.3): Overlay and fuse the extracted spatial features and spectral features; ” [n0024] “Step (3.4): Classify using a fully connected network.”, where the method of “Step (3)” is a double-branch dual-attention mechanism network generating a fused feature (e.g., “fuse the extracted spatial features and spectral features”) comprising a first branch of a first attention extracting a spectral feature (e.g., “one-dimensional convolutional neural network… to extract spectral features”) and a second branch of a second attention extracting a spatial feature (e.g., “two-dimensional convolutional neural network”)), wherein the double-branch (description, para(s). [n0025], recite(s) [n0025] “The specific steps (4) are as follows: use Python to read and import the sample library established in step (1), train the deep learning model based on the training set, test it based on the test set, use the cross-entropy function as the loss function, calculate the training accuracy and test accuracy or loss, select the best number of training times and model parameters by observing the test accuracy and training loss curves, finally obtain the model parameters, and save the model.”); the double-branch (description, para(s). [n0020-n0024]—see citations in claim limitation “based on a deep learning method…” above—, where para. [n0059] further recite(s): [n0059] “Step (3.4): Classify using a fully connected network. Two fully connected layers are set up to classify the fused features. The first layer has 128 channels, and the number of channels in the second layer is the number of sample categories, i.e., the number of categories to be classified.”, where the “one-dimensional convolutional neural network… to extract spectral features” is a spectral branch and the “two-dimensional convolutional neural network” is a second branch); the spectral branch comprises (description, para(s). [n0021]—see citation in claim limitation “based on a deep learning method…” above—, where description, para(s). [n0052] further recite(s): [n0052] “Step (3.1): Use a one-dimensional convolutional neural network to perform convolution and pooling operations on the spectral information to extract spectral features; the spectra in the sample library can be regarded as one-dimensional vectors. The purpose of setting up a one-dimensional convolutional network is to calculate and extract the spectral features of the samples through the one-dimensional convolutional network. …”, where the “convolution and pooling operations on the spectral information to extract spectral features” is a spectral attention module in the spectral branch extracting a diagnostic feature in the spectral feature (e.g., the “extract[ed] spectral features”)); the spatial branch comprises (description, para(s). [n0022]—see citation in claim limitation “based on a deep learning method…” above—, where description, para(s).
[n0055], further recite(s): [n0055] “Step (3.2): Use a two-dimensional convolutional neural network to perform convolution and pooling operations on spatial information to extract spatial features; the images in the sample library can be regarded as matrices. The purpose of setting up a two-dimensional convolutional network is to extract sample images or spatial features through the two-dimensional convolutional network. …” , where the “convolution and pooling operations on spatial information to extract spatial features” is a spatial attention module in the spatial branch extracting a diagnostic feature in the spatial feature (e.g., the “extract[ed] spatial features”)); the classification head is used to fuse the diagnostic spectral feature extracted by the spectral branch and the diagnostic spatial feature extracted by the spatial branch, to generate the fused feature (description, para(s). [n0023-0024]—see citations in claim limitation “based on a deep learning method…” above—, where description, para(s). [n0058] and [n0059] further recite(s): [n0058] “Step (3.3): After extracting the spectral and spatial features of the image respectively (the spectrum is a one-dimensional vector, and the image is a matrix. The one-dimensional convolutional neural network and the two-dimensional convolutional neural network extract the spectral and spatial features respectively. The extracted features have become new vectors and matrices), the Concat method in Python is used to superimpose and fuse the extracted spatial features and spectral features (Concat mainly transforms the extracted spatial features into one dimension and concatenates them with the one-dimensional spectral features).” , where steps 3.3 and 3.4 are a classification head used to generate a fused feature fuse the diagnostic spectral feature and the diagnostic spatial feature “fus[ing] the extracted spatial features and spectral features” ); and the fused feature is used to represent a surface lithology type of the hyperspectral remote sensing image to be tested (description, para(s). [n0059], recite(s) [n0059] “Step (3.4): Classify using a fully connected network. Two fully connected layers are set up to classify the fused features. The first layer has 128 channels, and the number of channels in the second layer is the number of sample categories, i.e., the number of categories to be classified.” , where the fused feature are classified based on a “number of sample categories” is the fused feature representing a classification category type; wherein para(s). [n0086]—see citation in claim limitation “determining a hyperspectral remote sensing image…” above—further recite(s) the classification category types are surface lithology types (e.g., “type of mineral”)). 
Where Zhang does not specifically disclose: based on a deep learning method, extracting and fusing, through a double-branch, a densely connected module and a spectral attention mechanism module, and the spectral branch is used to extract a diagnostic spectral feature in the spectral feature; the spatial branch comprises a spatial densely connected module and a spatial attention mechanism module, and the spatial branch is used to extract a diagnostic spatial feature in the spatial feature; the classification head is used to fuse the diagnostic spectral feature extracted by the spectral branch and the diagnostic spatial feature extracted by the spatial branch, to generate the fused feature;

Li teaches, in the same field of endeavor of hyperspectral image classification based on a double-branch dual-attention mechanism network, based on a deep learning method, extracting and fusing, through a double-branch dual-attention mechanism network, a spectral feature and a spatial feature of a hyperspectral remote sensing image to be tested, to generate a fused feature (section 3.1 on pg. 8, recite(s) [3.1. The Framework of the DBDA Network] “The whole structure of the DBDA network can be seen in Figure 6. For convenience, we call the top branch Spectral Branch and name the bottom branch Spatial Branch. The input is fed into spectral branch and spatial branch respectively to get the spectral feature maps and spatial feature maps. Then the fusion operation between spectral and spatial feature maps are adopted to get the classification results.”, where the “DBDA” is a double-branch dual-attention mechanism network extracting and fusing a spectral and spatial feature of a hyperspectral remote sensing image “input” to be tested to generate a fused feature (e.g., “fusion operation between spectral and spatial feature maps”)), wherein the double-branch (section 4 on pg. 12, recite(s) [4. Experimental Results] “...For each dataset, a certain number of training samples and validation samples are randomly selected from the labelled data on a certain percentage, and the rest of the samples are used to test the performance of the model. …”, where the “training samples” and “rest of the samples… used to test the performance of the model” are a training set and a test set, respectively); the double-branch (section 3.1 on pg. 8 recites a “Spectral Branch” and a “Spatial Branch”—see citation in claim limitation “based on a deep learning method…” above—; where section 3.1.3 on pg. 10 further recite(s): [3.1.3. Spectral and Spatial Fusion for HSI Classification] “With the spectral branch and spatial branch, several spectral feature maps and spatial feature maps are obtained. Then, we perform a concatenation between two features for classification. Moreover, the reason why the concatenation operation is applied instead of add operation is that the spectral and spatial features are in the irrelevant domains, and the concatenate operation could keep them independent while the add operation would mix them together. In the end, the classification result is obtained via the fully connected layer and the softmax activation function.”, where the “concatenate operation” is at least a classification head); the spectral branch comprises a densely connected module and a spectral attention mechanism module, and the spectral branch is used to extract a diagnostic spectral feature in the spectral feature (section 3.1.1 on pg. 8, recite(s) [3.1.1.
Spectral Branch with the Channel Attention Block] “First, a 3D-CNN layer with a 1×1×7 kernel size is used. The down sampling stride is set to (1,1,2), which could reduce the number of bands. Then, feature maps in the shape of (9×9×97, 24) are captured. After that, the dense spectral block combined by 3D-CNN with BN is attached. Each 3D-CNN of the dense spectral block has 12 channels with a 1×1×7 kernel size. After attaching the dense spectral block, the channels of feature maps increase to 60 calculated by Equation (5). Therefore, we obtain feature maps with size of (9×9×97, 60). Next, after the last 3D-CNN with kernel size of 1×1×97, a (9×9×1, 60) feature map is generated. However, the 60 channels make different contributions to the classification. To refine the spectral features, the channel attention block illustrated in Figure 4a and explained in Section 2.4.1 is adopted. The channel attention block reinforces the informative channels and whittles the information-lacking channels. After obtaining the weighted spectral feature maps by channel attention, a BN layer and a dropout layer are applied to enhance the numerical stability and vanquish the overfitting. Finally, via a global average pooling layer, the feature maps in the shape of 1×60 are obtained. …” , where the “dense spectral block” and “channel attention block” are a densely connected module and a spectral attention mechanism module, respectively, in the spectral branch extracting a diagnostic feature in the spectral feature (e.g., “spectral feature maps”)); the spatial branch comprises a spatial densely connected module and a spatial attention mechanism module, and the spatial branch is used to extract a diagnostic spatial feature in the spatial feature (section 3.1.2 on pg. 9, recite(s) [3.1.2. Spatial Branch with the Spatial Attention Block] “Meanwhile, the input data in the shape of 9×9×200 are delivered to the spatial branch, and the initial 3D-CNN layer’s size is set to 1×1×200, which can compress spectral bands into one dimension. After that, feature maps in the shape of (9×9×1,24) are obtained. Then, the dense spatial block combined by 3D-CNN with BN is attached. Each 3D-CNN in the dense spectral block has 12 channels with a 3×3×1 kernel size. Next, the extracted feature maps in the shape of (9×9×1,60) are fed into the spatial attention block, as illustrated in Figure 4b and expounded in Section 2.4.2. With the attention block, the coefficient of each pixel is weighted to get a more discriminative spatial feature. After capturing the weighted spatial feature maps, a BN layer with a dropout layer is applied. Finally, the spatial feature maps in the shape of 1×60 are obtained via a global average pooling layer. …” , where the “spatial attention block” is a spatial attention mechanism module and the “dense spatial block” is a spatial densely connected module in the spatial branch extracting a diagnostic feature in the spatial feature (e.g., “spatial feature maps”)); the classification head is used to fuse the diagnostic spectral feature extracted by the spectral branch and the diagnostic spatial feature extracted by the spatial branch, to generate the fused feature (section 3.1.3 on pg. 10—see citation in claim limitation “the double-branch multi-scale dual-attention mechanism network …” above—, where the “concatenation between two features for classification” (i.e., “spectral and spatial features”) is a fused feature generated by the classification head). 
Since Zhang and Li each disclose a double-branch dual-attention mechanism network comprising a spectral branch comprising a spectral attention mechanism module, a spatial branch comprising a spatial attention mechanism module, and a classification head in the same field of endeavor of hyperspectral image classification, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to substitute the double-branch dual-attention mechanism network of Zhang with the double-branch dual-attention mechanism network of Li to improve the extraction of spectral and spatial features for hyperspectral image classification while still yielding the predictable result of performing classification in a hyperspectral image as taught by Li (abstract, recite(s) [abstract] “In recent years, researchers have paid increasing attention on hyperspectral image (HSI) classification using deep learning methods. To improve the accuracy and reduce the training samples, we propose a double-branch dual-attention mechanism network (DBDA) for HSI classification in this paper. Two branches are designed in DBDA to capture plenty of spectral and spatial features contained in HSI. Furthermore, a channel attention block and a spatial attention block are applied to these two branches respectively, which enables DBDA to refine and optimize the extracted feature maps. …”).

Where Zhang in view of Li does not specifically disclose: dividing pixels of the hyperspectral remote sensing image in the hyperspectral remote sensing dataset into a training set and a test set through a division strategy for a dataset without leakage information, wherein the division strategy for a dataset without leakage information ensures that there is no overlap between training data in the training set and test data in the test set;… …a double-branch multi-scale dual-attention mechanism network…; the double-branch multi-scale dual-attention mechanism network comprises a spectral branch, a spatial branch…; the spectral branch comprises a multi-scale spectral residual attention (MSeRA) densely connected module…, and the spectral branch is used to extract a diagnostic spectral feature in the spectral feature;

Zou teaches, in the same field of endeavor of hyperspectral image classification based on a double-branch dual-attention mechanism network, dividing pixels of the hyperspectral remote sensing image in the hyperspectral remote sensing dataset into a training set and a test set through a division strategy for a dataset without leakage information, wherein the division strategy for a dataset without leakage information ensures that there is no overlap between training data in the training set and test data in the test set (sections 3.1 on pg. 8 and 5.1 on pg. 16, recite(s) [3.1. Dataset Partition] “...In HSI classification, the division of training/testing sets greatly affects the performance and fairness of the comparison. Therefore, a dataset partitioning strategy, which enables fair validation of new and existing algorithms without training-testing data leakage, is highly desired. Several data partitioning methods that will not lead to information leakage were proposed [4,32,35]. Among them, the dataset partition in [35] not only provides a benchmark dataset, but also avoids the loss of samples of certain classes in the training/testing sets.
It divides the original image into training/validation/testing blocks and then subdivides the training/validation/testing sets, making the division method more reasonable and more suitable for practical applications. In this work, we apply the same data partition strategy as in [35] to avoid information leakage. …” [5.1. Effect of Block-Patch Size] “The block-patch size is a crucial factor for the classification results. We first divide the dataset into a few blocks with same size, including the training blocks, validation blocks and testing blocks without overlap. We slide a window with a fixed patch size within each blocks and obtain the training patches, validation patches and testing patches, corresponding to training dataset, validation dataset and testing dataset. Although the patches within one dataset might intersect with each other, there is no overlap between the patches in two different datasets, avoiding the potential information leakage as in the traditional patch-wise classification. ...” );… …a double-branch multi-scale dual-attention mechanism network… (section 2.1 on pg. 3, recite(s) [2.1 Proposed Framework] “...we proposed a dual-attention-guided interactive multi-scale residual network for HSI classification, as shown in Figure 1. In general, the proposed network consists of three key parts. First, two branches are used to extract the joint spectral and spatial features of HSIs, providing stronger feature extraction capabilities than the single-branch serial network. …Third, MSRB is constructed to extract deep multi-scale features corresponding to multiple receptive fields from limited samples.” , where the “dual-attention-guided interactive multi-scale residual network” comprising of “two branches” is a double-branch multi-scale dual-attention mechanism network extracting a spectral and spatial feature of a hyperspectral remote sensing image to be tested (e.g., “extract the joint spectral and spatial features of HSIs”)); the double-branch multi-scale dual-attention mechanism network comprises a spectral branch, a spatial branch… (section 2.1 on pg. 3—see citation in the preceding limitation immediately above—, where the “two branches… used to extract the joint spectral and spatial features of HSIs” are a spectral and spatial branch, respectively); the spectral branch comprises a multi-scale spectral residual attention (MSeRA) densely connected module…, and the spectral branch is used to extract a diagnostic spectral feature in the spectral feature (section 2.3.2 on pg. 7 and Fig. 4(a), recite(s) [2.3.2. Multi-Scale Spectral/Spatial Residual Block] “Although the recent HSI classification methods based on deep learning achieved remarkable performance, most of them only considered the features under single scale. Previous studies demonstrated the effectiveness of multi-scale features in HSI classification [46,47]. To extract spectral/spatial features at different scales, we introduce a novel multi-scale spectral/spatial module into a residual block, and employ the combination in solving classification problems. The MSRB is realized by replacing the convolutional layers with a branch structure containing three convolution kernels with different sizes. As shown in Figure 4, unlike the conventional residual unit, we employ a 1×1×1 convolutional layer and a batch normalization layer as the first component of the MSRB to unify the number of channels, downsample spectral bands, and combine information. 
Then, a multi-scale residual convolution group is applied to improve feature extraction. These multi-scale residual convolution groups with three convolution kernels of different sizes are used to construct feature representations of different scales. Furthermore, we employ the concatenate operation to merge the output feature maps corresponding to three scales and obtain the fused feature. Then the feature maps are passed to 1×1×1 convolutional layer to obtain consistent dimension.” [Figure 4 of Zou is reproduced in the Office Action as an image.], where a multi-scale spectral module into a residual block (i.e., a “multi-scale spectral block (MSpeRB)” as depicted in Fig. 4(a) above) is a multi-scale spectral residual attention (MSeRA) densely connected module in the spectral branch extracting a diagnostic feature in the spectral feature (e.g., spectral “features at different scales”));

Since Li and Zou each disclose a division strategy dividing pixels of a hyperspectral remote sensing image in the hyperspectral remote sensing dataset into a training set and a test set and a double-branch dual-attention mechanism network comprising a spectral branch and a spatial branch in the same field of endeavor of hyperspectral image classification, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to incorporate a division strategy dividing a dataset into a training set and a test set without leakage information and incorporate a multi-scale spectral residual attention (MSeRA) densely connected module into the system of Zhang in view of Li to result in a double-branch multi-scale dual-attention mechanism network with training and test sets without leakage information in order to improve the classification on hyperspectral images by incorporating improved training/test sets for training the double-branch dual-attention mechanism network of Li and multi-scale feature information in spectral features extracted in the double-branch dual-attention mechanism network of Li as taught by Zou above (sections 3.1 on pg. 8 and 2.3.2 on pg. 7—see citation above).

Regarding claim 2, Zhang, as modified by Li and Zou, discloses the object-oriented method for identifying and classifying surface lithology in a hyperspectral remote sensing image according to claim 1, wherein Zhang further discloses the determining a hyperspectral remote sensing image in a research area and a lithology type label corresponding to each hyperspectral remote sensing image, and preparing a hyperspectral remote sensing dataset specifically comprises: taking an area in which a bedrock outcrop in the hyperspectral remote sensing image is higher than a preset outcrop area and that is not covered by vegetation as the hyperspectral remote sensing image in the research area (description, para(s). [n0044], recite(s) [n0044] “...Then, known alteration development areas are selected, and a hybrid modulation matched filtering method (which is fast and has a high detection rate) is used for mapping. The mapping results are verified pixel-by-pixel based on the diagnostic absorption peak wavelength positions and morphologies from the USGS standard mineral spectral library. Incorrect pixels are deleted, and the accuracy of the entered alteration mineral spectra is confirmed. Then, the ROI tool in ENVI software is used to delineate the sample area, outlining the range of each mineral category on the aerial hyperspectral image. The image spectra within this range serve as samples for each mineral category (preprocessed image reflectance dataset), and binary image label maps (tif or img format) are created for each mineral category.”, where selecting regions “within this range” as “samples for each category” is taking an area (e.g., “ROI”) in which a bedrock outcrop (e.g., a “mineral”) in the hyperspectral sensing image is higher than a preset outcrop area that is not covered by vegetation as the hyperspectral remote sensing image in the research area (e.g., “image spectra within this range” of “each mineral category on the aerial hyperspectral image”)); automatically performing, based on stratigraphic boundary data, label classification on pixels representing each category of lithology in the hyperspectral remote sensing image in the research area, and determining a lithology label map (description, para(s). [n0044]—see citation immediately above—, where “outlining the range of each mineral category” using “delineation” is automatically performing label classification on pixels representing each category of lithology (e.g., “mineral category”) in the hyperspectral remote sensing image in the research area (e.g., “aerial hyperspectral image”) and determining a lithology label map (e.g., “label maps… created for each mineral category”)); and preparing the hyperspectral remote sensing dataset based on the lithology label map and the hyperspectral remote sensing image (description, para(s). [n0044]—see citation in the current claim, above—, where the lithology label maps being part of “preprocessing” the hyperspectral remote sensing dataset is preparing the hyperspectral remote sensing dataset based on the lithology label map (e.g., “label maps”) and the hyperspectral remote sensing image (e.g., “airborne hyperspectral data”)).

Regarding claim 6, Zhang, as modified by Li and Zou, discloses the object-oriented method for identifying and classifying surface lithology in a hyperspectral remote sensing image according to claim 1, wherein Zou further teaches the MSeRA densely connected module comprises a spectral-dimensional multi-scale extraction module and a residual connection module; and the MSeRA densely connected module is used to extract spectral features at different scales (section 2.3.2 on pg. 7 and Fig. 4(a)—see citations in claim limitation “the spectral branch comprises a multi-scale spectral residual attention (MSeRA) densely connected module…” above—, where the “multi-scale spectral block (MSpeRB)” as depicted in Fig. 4(a) comprises a spectral-dimensional multi-scale extraction module and a residual connection (e.g., “a multi-scale residual convolution group”)).

Regarding claim 7, Zhang, as modified by Li and Zou, discloses the object-oriented method for identifying and classifying surface lithology in a hyperspectral remote sensing image according to claim 1, wherein Zhang further discloses before the determining a hyperspectral remote sensing image in a research area and a lithology type label corresponding to each hyperspectral remote sensing image, and preparing a hyperspectral remote sensing dataset, the method further comprises: performing pre-processing on the hyperspectral remote sensing image, wherein the pre-processing comprises a radiometric correction, an atmospheric correction, and a geometric correction (description, para(s).
[n0044], recite(s) [n0044] “…② For image spectra, the aerial hyperspectral data first requires preprocessing such as radiometric calibration, geometric correction, atmospheric correction, spectral reconstruction, and band removal to obtain reflectance images. …”, where “radiometric calibration” is radiometric correction).

Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, as modified by Li and Zou, as applied to claim 2 above, and further in view of Qu et al. (Qu; “Triple-Attention-Based Parallel Network for Hyperspectral Image Classification,” 2021).

Regarding claim 3, Zhang, as modified by Li and Zou, discloses the object-oriented method for identifying and classifying surface lithology in a hyperspectral remote sensing image according to claim 2, wherein Qu further teaches in the same field of endeavor of a division strategy for a dataset without leakage information the dividing pixels of the hyperspectral remote sensing image in the hyperspectral remote sensing dataset into a training set and a test set through a division strategy for a dataset without leakage information specifically comprises: determining a training parameter, a test parameter, and a validation parameter, wherein the training parameter comprises a ratio of a pixel participating in training to all pixels in an original image and a quantity of pixels with annotations in each training block; the original image is the hyperspectral remote sensing image in the research area; the test parameter comprises a ratio of a pixel participating in testing to all the pixels in the original image, and a quantity of pixels with annotations in each test block; and the validation parameter comprises a ratio of pixels participating in validation to all the pixels in the original image, and a quantity of pixels with annotations in each validation block (section 3.1 “Data Partition” on pg. 9, recite(s) [Section 3.1 of Qu is reproduced in the Office Action as an image.], where the “number of training blocks in each class” is a training parameter comprising at least a ratio of a pixel participating in training to all pixels in an original image (e.g., “ratio ω_t in the original image”) and a quantity of pixels with annotations in each training block (e.g., “number N_p of labeled training pixels in each pre-partitioned training block”); the “original image” is a hyperspectral remote sensing image (e.g., “HSI”); where using Equation 5 to determine the number of “test blocks” is a test parameter comprising a ratio of pixels participating in testing to all pixels in the original image (e.g., ω_t for test blocks) and a quantity of pixels with annotations in each test block (e.g., N_p for test blocks); and where the number of “validation blocks” is a validation parameter comprising a ratio of pixels participating in validation to all the pixels in the original image (e.g., ω_t for validation blocks) and a quantity of pixels with annotations in each validation block according to Equation 5 above (e.g., N_p for validation blocks)); dividing the original image into a to-be-classified training image based on the ratio of a pixel participating in training to all pixels in an original image (section 3.1 “Data Partition” on pg. 9—see citation in preceding limitation immediately above—, where the “partitioning blocks” in the original image as “training blocks” is dependent on the ratio of a pixel participating in training to all pixels in an original image (e.g., “ratio ω_t in the original image”) is dividing the original image into a to-be-classified training image (e.g., “training blocks”) based on said ratio); based on the quantity of pixels with annotations in each training block, randomly taking an image having the pixels with annotations from the to-be-classified training image as a training image, constructing the training set, and taking remaining images having pixels with annotations in the to-be-classified training image as a leakage image (section 3.1 “Data Partition” on pg. 9—see citation in current claim above—, where “randomly reserv[ing] N_p labeled pixels in the random partitioning blocks and save the reserved blocks as training blocks” is randomly taking an image having the pixels with annotations from the to-be-classified training image as training images; the training images are constructed as a “training… set[s]”; and where the “remaining labeled pixels in the random partitioning blocks are set as leaked images” is taking remaining images having pixels with annotations in the to-be-classified training image as a leakage image); determining a test image based on the test parameter, and constructing the test set (section 3.1 “Data Partition” on pg. 9—see citation in current claim above—, where the “test image” is based on the number of “test blocks” is determining a test image based on the test parameter; where the test images are constructed as a “test… set[s]”); and determining a validation image based on the validation parameter, and constructing a validation set (section 3.1 “Data Partition” on pg. 9—see citation in current claim above—, where the “validation patches” is based on the number of “validation blocks” is determining validation images based on the validation parameter; where the validation images are constructed as a “validation… set[s]”).

Since Zou also teaches using the division strategy of Qu to divide pixels in hyperspectral remote sensing images in hyperspectral remote sensing datasets into at least a training set and a test set (section 3.1 on pg. 8—see citation in claim 1 limitation “dividing pixels of…” above—, where reference “[35]” in the phrase “apply[ing] the same data partition strategy as in [35] to avoid information leakage” of Zou is Qu), a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the division strategy taught by Zou in claim 1 above would incorporate the training, test, and validation parameters and leakage, test, and validation images as taught by Qu above.

Regarding claim 4, Zhang, as modified by Li, Zou, and Qu, discloses the object-oriented method for identifying and classifying surface lithology in a hyperspectral remote sensing image according to claim 3, wherein Qu further teaches after the determining a validation image based on the validation parameter, and constructing a validation set, the method further comprises: determining all training blocks and test blocks based on the training image, the leakage image, the test image, and the validation image (section 3.1 “Data Partition” on pg. 9—see citation in claim 3 above—, where the “nonoverlapping training and test blocks” are training blocks and test blocks based on the training image, the leakage image, the test image, and the validation image (e.g., “training pixels in the original image”, “leaked image”, “test image”, and “validation” “blocks”/“patches”)); and obtaining a training patch and a test patch from the training block and the test block through a sliding window strategy (section 3.1 “Data Partition” on pg. 9—see citation in claim 3 above—, where applying a “sliding window strategy to the training and validation blocks for more training and validation patches” including “partition[ing]… the test image into… test patches” is obtaining at least a training patch (e.g., “training… patches”) and a test patch (e.g., “test patches”) through a sliding window strategy).

Regarding claim 5, Zhang, as modified by Li, Zou, and Qu, discloses the object-oriented method for identifying and classifying surface lithology in a hyperspectral remote sensing image according to claim 3, wherein Qu further teaches after the determining a training parameter, a test parameter, and a validation parameter, the method further comprises: determining a quantity of training blocks in each category through a formula N_i = (n_i × λ) / T, wherein N_i represents a quantity of training blocks corresponding to an ith category; n_i represents a total quantity of pixels in the ith category in an original image; λ represents a ratio of a pixel participating in training to all pixels in the original image; and T represents a quantity of pixels with annotations in each training block (section 3.1 “Data Partition” on pg. 9—see citation in claim 3 above—, where Equation 5 (Eq. 5) is the formula N_i = (n_i × λ) / T determining the quantity of training blocks (e.g., a “number of training blocks”); wherein N_i of the formula is N_blocks^(c) in Eq. 5, n_i of the formula is N_pixels^(c) in Eq. 5, λ of the formula is ω_t in Eq. 5, and T of the formula is N_p in Eq. 5).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO whose telephone number is (571) 272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.Z.Y./ Examiner, Art Unit 2666
/EMILY C TERRELL/ Supervisory Patent Examiner, Art Unit 2666
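The dataset-partition formula disputed under §112 and mapped to Qu's Equation 5 in the §103 rejection (claim 5's N_i = (n_i × λ) / T, i.e., Qu's N_blocks^(c) = N_pixels^(c) × ω_t / N_p) is straightforward to evaluate. Below is a minimal Python sketch with hypothetical pixel counts; rounding the quotient up to a whole number of blocks is our assumption, since the claim as reproduced above states no rounding rule.

```python
import math

def training_blocks(n_pixels_in_class: int, train_ratio: float,
                    labeled_pixels_per_block: int) -> int:
    """N_i = (n_i * lambda) / T, as recited in claim 5 and mapped to
    Qu's Eq. 5 (N_blocks(c) = N_pixels(c) * omega_t / N_p).
    Ceiling is an assumption; the claim specifies no rounding."""
    return math.ceil(n_pixels_in_class * train_ratio / labeled_pixels_per_block)

# Hypothetical class: 10,000 labeled pixels, 10% of pixels participate in
# training, 50 annotated pixels reserved per training block -> 20 blocks.
print(training_blocks(10_000, 0.10, 50))  # 20
```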

Prosecution Timeline

Apr 12, 2024
Application Filed
Mar 17, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169: ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586219: Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579638: IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12562063: METHOD FOR DETECTING ROAD USERS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561805: METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 99% (+35.7%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
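The page does not state how the 99% with-interview figure is derived from the 68% base. One reading consistent with the numbers shown is an additive percentage-point lift capped at 99%; the sketch below is only an illustration under that assumption.

```python
def projected_with_interview(base: float, lift_pp: float,
                             cap: float = 0.99) -> float:
    """Assumption: the interview lift is applied as an additive
    percentage-point bump to the base grant probability, then capped."""
    return min(base + lift_pp, cap)

# 0.68 base + 0.357 lift = 1.037, capped at 0.99 -> matches the 99% above.
print(f"{projected_with_interview(0.68, 0.357):.0%}")  # 99%
```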
