Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/14/2023 has been considered by the examiner and made of record in the application file.
Specification
Applicant is reminded of the proper content of an abstract of the disclosure.
A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art.
If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives.
Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps.
Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length.
See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.
The abstract of the disclosure is objected to because the abstract does not clearly set forth the technical disclosure of the invention, particularly the improvement over existing processing systems. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 2, 14-16, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lingxuan Zhu et al. (“SpectralMAE: Spectral Masked Autoencoder for Hyperspectral Remote Sensing Image Reconstruction”).
Regarding claim 1, Zhu et al. teaches a computer implemented method comprising: encoding one or more instance of a received image with spectral mask data, wherein the spectral mask data specifies spectral information of the received image to be masked; training one or more predictive model in dependence on the encoding; querying the one or more predictive model with a query image; and performing processing in dependence on an output from the querying. (Pg. 3 and 4; SpectralMAE reconceptualizes the spectral reconstruction problem based on the mask-then-predict strategy. This model leverages the power of autoencoder and masked modeling to provide a more robust and flexible solution for hyperspectral image reconstruction. SpectralMAE models the spectral image reconstruction problem as predicting masked patches from the visible ones. As shown in Figure 3, the input spectral images are processed as visible patches, while the bands to be predicted are processed as masked patches. The visible patches are first fed into the encoder, and then the decoder uses the outputs of the encoder as well as the masked patches to reconstruct the complete spectral image.)
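The mask-then-predict strategy described in the cited passage can be sketched as follows. This is an illustrative reading only; the `apply_spectral_mask` helper, the array shapes, and the band indices are assumptions made for exposition and are not taken from the reference.

```python
import numpy as np

def apply_spectral_mask(image, mask):
    """Split a hyperspectral cube of shape (H, W, bands) into visible
    bands (fed to the encoder) and masked bands (to be predicted).

    `mask` is a boolean array over the spectral dimension; True marks a
    band that is withheld and later reconstructed by the decoder.
    """
    visible = image[:, :, ~mask]  # bands the encoder sees
    masked = image[:, :, mask]    # bands the decoder must reconstruct
    return visible, masked

# Toy 4x4 image with 6 spectral bands; withhold bands 2 and 5.
cube = np.arange(4 * 4 * 6, dtype=float).reshape(4, 4, 6)
mask = np.zeros(6, dtype=bool)
mask[[2, 5]] = True
visible, masked = apply_spectral_mask(cube, mask)
print(visible.shape, masked.shape)  # (4, 4, 4) (4, 4, 2)
```

In this reading, training amounts to asking the model to recover `masked` from `visible`, so the spectral mask data directly determines which channels supply the supervision signal.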
Regarding claim 2, Zhu et al. teaches wherein the output from the querying includes output prediction data specifying missing spectral information, and wherein the performing processing includes examining the prediction data, and transforming the query image into a formatted spectrally enhanced image based on the examining. (Pg. 7; The decoder’s output is reshaped to form reconstructed patches, and the last layer of the decoder is a linear projection followed by a hyperbolic tangent function (tanh). Finally, the patches at different spatial locations were merged into reconstructed hyperspectral images.)
Regarding claim 14, Zhu et al. teaches wherein the method is characterized by one or more of the following selected from the group consisting of: (a) the received image is a satellite spectral image, (b) the received image is defined by an XxY pixel array in which pixel intensity values for respective pixels of the array are provided for M channels, (c) the received image includes M channels, and (d) the received image includes M channels, and wherein the spectral mask data specifies selective masking of a subset of the M channels. (Pg. 10; The HyRANK satellite hyperspectral data were obtained from the Hyperion sensor. In the dataset, 148 useful bands were selected through spectral sampling, and 1152 training samples were included. With these two datasets, we evaluated the model’s performance on satellite-based hyperspectral remote sensing images.)
Regarding claim 15, Zhu et al. teaches wherein the encoding one or more instance of a received image with spectral mask data includes encoding a first instance of the received image with first spectral mask data that specifies selective masking of a first channel of the received image, and wherein the encoding one or more instance of the received image with spectral mask data includes encoding a second instance of the received image with second spectral mask data that specifies selective masking of a second channel of the received image. (Pg. 4; The SpectralMAE model employs a two-stage training process, with pre-training using random masks followed by fine-tuning with fixed masks. By incorporating a positional encoding strategy for the spectral dimension and a transformer network architecture, the model is able to effectively capture and differentiate spectral information, even in the presence of masked positions.)
Regarding claim 16, Zhu et al. teaches wherein the encoding one or more instance of a received image with spectral mask data includes encoding a first instance of the received image with first spectral mask data that specifies selective masking of a first channel of the received image, and wherein the encoding one or more instance of a received image with spectral mask data includes encoding a second instance of the received image with second spectral mask data that specifies selective masking of a second channel of the received image, wherein the training the one or more predictive model in dependence on the encoding includes applying a first training dataset to a foundation model with the first channel masked, and applying a second training dataset for the foundation model with the second channel masked. (Pg. 8; During training, there are two masking strategies used: random masking and fixed masking, as depicted in Figure 5. In the spectral dimension, the random masking strategy removes a certain percentage of input patches based on a uniform distribution. The fixed masking strategy, on the other hand, only masks the bands that need to be reconstructed.)
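The two masking strategies quoted above (random masking during pre-training, fixed masking during fine-tuning) can be illustrated as follows. The function names, the mask ratio, and the band counts are assumptions made for exposition, not details drawn from the reference.

```python
import numpy as np

def random_mask(num_bands, mask_ratio, rng):
    """Pre-training strategy: mask a random subset of spectral bands,
    drawn without replacement from a uniform distribution."""
    num_masked = int(round(num_bands * mask_ratio))
    idx = rng.choice(num_bands, size=num_masked, replace=False)
    mask = np.zeros(num_bands, dtype=bool)
    mask[idx] = True
    return mask

def fixed_mask(num_bands, target_bands):
    """Fine-tuning strategy: mask only the specific bands that are to
    be reconstructed at inference time."""
    mask = np.zeros(num_bands, dtype=bool)
    mask[list(target_bands)] = True
    return mask

rng = np.random.default_rng(0)
rm = random_mask(10, 0.5, rng)   # 5 of 10 bands masked at random
fm = fixed_mask(10, [3, 7])      # only bands 3 and 7 masked
print(rm.sum(), fm.sum())  # 5 2
```

Under this sketch, masking different single channels (the first and second channels of claim 16) corresponds to calling `fixed_mask` with different target bands for successive training passes.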
Claim 19 recites a system with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to Zhu et al. in the same manner as the corresponding steps in its corresponding method claim.
Claim 20 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 1. Therefore, the recited programming instructions of this claim are mapped to Zhu et al. in the same manner as the corresponding steps in its corresponding method claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Lingxuan Zhu et al. (“SpectralMAE: Spectral Masked Autoencoder for Hyperspectral Remote Sensing Image Reconstruction”) in view of Danfeng Hong et al. (“SpectralGPT: Spectral Remote Sensing Foundation Model”).
Regarding claim 3, Hong et al. teaches wherein the output from the querying includes an output one or more prediction label, and wherein the performing processing includes examining the one or more prediction label, and recognizing a condition based on the examining. (Pg. 7; The pretrained model’s encoder serves as the foundational backbone, and its output is subject to an average pooling layer to generate predictions. We quantitatively assess the performance of pretrained foundation models across four downstream tasks in terms of recognition accuracy for the single-label RS scene classification task, macro and micro mean average precision (mAP), i.e., macro-mAP (micro-mAP), for the multi-label RS scene classification task, overall accuracy (OA) and mean intersection over union (mIoU) for the semantic segmentation task, and precision, recall, and F1 score for the change detection.)
Regarding claim 4, Hong et al. teaches wherein the output from the querying includes a plurality of pixel specific prediction labels, and wherein the performing processing includes examining pixel specific prediction labels of the plurality of pixel specific prediction labels, and recognizing a condition based on the examining. (Pg. 8; Our SpectralGPT (SpectralGPT+) outperforms all others, exhibiting a significant lead with a 1.1% (2.3%) higher mIoU than the second-best result (i.e., SatMAE). Fig. 6(a) offers a visual depiction of the Munich area under study for the segmentation task, along with the proportions of the 13 classes. For the semantic segmentation task, we create a new SegMunich dataset, which is derived from the Sentinel-2 spectral satellite.)
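The mean intersection over union (mIoU) metric cited from Hong et al. for the semantic segmentation task can be sketched as follows; the implementation is an illustrative assumption for exposition, not code from the reference.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes: for each class,
    IoU = |pred ∩ target| / |pred ∪ target|, averaged over the
    classes that appear in either array."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy per-pixel prediction labels vs. ground-truth labels.
pred = np.array([0, 0, 1, 1, 2, 2])
target = np.array([0, 1, 1, 1, 2, 0])
print(round(mean_iou(pred, target, 3), 3))  # prints 0.5
```

In the context of claim 4, each entry of `pred` plays the role of a pixel specific prediction label, and examining those labels against a reference supports recognizing a condition such as a class-level change.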
A person having ordinary skill in the art would have been motivated to apply the downstream recognition techniques of Hong et al. to the spectrally masked and trained predictive model of Lingxuan Zhu et al. in order to enable condition recognition and labeling based on the model outputs, as such use of pretrained spectral models for recognition tasks was well known, yields predictable results, and represents a routine and logical extension of spectral image reconstruction models to practical remote sensing applications. Further, combining spectral reconstruction or enhancement techniques with classification and segmentation tasks would have been expected to improve recognition performance, thereby providing an additional motivation to extend a known spectral modeling framework.
Allowable Subject Matter
Claims 5-13, 17, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAVEN S. JONES whose telephone number is (571)272-7759. The examiner can normally be reached M-Th 7:00 a.m. - 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Koziol, can be reached at 571-438-5758. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAVEN SIMONE JONES/Examiner, Art Unit 2665
/Stephen R Koziol/Supervisory Patent Examiner, Art Unit 2665