DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
1. The information disclosure statements (IDS) submitted on 12/11/2025, 01/07/2025, and 04/09/2024 have been considered by the examiner.
Claim Rejections - 35 USC § 101
2. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3. Claims 1, 8-9 and 12-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite mathematical calculations/formulas. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The following reasons are provided to evaluate subject matter eligibility.
(1) Are the claims directed to a process, machine, manufacture or composition of matter;
(2A) Prong One: Are the claims directed to a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea;
Prong Two: If the claims are directed to a judicial exception under Prong One, then is the judicial exception integrated into a practical application;
(2B) If the claims are directed to a judicial exception and do not integrate the judicial exception, do the claims provide an inventive concept.
With regard to (1), the analysis is a "yes": claim 1 recites a process; claim 12 recites a machine; and claim 13 recites a manufacture.
With regard to (2A) Prong One, the analysis is a "yes". Claims 1, 12 and 13 recite "acquiring parameters of the core feature extraction layer and the global information feature extraction layer by training based on a first scene label of a sample image and a standard cross-entropy loss; training a weight parameter of the LCS module of each level, based on a loss value acquired by performing a pixel-by-pixel calculation on a feature map output from the LCS module of each level and the first scene label of the sample image; and acquiring a parameter of the fully-connected decision layer by training based on the first scene label of the sample image and the standard cross-entropy loss." Under the broadest reasonable interpretation, the claims recite mathematical formulas or calculations. The steps of "acquiring parameter(s)" and "training a weight parameter" recite mathematical formulas or calculations, as they rely on the standard cross-entropy loss and pixel-by-pixel calculations to acquire and update network parameters/values, which are mathematical in nature. The standard cross-entropy loss and pixel-by-pixel calculations are generic mathematical operations. Thus, the claims recite mathematical calculations.
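For context, the "standard cross-entropy loss" referenced in the claim language is conventionally written as follows. This is a textbook formulation offered for illustration, not drawn from the application itself:

```latex
% Standard cross-entropy loss for a C-class classification problem:
% y_c is the one-hot ground-truth label (here, the first scene label)
% and \hat{y}_c is the predicted probability for class c.
\[
  \mathcal{L}_{\mathrm{CE}} \;=\; -\sum_{c=1}^{C} y_c \log \hat{y}_c
\]
% A pixel-by-pixel loss applies the same quantity at every spatial
% location (i, j) of an H x W feature map and averages the results:
\[
  \mathcal{L}_{\mathrm{pixel}} \;=\; -\frac{1}{HW}
  \sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{c=1}^{C} y_c \log \hat{y}_{i,j,c}
\]
```

Both expressions operate purely arithmetically on the label and the network outputs, which is the basis for characterizing the claimed training steps as mathematical calculations.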
With regard to (2A) Prong Two, the analysis is a "no". Claims 1, 12 and 13 recite the additional elements of "the scene recognition model comprising a core feature extraction layer, a global information feature extraction layer connected to the core feature extraction layer, a local supervised learning (LCS) module of at least one level with an attention mechanism, and a fully-connected decision layer". These layers are generic layers of a neural network that are necessary for use of the recited abstract idea; the limitations therefore amount to insignificant extra-solution activity. The claims further recite the additional elements of "processor(s)" and "computer readable medium" at a high level of generality. The addition of insignificant extra-solution activity does not amount to an inventive concept. See MPEP 2106.05(g). Moreover, the additional elements do not reflect an improvement to a technology or technical field, nor do they include the use of a particular machine or a particular transformation. The claim as a whole, looking at the additional elements individually and in combination, does not integrate the abstract idea into a practical application.
With regard to (2B): Claims 1, 12 and 13 recite the additional elements, the neural network layers, as cited above. These layers are generic layers of a neural network that are necessary for use of the recited abstract idea; the limitations therefore amount to insignificant extra-solution activity, and the addition of insignificant extra-solution activity does not amount to an inventive concept. See MPEP 2106.05(g). The claims further recite the additional elements of "processor(s)" and "computer readable medium" at a high level of generality. The additional elements do not reflect an improvement to a technology or technical field, nor do they include the use of a particular machine or a particular transformation. The claim as a whole, looking at the additional elements individually and in combination, does not provide an inventive concept sufficient to amount to significantly more than the judicial exception.
Similarly, claim 8 recites a generic method of using a pre-trained model for scene recognition, which takes in an image and reads image information. This is generic in nature and does not reflect an improvement to a technology or technical field, nor does it include the use of a particular machine or a particular transformation. The claim as a whole, looking at the additional elements individually and in combination, does not integrate the abstract idea into a practical application.
Claims 9 and 14-15 are rejected for the same reasons as claim 8. These dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception; the analysis is not repeated here.
4. Claims 2-7 and 16-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The closest prior art of record (Lv et al., 2019, "An End-to-End Local-Global-Fusion Feature Extraction Network for Remote Sensing Image Scene Classification" (pp. 1-20); and LI et al., U.S. Patent Publication No. 2020/0210773 A1) teaches both global and local extraction of features from an image for scene recognition, but does not teach "training a weight parameter of the LCS module of each level, based on a loss value acquired by performing a pixel-by-pixel calculation on a feature map output from the LCS module of each level and the first scene label of the sample image," with an LCS module of at least one level with an attention mechanism, as recited in claims 1, 12 and 13, in combination with the other limitations.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANAV SETH whose telephone number is (571)272-7456. The examiner can normally be reached on Monday to Friday from 8:30 am to 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached on (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MANAV SETH/Primary Examiner, Art Unit 2672 March 8, 2026