Prosecution Insights
Last updated: April 19, 2026
Application No. 18/259,479

SYSTEM AND METHOD FOR LOCAL SPATIAL FEATURE POOLING FOR FINE-GRAINED REPRESENTATION LEARNING

Status: Final Rejection (§103)
Filed: Jun 27, 2023
Examiner: ISLAM, MEHRAZUL NMN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Carnegie Mellon University
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 58% (29 granted / 50 resolved; -4.0% vs TC avg)
Interview Lift: +28.3% higher allow rate for resolved cases with an interview (strong)
Avg Prosecution: 3y 4m typical timeline; 46 applications currently pending
Total Applications: 96 across all art units (career history)

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 68.6% (+28.6% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 50 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Applicant’s response to the Non-final Office Action dated 09/11/2025, filed with the office on 12/10/2025, has been entered and made of record.

Response to Amendment

In light of Applicant’s amendment of the claims, the claim rejection under 35 U.S.C. 112(b) with respect to claim 5 is withdrawn. In light of Applicant’s amendment of the claims, the claim objections with respect to claims 2 and 3 are withdrawn.

Status of Claims

Claims 1-14 are pending. Claims 1-3 and 5 are amended.

Response to Arguments

Applicant’s amendment of independent Claims 1 and 14, which has altered the scope of the claims of the instant application, has necessitated the new ground(s) of rejection presented in this office action with respect to claims of the instant application. Accordingly, in response to Applicant’s arguments that are merely directed to the amended portion of the claims, new analyses have been presented below, which make Applicant’s arguments moot. Consequently, THIS ACTION IS MADE FINAL.

Claim Objections

Claims 4-12 are objected to because of the following informalities: the claims should include a comma (,) after the preamble. For example, claim 4 recites “The method of claim 3 wherein the subset…” whereas it should read “The method of claim 3, wherein the subset…”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6 and 9-14 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. (US 2020/0160124 A1) in view of Li et al. (US 2019/0156144 A1) and in further view of Liao et al. (US 2021/0201071 A1).
Regarding claim 1, Fu teaches, A method comprising: extracting (Fu, ¶0113: “method in the subject matter described herein, comprising: extracting”) key local landmarks from an input image; (Fu, ¶0003: “the first attention region including a discriminative portion of an object in the image; extracting a first local feature of the first attention region”) the key local landmarks representing locations (Fu, ¶0078: “location of the center point of the attention region may be localized by determining a region having a highest response value”) on the original image; (Fu, ¶0032: “the region extraction section 210 directly takes the attention region 201 in the image 170”) (Fu, ¶0034: “a CNN network including one or more convolutional layers, activation layers, and/or pooling layers 222-1 through 222-N for extracting the feature maps”) from the locations of the mapped key local landmarks on the feature map; (Fu, ¶0078: “initial location of the center point of the attention region may be localized by determining a region having a highest response value from the feature map output by the respective feature extraction sub-network”) and combining all or some of the local feature representations (Fu, ¶0037: “FC layer 217 is concatenated to the output of the FC layer 227”) with global feature representations produced by the deep CNN model (Fu, ¶0037: “FC layer 217 for comprehensive processing of the global feature 213”) to create combined feature representations. (Fu, ¶0046: “the local feature 333 may be combined with the global feature 213”).

However, Fu does not explicitly teach, mapping locations of the key local landmarks to a feature map of an intermediate convolutional layer of a deep CNN model at their corresponding image pixel locations from the input image. In an analogous field of endeavor, Li teaches, mapping locations of the key local landmarks (Li, ¶0111: “The frame fusion detection data of at least one point in the fusion feature map may include, but is not limited to, for example, coordinate data, location and size data”) to a feature map of an intermediate convolutional layer of a deep CNN model (Li, ¶0028: “predict to obtain a plurality of fusion feature maps from a to-be-processed image through a deep convolutional neural network for target area frame detection”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Fu using the teachings of Li to introduce a feature map of an intermediate layer. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of a hierarchical model that may be used for increasingly complex features. Therefore, it would have been obvious to combine the analogous arts Fu and Li to obtain the above-described limitations in claim 1.

However, the combination of Fu and Li does not explicitly teach, at their corresponding image pixel locations from the input image. In another analogous field of endeavor, Liao teaches, at their corresponding image pixel locations from the input image. (Liao, ¶0053: “the pixel locations in the two feature maps have one-to-one correspondence for the locations in the images 102 and 104”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Fu, in view of Li, using the teachings of Liao to introduce one to one correspondence mapping. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of improving the processing through fine grained geometrical alignment. Therefore, it would have been obvious to combine the analogous arts Fu, Li and Liao to obtain the invention in claim 1.

Regarding claim 2, Fu in view of Li and in further view of Liao teaches, The method of claim 1, further comprising: sending the combined feature representations to a classifier to be used to classify objects in the input image. (Fu, ¶0037: “FC layer 217 is concatenated to the output of the FC layer 227… input to a further Softmax layer… Softmax function for the classification task”).

Regarding claim 3, Fu in view of Li and in further view of Liao teaches, The method of claim 1, further comprising: selecting a subset of the local feature representations (Fu, ¶0084: “second attention region is comprised in the first attention region and comprises a discriminative sub-portion of the object in the image”) to be combined with the global feature representations. (Fu, ¶0046: “the local feature 333 may be combined with the global feature 213 and/or local feature 223 to determine the category of the object in the image 170”).

Regarding claim 4, Fu in view of Li and in further view of Liao teaches, The method of claim 3 wherein the subset of local feature representations is selected based on a weighting scheme (Li, ¶0111: “each point in the fusion feature map may have one, three, six, or nine coordinate data corresponding to the object detection frame, and the confidence data of the coordinate data”) wherein a predetermined number of higher-weighted local feature representations are selected. (Li, ¶0115: “if the confidence of certain frame coordinate data of a certain point is greater than a predetermined threshold (e.g., 60%, 70%), an area frame corresponding to the frame coordinate data is determined at one of the target area frame data”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Fu, in view of Li, in further view of Liao, using the additional teachings of Li to introduce a confidence data. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of only selecting the feature representations with higher than a threshold confidence. Therefore, it would have been obvious to combine the analogous arts Fu, Li and Liao to obtain the invention in claim 4.

Regarding claim 5, Fu in view of Li and in further view of Liao teaches, The method of claim 4 wherein the weighting scheme is a learned weighting scheme. (Fu, ¶0024: “accurate discriminative portion localization can promote learning fine-grained features, which in turn can further help to accurately localize the discriminative portions”).

Regarding claim 6, Fu in view of Li and in further view of Liao teaches, The method of claim 5 wherein the learned weighting scheme assigns weights depending on the ability of the local feature representations to discriminate (Li, ¶0111: “The prediction accuracy information may be confidence data of the frame fusion detection data, such as prediction accurate probability”) between objects in the input image belonging to different subclasses. (Fu, ¶0021: “for different species of birds, the differences may lie in the colors and/or patterns of their necks, backs or tails, the shapes and/or colors of their beaks or claws, or the like. Such a portion that is applicable to determine a specific category of an object may be referred to as a discriminative portion of the object”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Fu, in view of Li, in further view of Liao, using the additional teachings of Li to introduce assigning confidence based on accuracy of detection. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of only selecting the feature representations with higher detection accuracy. Therefore, it would have been obvious to combine the analogous arts Fu, Li and Liao to obtain the invention in claim 6.

Regarding claim 9, Fu in view of Li and in further view of Liao teaches, The method of claim 3 when the subset of local feature representations are selected based on explicit knowledge of a domain of objects depicted in the input image. (Fu, ¶0052: “if the learning network structure in FIG. 2 or 3 is to be trained as being capable of recognizing a plurality of species of birds, the training images may include images of different species of birds”).

Regarding claim 10, Fu in view of Li and in further view of Liao teaches, The method of claim 1 wherein the local feature representations are combined with the global feature representations by concatenation. (Fu, ¶0037: “FC layer 217 is concatenated to the output of the FC layer 227”).

Regarding claim 11, Fu in view of Li and in further view of Liao teaches, The method of claim 1 wherein the key landmarks in the input image are mapped to the feature map after the third convolutional layer of the deep CNN model. (Li, ¶0012: “convoluting the current fusion feature maps through the fourth convolutional layers to obtain the adjusted fusion feature maps”). The proposed combination as well as the motivation for combining Fu and Li references presented in the rejection of claim 1, apply to claim 11 and are incorporated herein by reference. Thus, the method recited in claim 11 is met by Fu and Li.

Regarding claim 12, Fu in view of Li and in further view of Liao teaches, The method of claim 1 wherein extracting key local landmarks from an input image comprises exposing the input image to a CNN model trained with a dataset comprising images with annotated landmarks. (Fu, ¶0022: “determining specific regions in the image by known bounding boxes or region annotations in a supervised fashion; then, extracting features from each of the regions and recognizing a specific category of the object based on these features”).

Regarding claim 13, Fu in view of Li and in further view of Liao teaches, A system comprising: a processor; and memory, storing software that, when executed by the processor, performs the method of claim 1. (Fu, ¶0105: “The device comprises: a processing unit; and a memory coupled to the processing unit and having instructions stored thereon. The instructions, when executed by the processing unit, cause the device to perform acts”).

Regarding claim 14, Fu in view of Li and in further view of Liao teaches, A system comprising: a processor; and memory, storing software that, when executed by the processor, performs the method of claim 4. (Fu, ¶0105: “The device comprises: a processing unit; and a memory coupled to the processing unit and having instructions stored thereon. The instructions, when executed by the processing unit, cause the device to perform acts”).

Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. (US 2020/0160124 A1) in view of Li et al. (US 2019/0156144 A1) in further view of Liao et al. (US 2021/0201071 A1) and still in further view of Iwamoto et al. (US 2014/0328543 A1).

Regarding claim 7, Fu in view of Li and in further view of Liao teaches, The method of claim 4. However, the combination of Fu, Li and Liao does not explicitly teach, wherein the predetermined number is a learned number. In an analogous field of endeavor, Iwamoto teaches, the predetermined number (Iwamoto, ¶0052: “selecting a specified number of feature points in a descending order of scale based on feature point information”) is a learned number. (Iwamoto, ¶0051: “selecting unit 12 can calculate a specified number by dividing the total size by a size of a local feature descriptor at one feature point”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Fu, in view of Li, and in further view of Liao, using the teachings of Iwamoto, to introduce a specified number of feature points. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of optimizing the number of feature points selected for feature representation. Therefore, it would have been obvious to combine the analogous arts Fu, Li, Liao and Iwamoto to obtain the invention in claim 7.

Regarding claim 8, Fu in view of Li, in further view of Liao and still in further view of Iwamoto teaches, The method of claim 7 wherein the predetermined number is learned based on an optimal number of local feature presentations needed (Iwamoto, ¶0100: “the selection number determining unit 50 may be configured so as to determine the number of feature points and the number of dimensions so that at least one of the number of feature points and the number of dimensions is reduced”) to discriminate between sub-classes. (Fu, ¶0021: “for different species of birds, the differences may lie in the colors and/or patterns of their necks, backs or tails, the shapes and/or colors of their beaks or claws, or the like. Such a portion that is applicable to determine a specific category of an object may be referred to as a discriminative portion of the object”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Fu, in view of Li, in further view of Liao and still in further view of Iwamoto using the additional teachings of Iwamoto, to introduce selecting an optimal number of feature points. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of increasing the speed of operation. Therefore, it would have been obvious to combine the analogous arts Fu, Li, Liao and Iwamoto to obtain the invention in claim 8.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEHRAZUL ISLAM/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
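
To make the claim mapping above easier to follow, the sketch below illustrates the kind of pipeline the rejected independent claim recites: local landmarks whose pixel locations are mapped onto an intermediate convolutional feature map, local feature vectors pooled at those mapped locations, a learned weighting used to keep a subset of them, and the result concatenated with a global representation before classification. It is a minimal illustration only, assuming PyTorch/torchvision and hypothetical design choices (a ResNet-50 backbone truncated after layer3, an external landmark detector, top-4 selection, 200 classes); it is not the applicant's implementation and not code from Fu, Li, Liao, or Iwamoto.

```python
# Illustrative sketch only: hypothetical backbone, landmark count, and scoring
# head. Assumes PyTorch and torchvision are installed.
import torch
import torch.nn as nn
import torchvision.models as models

class LocalPoolingClassifier(nn.Module):
    def __init__(self, num_classes=200, top_k=4):
        super().__init__()
        backbone = models.resnet50(weights=None)  # pretrained weights omitted for brevity
        # "Intermediate convolutional layer": truncate the backbone after layer3
        # (total stride 16, 1024 feature channels for ResNet-50).
        self.trunk = nn.Sequential(*list(backbone.children())[:7])
        self.stride, feat_dim = 16, 1024
        self.top_k = top_k
        self.score = nn.Linear(feat_dim, 1)        # learned weighting over landmarks
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim * (1 + top_k), num_classes)

    def forward(self, image, landmarks_xy):
        # image: (B, 3, H, W); landmarks_xy: (B, K, 2) pixel coordinates from a
        # separate landmark detector (not modeled here).
        fmap = self.trunk(image)                               # (B, C, H/16, W/16)
        B, C, Hf, Wf = fmap.shape
        # Map landmark pixel locations onto the intermediate feature map.
        fx = (landmarks_xy[..., 0] / self.stride).long().clamp(0, Wf - 1)
        fy = (landmarks_xy[..., 1] / self.stride).long().clamp(0, Hf - 1)
        # Pool a local feature vector at each mapped location.
        local = torch.stack([fmap[b, :, fy[b], fx[b]].t() for b in range(B)])  # (B, K, C)
        # Learned weighting; keep only the top-k local representations.
        w = self.score(local).squeeze(-1)                      # (B, K)
        idx = w.topk(self.top_k, dim=1).indices
        sel = torch.gather(local, 1, idx.unsqueeze(-1).expand(-1, -1, C))      # (B, k, C)
        # Global representation from the same backbone, combined by concatenation.
        glob = self.global_pool(fmap).flatten(1)               # (B, C)
        combined = torch.cat([glob, sel.flatten(1)], dim=1)
        return self.classifier(combined)                       # class logits

if __name__ == "__main__":
    # Toy smoke test on random data (shapes only, no semantic meaning).
    model = LocalPoolingClassifier()
    logits = model(torch.randn(2, 3, 448, 448), torch.randint(0, 448, (2, 8, 2)))
    print(logits.shape)  # torch.Size([2, 200])
```

In this sketch the stride-16 coordinate scaling stands in for the claimed mapping of pixel locations onto the intermediate feature map, and the small scoring head stands in for the "learned weighting scheme" of claims 4-5; the claims themselves do not prescribe any of these specific choices.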

Prosecution Timeline

Jun 27, 2023: Application Filed
Sep 06, 2025: Non-Final Rejection — §103
Dec 10, 2025: Response Filed
Jan 16, 2026: Final Rejection — §103
Apr 07, 2026: Request for Continued Examination
Apr 13, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602808: METHOD FOR INSPECTING AN OBJECT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592075: REMOTE SENSING FOR INTELLIGENT VEGETATION TRIM PREDICTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579695: Method of Generating Target Image Data, Electrical Device and Non-Transitory Computer Readable Medium (granted Mar 17, 2026; 2y 5m to grant)
Patent 12524900: METHOD FOR IMPROVING ESTIMATION OF LEAF AREA INDEX IN EARLY GROWTH STAGE OF WHEAT BASED ON RED-EDGE BAND OF SENTINEL-2 SATELLITE IMAGE (granted Jan 13, 2026; 2y 5m to grant)
Patent 12489964: PATH PLANNING (granted Dec 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 86% (+28.3%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 50 resolved cases by this examiner. Grant probability derived from career allow rate.
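
The report does not publish the projection model itself, but the headline figures are consistent with straightforward arithmetic on the career statistics shown above. The snippet below is an assumed reconstruction for orientation only, not the product's actual formula.

```python
# Assumed reconstruction of the displayed projections (not the tool's formula).
granted, resolved = 29, 50                 # examiner's resolved-case history
allow_rate = granted / resolved            # 0.58 -> "58% Grant Probability"
interview_lift = 0.283                     # reported allow-rate lift with an interview
with_interview = allow_rate + interview_lift
print(f"{allow_rate:.0%} baseline, {with_interview:.0%} with interview")  # 58% baseline, 86% with interview
```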
