Prosecution Insights
Last updated: April 19, 2026
Application No. 18/275,558

Damage Detection and Image Alignment Based on Polygonal Representation of Objects

Final Rejection §103
Filed: Aug 02, 2023
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Spark Insights Inc.
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average; 16 granted / 22 resolved; +10.7% vs TC avg)
Interview Lift: +37.5% (strong; resolved cases with interview vs without)
Avg Prosecution: 2y 11m (26 currently pending)
Career History: 48 total applications across all art units
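As a minimal sketch of how the interview-lift figure could be derived: the 99% with-interview grant probability appears on this page, but the 72% without-interview rate below is an assumed value chosen so the arithmetic matches the displayed +37.5% — it is not stated anywhere in the source.

```python
# Sketch: one way the "+37.5% interview lift" could be computed.
# rate_with comes from the dashboard; rate_without is an assumption
# made here so the numbers line up -- not a figure from the page.

rate_without = 0.72   # allowance rate without an interview (assumed)
rate_with    = 0.99   # allowance rate with an interview (from the page)

relative_lift = rate_with / rate_without - 1   # relative, not percentage-point, lift
print(f"{relative_lift:.1%}")                  # 37.5%
```

Note the lift reads naturally as a relative increase (99 / 72 = 1.375), not a percentage-point difference; the dashboard does not say which definition it uses.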

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Deltas are versus the Tech Center average estimate • Based on career data from 22 resolved cases
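The headline rate and per-statute deltas above can be reproduced from the raw counts. One observation used in this sketch: back-computing the Tech Center average from each displayed delta yields 40.0% for every statute, so a single `tc_average` is assumed below — the page itself never states that number.

```python
# Sketch: reproducing the dashboard's career allow rate and statute deltas.
# Counts and per-statute rates come from the page; the 40.0% TC average is
# back-computed from the displayed deltas (e.g., 46.5 - 6.5 = 40.0), an
# inference rather than a published figure.

granted, resolved = 16, 22
career_allow_rate = round(granted / resolved * 100)   # 16/22 = 72.7% -> 73

statute_rates = {"101": 11.2, "103": 46.5, "102": 20.8, "112": 19.5}
tc_average = 40.0   # implied by every displayed delta (assumption)

deltas = {s: round(r - tc_average, 1) for s, r in statute_rates.items()}
print(career_allow_rate, deltas)
```

Rounding 16/22 up to 73% also explains why the dashboard's "73%" and the +37.5% interview lift do not compose exactly.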

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Applicant's Remarks filed 11/20/2025 have been received and considered. The 101 rejection in the non-final office action mailed 07/22/2025 is hereby withdrawn. Claims 1, 4, and 13 have been amended. Claims 2 – 3 and 6 have been cancelled. Claims 1, 4 – 5, 7 – 13, and 22 – 28, all of the remaining claims pending in this application, have been rejected.

Response to Applicant's Remarks

Applicant's remarks were filed 11/20/2025 regarding amendments to independent claim 1. Applicant's remarks starting on Page 8 argue that, because claims 2 – 3 (which were rejected under 102(a)(1)) and claim 6 (which was rejected under 103) were cancelled with their claim language added to claim 1, claim 1 is not anticipated by the cited references. Applicant also argues that the Examiner failed to address the limitation of claim 22 reciting "aligning the first image and the second image based on the obtained markers for the first image and the second image" when rejecting claim 22 as applied to "the above claims."

The Examiner disagrees with the remarks made by the Applicant. Pertaining to newly amended claim 1, the Examiner notes that moving the claim language from a claim rejected under 103 into a claim rejected under 102 has resulted in largely the same rationale for rejecting the amended claim. Pertaining to claim 22, the rejection of claim 1 cited Figure 2 (Tilon), which specifically shows pre- and post-event images of a catastrophic event in which severe damage had occurred to various structures. It is clearly visible in Figure 2 that both images are aligned, including the same angle, elevation, and building footprints (i.e., markers).
Furthermore, claims 7 and 8 included claim language with limitations regarding the first and second image alignment, which was also rejected. The Examiner therefore maintains that the combined prior art referenced in the non-final mailed 07/22/2025 does indeed teach the features of the claims, as detailed below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4 – 5, and 10 – 13 are rejected under 35 U.S.C. 103 as being unpatentable over "Post-Disaster Building Damage Detection from Earth Observation Imagery Using Unsupervised and Transferable Anomaly Detecting Generative Adversarial Networks" by Tilon et al. (hereinafter Tilon) in view of "Building Damage Detection in Satellite Imagery Using Convolutional Neural Networks" by Xu et al. (hereinafter Xu).
Claim 1

Regarding Claim 1, Tilon teaches a method for detecting damage in a geographical area (Abstract), the method comprising: receiving a first image of the geographical area, captured after occurrence of a damage-causing event in the geographical area (Figure 2; "We made use of the xBD satellite imagery dataset [49]. It was created with the aim of aiding the development of post-disaster damage and change detection models. It consists of 162.787 pre- and post-event RGB satellite images from a variety of disaster events around the globe. These include floods, (wild)fire, hurricane, earthquake, volcano and tsunami.", Section 3: Materials and Methods - 3.2.1 xBD Dataset); obtaining a second image of the geographical area, the second image including image data of the geographical area prior to the occurrence of the damage-causing event in the geographical area, wherein the first image and the second image contain an overlapping portion comprising one or more common objects (Rejected as applied directly above - refer to figure 2); obtaining markers for the first image and for the second image, wherein the markers are geometrical shapes corresponding to objects in the first image and in the second image (Rejected as applied directly above - refer to figure 2), where the geometrical shapes are the building footprints; and determining damage suffered by an object, from the one or more common objects, based on differences between a first geometrical shape corresponding to the object appearing in the first image and a second geometrical shape corresponding to the object appearing in the second image (Figure 2 "Post-disaster"), where the color of the geometrical shapes indicates the severity of the inflicted damage.
wherein obtaining the markers for the first image and for the second image comprises: deriving outlines for at least the one or more common objects appearing in the first image and in the second image (Rejected as applied directly above). wherein deriving the outlines for the at least the one or more common objects comprises deriving the outlines based on one or more of: a learning model to determine the outlines for the at least the one or more common objects, or filtering-based processing to determine the outlines for the at least the one or more common objects ("Open source repositories such as OpenStreetMap provide costless building footprints for an increasing number of regions, and supervised or unsupervised deep learning are proficient in extracting building footprints from satellite imagery [52,53,54]. Therefore, the proposed cropping strategy and subsequent training can be completely unsupervised and automated.", Section 3: Materials and Methods - 3.3 Data Pre-Processing and Selection). Tilon does not teach wherein determining the damage suffered by the object comprises computing one or more of: difference in a first area enclosed by a first outline of the object in the first image and a second area enclosed by a second outline of the object in the second image; or differences between properties of a first set of line segments of the first outline of the object in the first image and properties of a second set of line segments of the second outline of the object in the second image. 
However, Xu teaches difference in a first area enclosed by a first outline of the object in the first image and a second area enclosed by a second outline of the object in the second image; or differences between properties of a first set of line segments of the first outline of the object in the first image and properties of a second set of line segments of the second outline of the object in the second image ("Experiment results (Table 2) shows that twin-tower models outperform single tower models, and the TTS model achieves the best performance with 0.8302 validation AUC. The better performance of the twin-tower models indicates that useful information can be extracted by comparing buildings and their surroundings in the post-disaster images against those in the pre-disaster images.", Section 2: Data Generation Pipeline). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tilon to incorporate using a learning model to determine a difference between the same building or area footprints of pre- and post-damage images, as taught by Xu. The motivation for doing so would have been to use the outlines as the area footprint between the two images in order to gauge the severity of damage from an incident with a higher level of accuracy when aided by machine learning.

Claim 4

Regarding Claim 4, dependent on claim 1, Tilon, in view of Xu, teaches the invention as claimed in claim 1. Tilon further teaches wherein determining the damage suffered by the object comprises determining the damage suffered by the object based on a learning model to determine damage, the learning model to determine the damage being independent of the learning model to determine the outlines for the at least the one or more common objects (Graphical Abstract; Figure 11).
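The area-difference limitation at issue for claim 1 — comparing the area enclosed by an object's outline before and after the event — can be illustrated with a minimal sketch. The footprint coordinates below are hypothetical, and the shoelace formula stands in for whatever area computation an actual implementation would use:

```python
def polygon_area(pts):
    """Shoelace formula: area of a simple polygon given (x, y) vertices."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical building footprints extracted from pre- and post-event images.
pre_outline  = [(0, 0), (10, 0), (10, 8), (0, 8)]   # intact footprint: 80 sq units
post_outline = [(0, 0), (10, 0), (10, 5), (0, 5)]   # partial collapse:  50 sq units

# Fraction of enclosed area lost -- one possible proxy for damage severity.
damage_ratio = 1.0 - polygon_area(post_outline) / polygon_area(pre_outline)
print(damage_ratio)   # 0.375 -> 37.5% of the footprint area lost
```

The claim's alternative branch (comparing properties of the outlines' line segments, e.g. their lengths or orientations) would follow the same pattern over edges instead of enclosed area.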
Claim 5

Regarding Claim 5, dependent on claim 4, Tilon, in view of Xu, teaches the invention as claimed in claim 4. Tilon does not teach wherein the learning model to determine the outlines and the learning model to determine damage are implemented using one or more neural networks learning engines. However, Xu teaches wherein the learning model to determine the outlines and the learning model to determine damage are implemented using one or more neural networks learning engines ("To generate examples of undamaged buildings, we first used a building detection ML model to identify all buildings in the damage assessment area, and then filtered out all buildings that were marked by UNOSAT analysts as damaged.", Section 2: Data Generation Pipeline - Identify Undamaged Buildings…"Same as TTC, except combine the extracted feature values by subtracting them element-wise instead of concatenating them. This architecture is designed to more directly capture the differences in the pre- and post-disaster images, which is a good indicator of building damage.", Section 2: Data Generation Pipeline - Twin-tower Substract (TTS) Model). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tilon to incorporate both a learning model to determine outlines and a learning model to determine damage having one or more neural networks, as taught by Xu. The motivation for doing so would have been to increase the accuracy of the models due to their networks having specific detection parameters and being able to perform their respective detections at a much faster rate.

Claim 10

Regarding Claim 10, dependent on claim 1, Tilon, in view of Xu, teaches the invention as claimed in claim 1.
Tilon further teaches wherein the geometrical shapes comprise one or more of: points, lines, circles, or polygons (Rejected as applied to claim 1).

Claim 11

Regarding Claim 11, dependent on independent claim 1, Tilon, in view of Xu, teaches the invention as claimed in claim 1. Tilon further teaches selecting the second image from a repository of baseline images for different geographical areas based on information identifying the geographical area associated with the second image (Rejected as applied to claim 1).

Claim 12

Regarding Claim 12, dependent on independent claim 1, Tilon, in view of Xu, teaches the invention as claimed in claim 1. Tilon further teaches wherein the first image and the second image of the geographical area include at least one of: an aerial photo of the geographical area captured by image-capture device on one or more of a satellite vehicle, or a low-flying aerial vehicle, or a digital surface model (DSM) image (Rejected as applied to claim 1 - refer to figure 2 "satellite images").

Claim 13, an independent system claim, is rejected for the same reasons as applied to claim 1.

Claims 7 – 9 and 22 – 28 are rejected under 35 U.S.C. 103 as being unpatentable over "Post-Disaster Building Damage Detection from Earth Observation Imagery Using Unsupervised and Transferable Anomaly Detecting Generative Adversarial Networks" by Tilon et al. (hereinafter Tilon) in view of "Building Damage Detection in Satellite Imagery Using Convolutional Neural Networks" by Xu et al. (hereinafter Xu) in further view of US Publication No. 2014/0064554 A1 to Coulter et al. (hereinafter Coulter).

Claim 7

Regarding Claim 7, dependent on claim 1, Tilon, in view of Xu, teaches the invention as claimed in claim 1. Tilon further teaches b) a second alignment procedure comprising: deriving outlines for the at least the one or more common objects in the first image and in the second image (Figure 2).
Examiner notes that while only one of the three methods needs to be rejected due to the claim language stating "one or more of", Examiner cites Coulter, in combination with Tilon in view of Xu, to further reject methods a) and c). Tilon does not explicitly teach (though it could be implied) aligning the first image and the second image according to one or more of: a) a first alignment procedure comprising: aligning the first image and the second image according to geo-referencing information associated with the first image and the second image; or c) a third alignment procedure comprising: aligning the first image and the second image based on image perspective information associated with the first image and the second image, the image perspective information determined according to measurement data from one or more inertial navigation sensors associated with image-capture devices to capture the first image and the second image. Nor does Xu or the combination. However, Coulter teaches aligning the first image and the second image according to one or more of: a) a first alignment procedure comprising: aligning the first image and the second image according to geo-referencing information associated with the first image and the second image (Figure 3C; "Using the station matching approach, the images are aligned (co-registered) first, and then further processing (e.g., geo-referencing) may be applied if desired. For a large number of applications, only crude absolute positioning is required, which means that after images are spatially co-registered, only a certain level of positional accuracy needs to be achieved.
In some cases, information about position and attitude calculated by sensors on-board an aircraft (e.g., using global positioning systems and inertial measurement units) is sufficient to crudely position the imagery, which enables automated georeferencing (direct georeferencing).", Paragraph [0068]); c) a third alignment procedure comprising: aligning the first image and the second image based on image perspective information associated with the first image and the second image, the image perspective information determined according to measurement data from one or more inertial navigation sensors associated with image-capture devices to capture the first image and the second image (Figure 3C; "Using the station matching approach, the images are aligned (co-registered) first, and then further processing (e.g., geo-referencing) may be applied if desired. For a large number of applications, only crude absolute positioning is required, which means that after images are spatially co-registered, only a certain level of positional accuracy needs to be achieved. In some cases, information about position and attitude calculated by sensors on-board an aircraft (e.g., using global positioning systems and inertial measurement units) is sufficient to crudely position the imagery, which enables automated georeferencing (direct georeferencing).", Paragraph [0068]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Tilon, in view of Xu, to incorporate the alignment of the images by geo-referencing or by using the inertial navigation sensors associated with the capture device, as taught by Coulter. The motivation for doing so would have been to use the alignment of multiple images captured and potentially creating a mosaic of the images that cover a larger viewable area to see the extent of damage inflicted.
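Alignment procedure a) — registering two images purely from their geo-referencing metadata — can be sketched with GDAL-style affine geotransform tuples. The coordinate values below are made up for illustration, and the geotransform layout (`origin_x, pixel_width, row_rotation, origin_y, col_rotation, -pixel_height`) is an assumption borrowed from the GDAL convention rather than anything in Coulter:

```python
# Sketch: pixel offset needed to align a post-event image to a pre-event
# image using only geo-referencing metadata (GDAL-style geotransforms).
# All coordinate values are hypothetical.

pre_gt  = (500000.0, 0.5, 0.0, 4100000.0, 0.0, -0.5)   # pre-event image
post_gt = (500010.0, 0.5, 0.0, 4099995.0, 0.0, -0.5)   # post-event image

def pixel_to_world(gt, col, row):
    """Map a pixel (col, row) to world coordinates via the affine geotransform."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Where does the post image's origin fall, in pre-image pixel coordinates?
px, py = pixel_to_world(post_gt, 0, 0)
col_off = (px - pre_gt[0]) / pre_gt[1]   # 20.0 -> shift 20 pixels east
row_off = (py - pre_gt[3]) / pre_gt[5]   # 10.0 -> shift 10 pixels down
print(col_off, row_off)
```

This is the "crude absolute positioning" case Coulter describes; feature-based co-registration (procedure b) would refine the alignment after this metadata-only shift.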
Claim 8

Regarding Claim 8, dependent on claim 7, Tilon, in view of Xu and Coulter, teaches the invention as claimed in claim 7. Tilon further teaches wherein the second alignment procedure further comprises: excluding at least one of the derived outlines determined to correspond to a respective at least one object, from the one or more common objects in the first image and in the second image, that was damaged during the occurrence of the damage-causing event ("For smaller patch sizes, as explained in Section 3.3, the assumption was made that the smaller the patch size, the more adept the ADGAN would be in learning the image distribution of the building characteristics, instead of its surroundings. For the example in Figure 9, this seemed to hold true. In the patches of size 64 × 64 and 32 × 32, high anomaly scores were found all throughout the image, including the damaged building itself. This suggested that our assumption was correct. In short, the large-scale damage pattern of wildfire, plus the removal of vegetation, resulted in a high performing model.", Section 4: Results - 4.3 The Importance of Context); wherein aligning the first image and the second image comprises aligning the first image and the second image based on a set of outlines, selected from the derived outlines for the at least the one or more common objects, excluding the at least one of the derived outlines (Figure 9), where they have aligned the buildings in the pre- and post-wildfire images, while removing the vegetation which was causing lower anomaly scores.

Claim 9

Regarding Claim 9, dependent on claim 7, Tilon, in view of Xu and Coulter, teaches the invention as claimed in claim 7. Tilon further teaches wherein the image perspective information includes respective nadir angle information for the first image and the second image (Figure 2), where the two images were captured at the nadir angle (looking straight down).
Examiner notes that Coulter also teaches wherein the image perspective information includes respective nadir angle information for the first image and the second image ("In some embodiments described herein, multi-temporal images may be aligned automatically using routines to find matching control points (features common between two images) and applying existing simple warping functions (such as projective or second-order polynomial). This approach can include matching the imaging sensor position and viewing angles regardless of the platform or sensor type/orientation.", Paragraph [0063]…"A complete software system for collecting nadir and oblique station matched images, pairing matched images, co-registering matched images, and visualizing changes between multi-temporal image sets may be run to implement the aforementioned methods of spatial co-registration. Such software system may be implemented as modules and executed by a processor. Exploitation of remote sensing imagery can be used for synoptic wide area monitoring and change detection. When multi-temporal remotely sensed images (e.g., airborne or satellite) are precisely aligned, image sets may be used to detect land cover or feature changes of interest for a wide variety of purposes (e.g., natural or anthropogenic damage, personnel or equipment movements, etc.).", Paragraph [0081]).

Claim 22, an independent claim, is rejected for the same reasons as applied to the above claims. Claims 23 – 28 are also rejected for the same reasons as applied to the above claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached on (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RONDE LEE MILLER/Examiner, Art Unit 2663 /GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Aug 02, 2023
Application Filed
Jul 17, 2025
Non-Final Rejection — §103
Nov 20, 2025
Response Filed
Mar 03, 2026
Final Rejection — §103
Apr 03, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215: LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548114: METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR (granted Feb 10, 2026; 2y 5m to grant)
Patent 12524833: X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM (granted Jan 13, 2026; 2y 5m to grant)
Patent 12502905: SECURE DOCUMENT AUTHENTICATION (granted Dec 23, 2025; 2y 5m to grant)
Patent 12505581: ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN (granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73% (99% with interview, +37.5%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
