Prosecution Insights
Last updated: April 19, 2026

Application No. 18/015,886 — IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Status: Final Rejection under §103 (OA Round 2, Final)
Filed: Jan 12, 2023
Examiner: BONANSINGA, AARON TIMOTHY
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: NEC Corporation

Forecast: 76% grant probability (Favorable) • 3-4 OA rounds expected • 2y 11m to grant • 99% grant probability with interview
Examiner Intelligence

Career allowance rate: 76% (19 granted / 25 resolved) — above average, +14.0% vs TC avg
Interview lift: +33.3% (allowance rate of resolved cases with an interview vs. without)
Typical timeline: 2y 11m average prosecution; 29 applications currently pending
Career history: 54 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC avg
§101       7.4%    -32.6%
§103      69.6%    +29.6%
§102      10.3%    -29.7%
§112       9.2%    -30.8%

Black line = Tech Center average estimate • Based on career data from 25 resolved cases
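The panel's headline figures can be reproduced from its own raw counts. A quick sanity check in Python — the counts and deltas below are copied from the panel above, not recomputed from USPTO data; note that every statute row implies the same 40.0% baseline, which suggests the deltas were computed against a single Tech Center estimate:

```python
# Reproduce the panel's headline figures from its raw counts. The counts
# (19 granted / 25 resolved) and the per-statute deltas are taken from
# the panel above, not recomputed from USPTO data.
granted, resolved = 19, 25
allow_rate = granted / resolved             # career allowance rate
print(f"{allow_rate:.0%}")                  # -> 76%

# Each statute row shows a rate and a delta vs. the TC 2600 average, so
# the implied baseline is rate - delta; all four rows give 40.0%.
rates  = {"101": 7.4, "102": 10.3, "103": 69.6, "112": 9.2}
deltas = {"101": -32.6, "102": -29.7, "103": 29.6, "112": -30.8}
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)                               # every value is 40.0
```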

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments (see remarks), filed 08/12/2025, with respect to claims 1-13 have been fully considered but are not persuasive.

The applicant argues on page 1, “Chow fails to teach to deform object presence areas in two object presence images to generate two deformed images.” In response, the office does not find this argument to be persuasive. Based on the breadth of the claim language, the prior art by CHOW et al. (Patent No. US 10,032,077 B1) explicitly teaches deform object presence areas in two object presence images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36]. Further at Column [04], Line [29-36]-CHOW discloses the radar image processing system 118 comprises a SAR imaging module 202 that generates SAR images 204-206 of the scene 107 based upon the SAR data 108. These SAR images 204-206 comprise views of the same scene 107 taken at different times), in which one or more objects are present (Fig. 1. Column [03], Line [60-65]-CHOW discloses the radar image processing system 118 identifies vehicle tracks in the CCD image), obtained from each of two observed images to generate two deformed images (Fig. 3. Column [04], Line [37-53]-CHOW discloses the CCD module 208 processes a first SAR image 204 and a second SAR image 205 to generate a first CCD image 210, and processes a third SAR image 206 and the second SAR image 205 to generate a second CCD image 211, where the second CCD image 211 is temporally disjoint from the first CCD image 210 (wherein the third image is generated through joint pre-processing of the first and second temporally disjoint images through techniques such as principal component analysis (PCA), independent component analysis (ICA), and computation of the normalized coherence product (NCP)).
Additionally, Column [05], Line [52-67]-CHOW discloses the track identification module 216 comprises a segmentation module 302 that segments the input CCD image 214 into a plurality of CCD image chips 304 (wherein the chips may be of equal size or different size and/or spatial resolution)), based on a size of the object appearing in each of the two observed images (Fig. 7. Column [09], Line [22-41]-CHOW discloses the selected size of the chips will depend on the resolution of the initial CCD image, and is selected to mitigate the generation of artifacts in later stages of the methodology, while capturing large enough portions of the original CCD image to be able to identify features of interest. Inverse Radon transforms of each of the Radon transforms of the CCD image chips are calculated. At 712, morphological erosion may be applied in order to reduce line artifacts resulting from the inverse Radon transform process (wherein the office considers morphological operations to be a form of image deformation). Please also read Column [08], Line [18-43] and Column [09], Line [01-20]).

The applicant argues on page 2, “Chow fails to teach determining difference of the object between the two images.” and on page 3, “Chow fails to teach to generate an image capable of identifying the determined difference.” In response, the office does not find this argument to be persuasive. Based on the breadth of the claim language, the prior art by CHOW et al. (Patent No. US 10,032,077 B1) explicitly teaches generate a synthesized image (Fig. 3, #214 called CCD image. Column [05], Line [52-67]) by synthesizing the two deformed images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36].
Further at Column [07], Line [18-24]-CHOW discloses the image reconstruction module 310 then receives the plurality of Radon transforms 308 and performs an inverse Radon transform to each of the Radon transforms 308, generating new image chips and stitching them together to construct a final track detection image of the scene 107 depicted in the original SAR images 204-206, with identified tracks indicated in the track detection image. Please also see Fig. 4 and read Column [04], Line [37-53]), determine difference of the object between the two object presence images (Fig. 7. Column [05], Line [24-51]-CHOW discloses the track identification module 216, upon identifying vehicle tracks in the CCD image 214 (and thus the scene 107), assigns a classification to the tracks (e.g., according to various characteristics) and signifies this classification in the graphical indication 120. The classification can indicate that the vehicle tracks are of a first width (e.g., rather than a second width). In another example, the classification of vehicle tracks can indicate that the vehicle tracks are of a second width (e.g., rather than the first width). Please also read Column [04], Line [54-68] and claims 5 and 10-11), and generate an image capable of identifying the determined difference (Fig. 7. Column [07], Line [18-47]-CHOW discloses the track identification module 216 can signify this classification by assigning a first color to tracks having the first width and assigning a second color to tracks having the second width in the graphical indication 120 (e.g., where the graphical indication 120 is an image of the scene 107). An analyst can then view the graphical indication 120 and see (at a glance) which tracks were made by, for example, a passenger vehicle with the first track width and which were made by a commercial or cargo vehicle with the second track width).
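The coherent change detection the rejection relies on compares two co-registered complex SAR passes: where the scene is unchanged the passes stay mutually coherent, and fresh disturbance such as vehicle tracks destroys that coherence. A minimal NumPy sketch of the idea — the function name, window size, and boxcar averaging are illustrative choices, not CHOW's actual implementation:

```python
import numpy as np

def ccd_image(img1, img2, win=5):
    """Normalized coherence of two co-registered complex SAR images.

    Values near 1 mean the scene is unchanged between the two passes;
    values near 0 flag disturbance such as fresh vehicle tracks.
    """
    def box_mean(a):
        # local average over a win x win window (edge-padded)
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.empty(a.shape, dtype=a.dtype)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = ap[i:i + win, j:j + win].mean()
        return out

    num = box_mean(img1 * np.conj(img2))
    den = np.sqrt(box_mean(np.abs(img1) ** 2) * box_mean(np.abs(img2) ** 2))
    return np.abs(num) / np.maximum(den, 1e-12)
```

For two identical passes the map is 1 everywhere; decorrelating a patch of one image drives the map toward 0 there, producing the kind of CCD image that CHOW's images 210 and 211 represent.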
The applicant argues on page 3, “Accordingly, the pending claims are patentable.” In response, the office does not find this argument to be persuasive for the reasons stated above and below. The office respectfully encourages the applicant to amend the claims to overcome the prior art of record.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over CHOW (US 10032077 B1), hereinafter referenced as CHOW, in view of HU et al. (WO 2020/233591 A1, corresponding to US 20210011149 A1), hereinafter referenced as HU.

Regarding claim 1, CHOW explicitly teaches an image processing device (Fig. 1, #100 called a SAR system. Column [03], Line [53-55].
Further at Column [03], Line [04-06]-CHOW discloses the disclosure is directed to systems and methods for identifying vehicle tracks in synthetic aperture radar (SAR) coherent change detection (CCD) images) comprising: a memory storing software instructions (Fig. 8, #804 called memory. Column [03], Line [53-55]), and one or more processors (Fig. 8, #802 called processors. Column [03], Line [53-55]) configured to execute the software instructions to deform object presence areas in two object presence images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36]. Further at Column [04], Line [29-36]-CHOW discloses the radar image processing system 118 comprises a SAR imaging module 202 that generates SAR images 204-206 of the scene 107 based upon the SAR data 108. These SAR images 204-206 comprise views of the same scene 107 taken at different times), in which one or more objects are present (Fig. 1. Column [03], Line [60-65]-CHOW discloses the radar image processing system 118 identifies vehicle tracks in the CCD image), obtained from each of two observed images to generate two deformed images (Fig. 3. Column [04], Line [37-53]-CHOW discloses the CCD module 208 processes a first SAR image 204 and a second SAR image 205 to generate a first CCD image 210, and processes a third SAR image 206 and the second SAR image 205 to generate a second CCD image 211, where the second CCD image 211 is temporally disjoint from the first CCD image 210 (wherein the third image is generated through joint pre-processing of the first and second temporally disjoint images through techniques such as principal component analysis (PCA), independent component analysis (ICA), computation of the normalized coherence product (NCP)). 
Additionally, Column [05], Line [52-67]-CHOW discloses the track identification module 216 comprises a segmentation module 302 that segments the input CCD image 214 into a plurality of CCD image chips 304 (wherein the chips may be of equal size or different size and/or spatial resolution)), based on a size of the object appearing in each of the two observed images (Fig. 7. Column [09], Line [22-41]-CHOW discloses the selected size of the chips will depend on the resolution of the initial CCD image, and is selected to mitigate the generation of artifacts in later stages of the methodology, while capturing large enough portions of the original CCD image to be able to identify features of interest. Inverse Radon transforms of each of the Radon transforms of the CCD image chips are calculated. At 712, morphological erosion may be applied in order to reduce line artifacts resulting from the inverse Radon transform process (wherein the office considers morphological operations to be a form of image deformation). Please also read Column [08], Line [18-43] and Column [09], Line [01-20]), and generate a synthesized image (Fig. 3, #214 called CCD image. Column [05], Line [52-67]) by synthesizing the two deformed images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36]. Further at Column [07], Line [18-24]-CHOW discloses the image reconstruction module 310 then receives the plurality of Radon transforms 308 and performs an inverse Radon transform to each of the Radon transforms 308, generating new image chips and stitching them together to construct a final track detection image of the scene 107 depicted in the original SAR images 204-206, with identified tracks indicated in the track detection image. Please also see Fig. 4 and read Column [04], Line [37-53]), determine difference of the object between the two object presence images (Fig. 7.
Column [05], Line [24-51]-CHOW discloses the track identification module 216, upon identifying vehicle tracks in the CCD image 214 (and thus the scene 107), assigns a classification to the tracks (e.g., according to various characteristics) and signifies this classification in the graphical indication 120. The classification can indicate that the vehicle tracks are of a first width (e.g., rather than a second width). In another example, the classification of vehicle tracks can indicate that the vehicle tracks are of a second width (e.g., rather than the first width). Please also read Column [04], Line [54-68]), and generate an image capable of identifying the determined difference (Fig. 7. Column [07], Line [18-47]-CHOW discloses the track identification module 216 can signify this classification by assigning a first color to tracks having the first width and assigning a second color to tracks having the second width in the graphical indication 120 (e.g., where the graphical indication 120 is an image of the scene 107). An analyst can then view the graphical indication 120 and see (at a glance) which tracks were made by, for example, a passenger vehicle with the first track width and which were made by a commercial or cargo vehicle with the second track width).

Chow fails to explicitly teach generate two deformed images, based on an observation angle of each of the two observed images. However, HU explicitly teaches generate two deformed images, based on an observation angle of each of the two observed images (Fig. 1. Paragraph [0003]-HU discloses InSAR technology processes two SAR images of the same area at different times to obtain a one-dimensional average deformation result. GNSS technology uses a ground receiver to obtain a time-continuous three-dimensional coordinate sequence.
Further in paragraph [0085]-HU discloses the prior variances of InSAR and GNSS observations are used to determine the weights, and the three-dimensional surface deformation is solved with the least square method. Additionally in paragraph [0084]-HU discloses the simulation data description includes (1) simulating the three-dimensional deformation field in east-west, north-south and vertical directions in a certain area (image size 400×450) (as shown in FIG. 2, (a)-(c)); (2) combining imaging geometry of sentinel-1A/B satellite data to calculate the ascending and descending InSAR deformation results, wherein the incident angle and the azimuth angle of the ascending orbit data are 39.3° and −12.2°, respectively; the incident angle and the azimuth angle of the descending orbit data are 33.9° and −167.8°, respectively (wherein α_i^k and θ_i^k are respectively an azimuth angle and an incident angle of a satellite when acquiring the SAR data). Please also read paragraphs [0057] and [0059]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW of having an image processing device comprising: a memory storing software instructions, and one or more processors configured to execute the software instructions to deform object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images, with the teachings of HU of having generate two deformed images, based on an observation angle of each of the two observed images. That is, CHOW’s image processing device would be modified to generate the two deformed images based on an observation angle of each of the two observed images. The motivation behind the modification would have been to obtain an image processing method that improves the robustness and accuracy of data, since both the CHOW and HU systems analyze SAR data.
Wherein CHOW’s system improves the robustness of data, while HU provides a weighting algorithm for fusing InSAR and GNSS that increases the accuracy and spatial resolution of data used to monitor three-dimensional surface deformation. Please see CHOW (US 10032077 B1), Abstract and Column [03], Line [22-35] and HU et al. (WO 2020233591 A1, corresponding to US 20210011149 A1), Paragraph [0003-0004].

Regarding claim 6, CHOW explicitly teaches an image processing method (Fig. 1, #100 called a SAR system. Column [03], Line [53-55]. Further at Column [03], Line [04-06]-CHOW discloses the disclosure is directed to systems and methods for identifying vehicle tracks in synthetic aperture radar (SAR) coherent change detection (CCD) images), implemented by a processor (Fig. 8, #802 called processors. Column [03], Line [53-55]), comprising: deforming object presence areas in two object presence images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36]. Further at Column [04], Line [29-36]-CHOW discloses the radar image processing system 118 comprises a SAR imaging module 202 that generates SAR images 204-206 of the scene 107 based upon the SAR data 108. These SAR images 204-206 comprise views of the same scene 107 taken at different times), in which one or more objects are present (Fig. 1. Column [03], Line [60-65]-CHOW discloses the radar image processing system 118 identifies vehicle tracks in the CCD image), obtained from each of two observed images to generate two deformed images (Fig. 3.
Column [04], Line [37-53]-CHOW discloses the CCD module 208 processes a first SAR image 204 and a second SAR image 205 to generate a first CCD image 210, and processes a third SAR image 206 and the second SAR image 205 to generate a second CCD image 211, where the second CCD image 211 is temporally disjoint from the first CCD image 210 (wherein the third image is generated through joint pre-processing of the first and second temporally disjoint images through techniques such as principal component analysis (PCA), independent component analysis (ICA), and computation of the normalized coherence product (NCP)). Additionally, Column [05], Line [52-67]-CHOW discloses the track identification module 216 comprises a segmentation module 302 that segments the input CCD image 214 into a plurality of CCD image chips 304 (wherein the chips may be of equal size or different size and/or spatial resolution)), based on a size of the object appearing in each of the two observed images (Fig. 7. Column [09], Line [22-41]-CHOW discloses the selected size of the chips will depend on the resolution of the initial CCD image, and is selected to mitigate the generation of artifacts in later stages of the methodology, while capturing large enough portions of the original CCD image to be able to identify features of interest. Inverse Radon transforms of each of the Radon transforms of the CCD image chips are calculated. At 712, morphological erosion may be applied in order to reduce line artifacts resulting from the inverse Radon transform process (wherein the office considers morphological operations to be a form of image deformation). Please also read Column [08], Line [18-43] and Column [09], Line [01-20]), and generating a synthesized image by synthesizing the two deformed images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36].
Further at Column [07], Line [18-24]-CHOW discloses the image reconstruction module 310 then receives the plurality of Radon transforms 308 and performs an inverse Radon transform to each of the Radon transforms 308, generating new image chips and stitching them together to construct a final track detection image of the scene 107 depicted in the original SAR images 204-206, with identified tracks indicated in the track detection image. Please also see Fig. 4 and read Column [04], Line [37-53]), determining difference of the object between the two object presence images (Fig. 7. Column [05], Line [24-51]-CHOW discloses the track identification module 216, upon identifying vehicle tracks in the CCD image 214 (and thus the scene 107), assigns a classification to the tracks (e.g., according to various characteristics) and signifies this classification in the graphical indication 120. The classification can indicate that the vehicle tracks are of a first width (e.g., rather than a second width). In another example, the classification of vehicle tracks can indicate that the vehicle tracks are of a second width (e.g., rather than the first width). Please also read Column [04], Line [54-68]), and generating an image capable of identifying the determined difference (Fig. 7. Column [07], Line [18-47]-CHOW discloses the track identification module 216 can signify this classification by assigning a first color to tracks having the first width and assigning a second color to tracks having the second width in the graphical indication 120 (e.g., where the graphical indication 120 is an image of the scene 107). An analyst can then view the graphical indication 120 and see (at a glance) which tracks were made by, for example, a passenger vehicle with the first track width and which were made by a commercial or cargo vehicle with the second track width).

Chow fails to explicitly teach generate two deformed images, based on an observation angle of each of the two observed images.
However, HU explicitly teaches generate two deformed images, based on an observation angle of each of the two observed images (Fig. 1. Paragraph [0003]-HU discloses InSAR technology processes two SAR images of the same area at different times to obtain a one-dimensional average deformation result. GNSS technology uses a ground receiver to obtain a time-continuous three-dimensional coordinate sequence. Further in paragraph [0085]-HU discloses the prior variances of InSAR and GNSS observations are used to determine the weights, and the three-dimensional surface deformation is solved with the least square method. Additionally in paragraph [0084]-HU discloses the simulation data description includes (1) simulating the three-dimensional deformation field in east-west, north-south and vertical directions in a certain area (image size 400×450) (as shown in FIG. 2, (a)-(c)); (2) combining imaging geometry of sentinel-1A/B satellite data to calculate the ascending and descending InSAR deformation results, wherein the incident angle and the azimuth angle of the ascending orbit data are 39.3° and −12.2°, respectively; the incident angle and the azimuth angle of the descending orbit data are 33.9° and −167.8°, respectively (wherein α_i^k and θ_i^k are respectively an azimuth angle and an incident angle of a satellite when acquiring the SAR data). Please also read paragraphs [0057] and [0059]).
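HU's fusion step — weighting InSAR and GNSS observations by their prior variances and solving for the three-dimensional deformation by least squares — reduces to ordinary weighted least squares. A toy sketch under stated assumptions: the line-of-sight projection in `los_row` is one common convention (conventions differ by author), and the variances and deformation values are synthetic; only the incidence/azimuth angle pairs (39.3°/−12.2° ascending, 33.9°/−167.8° descending) come from HU's example:

```python
import numpy as np

def solve_3d_deformation(obs, proj, var):
    """Weighted least squares for [east, north, up] surface deformation.

    obs  : observed displacements (InSAR LOS values, GNSS components)
    proj : one projection row per observation mapping (e, n, u) -> obs
    var  : prior variance of each observation; weights are 1/variance
    """
    A = np.asarray(proj, dtype=float)
    y = np.asarray(obs, dtype=float)
    W = np.diag(1.0 / np.asarray(var, dtype=float))
    # x = (A^T W A)^{-1} A^T W y
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

def los_row(theta_deg, alpha_deg):
    # Project (east, north, up) onto a radar line of sight given the
    # incidence angle theta and heading alpha (one common convention).
    t, a = np.radians(theta_deg), np.radians(alpha_deg)
    return [-np.sin(t) * np.cos(a), np.sin(t) * np.sin(a), np.cos(t)]

# Ascending and descending LOS rows (angles from HU's simulation) plus
# a GNSS up component; recover a synthetic deformation vector exactly.
A = [los_row(39.3, -12.2), los_row(33.9, -167.8), [0.0, 0.0, 1.0]]
truth = np.array([0.01, -0.02, 0.05])   # metres, synthetic
y = [np.array(row) @ truth for row in A]
x = solve_3d_deformation(y, A, var=[1e-4, 1e-4, 1e-6])
```

With noiseless observations and three independent projection rows the solve is exact, so `x` matches `truth`; with real data the variances control how strongly each sensor pulls the solution.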
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW of having an image processing method, implemented by a processor, comprising: deforming object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images, based on a size of the object appearing in each of the two observed images, and generating a synthesized image by synthesizing the two deformed images, with the teachings of HU of having generate two deformed images, based on an observation angle of each of the two observed images. That is, CHOW’s image processing method would be modified to generate the two deformed images based on an observation angle of each of the two observed images. The motivation behind the modification would have been to obtain an image processing method that improves the robustness and accuracy of data, since both the CHOW and HU systems analyze SAR data. Wherein CHOW’s system improves the robustness of data, while HU provides a weighting algorithm for fusing InSAR and GNSS that increases the accuracy and spatial resolution of data used to monitor three-dimensional surface deformation. Please see CHOW (US 10032077 B1), Abstract and Column [03], Line [22-35] and HU et al. (WO 2020233591 A1, corresponding to US 20210011149 A1), Paragraph [0003-0004].

Regarding claim 10, CHOW explicitly teaches a non-transitory computer readable recording medium (Fig. 8, #804 called memory. Column [03], Line [53-55]. Please also see Column [10], Line [39-65]) storing an image processing program (Fig. 1, #100 called a SAR system. Column [03], Line [53-55]. Further at Column [03], Line [04-06]-CHOW discloses the disclosure is directed to systems and methods for identifying vehicle tracks in synthetic aperture radar (SAR) coherent change detection (CCD) images) which, when executed by a processor (Fig.
8, #802 called processors. Column [03], Line [53-55]), performs: deforming object presence areas in two object presence images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36]. Further at Column [04], Line [29-36]-CHOW discloses the radar image processing system 118 comprises a SAR imaging module 202 that generates SAR images 204-206 of the scene 107 based upon the SAR data 108. These SAR images 204-206 comprise views of the same scene 107 taken at different times), in which one or more objects are present (Fig. 1. Column [03], Line [60-65]-CHOW discloses the radar image processing system 118 identifies vehicle tracks in the CCD image), obtained from each of two observed images to generate two deformed images (Fig. 3. Column [04], Line [37-53]-CHOW discloses the CCD module 208 processes a first SAR image 204 and a second SAR image 205 to generate a first CCD image 210, and processes a third SAR image 206 and the second SAR image 205 to generate a second CCD image 211, where the second CCD image 211 is temporally disjoint from the first CCD image 210 (wherein the third image is generated through joint pre-processing of the first and second temporally disjoint images through techniques such as principal component analysis (PCA), independent component analysis (ICA), computation of the normalized coherence product (NCP)). Additionally, Column [05], Line [52-67]-CHOW discloses the track identification module 216 comprises a segmentation module 302 that segments the input CCD image 214 into a plurality of CCD image chips 304 (wherein the chips may be of equal size or different size and/or spatial resolution)), based on a size of the object appearing in each of the two observed images (Fig. 7. 
Column [09], Line [22-41]-CHOW discloses the selected size of the chips will depend on the resolution of the initial CCD image, and is selected to mitigate the generation of artifacts in later stages of the methodology, while capturing large enough portions of the original CCD image to be able to identify features of interest. Inverse Radon transforms of each of the Radon transforms of the CCD image chips are calculated. At 712, morphological erosion may be applied in order to reduce line artifacts resulting from the inverse Radon transform process (wherein the office considers morphological operations to be a form of image deformation). Please also read Column [08], Line [18-43] and Column [09], Line [01-20]), and generating a synthesized image (Fig. 3, #214 called CCD image. Column [05], Line [52-67]) by synthesizing the two deformed images (Fig. 3, #204, #206, called CCD IMG. Column [04], Line [29-36]. Further at Column [07], Line [18-24]-CHOW discloses the image reconstruction module 310 then receives the plurality of Radon transforms 308 and performs an inverse Radon transform to each of the Radon transforms 308, generating new image chips and stitching them together to construct a final track detection image of the scene 107 depicted in the original SAR images 204-206, with identified tracks indicated in the track detection image. Please also see Fig. 4 and read Column [04], Line [37-53]), determining difference of the object between the two object presence images (Fig. 7. Column [05], Line [24-51]-CHOW discloses the track identification module 216, upon identifying vehicle tracks in the CCD image 214 (and thus the scene 107), assigns a classification to the tracks (e.g., according to various characteristics) and signifies this classification in the graphical indication 120. The classification can indicate that the vehicle tracks are of a first width (e.g., rather than a second width).
In another example, the classification of vehicle tracks can indicate that the vehicle tracks are of a second width (e.g., rather than the first width). Please also read Column [04], Line [54-68]), and generating an image capable of identifying the determined difference (Fig. 7. Column [07], Line [18-47]-CHOW discloses the track identification module 216 can signify this classification by assigning a first color to tracks having the first width and assigning a second color to tracks having the second width in the graphical indication 120 (e.g., where the graphical indication 120 is an image of the scene 107). An analyst can then view the graphical indication 120 and see (at a glance) which tracks were made by, for example, a passenger vehicle with the first track width and which were made by a commercial or cargo vehicle with the second track width).

Chow fails to explicitly teach generate two deformed images, based on an observation angle of each of the two observed images. However, HU explicitly teaches generate two deformed images, based on an observation angle of each of the two observed images (Fig. 1. Paragraph [0003]-HU discloses InSAR technology processes two SAR images of the same area at different times to obtain a one-dimensional average deformation result. GNSS technology uses a ground receiver to obtain a time-continuous three-dimensional coordinate sequence. Further in paragraph [0085]-HU discloses the prior variances of InSAR and GNSS observations are used to determine the weights, and the three-dimensional surface deformation is solved with the least square method. Additionally in paragraph [0084]-HU discloses the simulation data description includes (1) simulating the three-dimensional deformation field in east-west, north-south and vertical directions in a certain area (image size 400×450) (as shown in FIG.
2, (a)-(c)); (2) combining imaging geometry of sentinel-1A/B satellite data to calculate the ascending and descending InSAR deformation results, wherein the incident angle and the azimuth angle of the ascending orbit data are 39.3° and −12.2°, respectively; the incident angle and the azimuth angle of the descending orbit data are 33.9° and −167.8°, respectively (wherein α_i^k and θ_i^k are respectively an azimuth angle and an incident angle of a satellite when acquiring the SAR data). Please also read paragraphs [0057] and [0059]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW of having a non-transitory computer readable recording medium storing an image processing program which, when executed by a processor, performs: deforming object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images, with the teachings of HU of having generate two deformed images, based on an observation angle of each of the two observed images. That is, CHOW’s recording medium would be modified so that the program generates the two deformed images based on an observation angle of each of the two observed images. The motivation behind the modification would have been to obtain an image processing device that improves the robustness and accuracy of data, since both the CHOW and HU systems analyze SAR data. Wherein CHOW’s system improves the robustness of data, while HU provides a weighting algorithm for fusing InSAR and GNSS that increases the accuracy and spatial resolution of data used to monitor three-dimensional surface deformation. Please see CHOW (US 10032077 B1), Abstract and Column [03], Line [22-35] and HU et al. (WO 2020233591 A1, corresponding to US 20210011149 A1), Paragraph [0003-0004].

Claims 2, 7 and 11 are rejected under 35 U.S.C.
103 as being unpatentable over CHOW (US 10032077 B1), hereinafter referenced as CHOW, in view of HU et al. (US 20210011149 A1), hereinafter referenced as HU, and further in view of Ajadi et al., "Change Detection in Synthetic Aperture Radar Images Using a Multiscale-Driven Approach," Remote Sensing 8(6):482, 2016, https://doi.org/10.3390/rs8060482, hereinafter referenced as AJADI.

Regarding claim 2, CHOW in view of HU explicitly teach the image processing device according to claim 1. CHOW in view of HU fail to explicitly teach wherein the one or more processors are configured to execute the software instructions to dilate the object presence area by a predetermined amount in each of the two object presence images. However, AJADI explicitly teaches wherein the one or more processors are configured to execute the software instructions to dilate the object presence area by a predetermined amount (Fig. 1. Page 9, 2nd Paragraph-AJADI discloses morphological filters are defined by a structuring element (S), which is based on a moving window of a given size and shape centered on a pixel XLR (wherein the optimal shape and size was found to be a fixed square shape of 20 × 20 pixels). The two morphological filters used are opening and closing, and they are a concatenation of erosion and dilation (wherein closing by reconstruction is dilation followed by a series of erosions). Please also read Page 9, 2nd through 5th Paragraphs) in each of the two object presence images (Fig. 1, called Xn SAR images, Xi images and Xr reference images. Page 1, 2nd Paragraph. Please also read Page 5, 2nd Paragraph and Page 13, 2nd Paragraph).
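The dilation AJADI relies on can be sketched as follows. This is a minimal illustration assuming a binary object-presence mask; the function name and blob sizes are hypothetical, and for simplicity it uses an odd-sized (2r+1)-pixel square window rather than AJADI's exact 20 × 20 element.

```python
import numpy as np

def dilate(mask: np.ndarray, size: int = 20) -> np.ndarray:
    """Binary dilation with a square structuring element: a pixel is set
    in the output if any pixel of the input mask falls under the window
    centered on it."""
    h, w = mask.shape
    r = size // 2
    # Pad with False so the window never leaves the image.
    padded = np.pad(mask, r, mode="constant")
    out = np.zeros_like(mask, dtype=bool)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 2 * r + 1, x:x + 2 * r + 1].any()
    return out

# A single detected pixel in a 40 x 40 object-presence image...
mask_a = np.zeros((40, 40), dtype=bool)
mask_a[20, 20] = True
dilated = dilate(mask_a, size=20)
print(dilated.sum())  # 441: the pixel grows to a 21 x 21 block
```

In the claimed arrangement the same operation would be applied to the object presence area in each of the two object presence images.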
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW in view of HU (an image processing device comprising a memory storing software instructions and one or more processors configured to execute the software instructions to deform object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images) with the teachings of AJADI (wherein the one or more processors are configured to execute the software instructions to dilate the object presence area by a predetermined amount), such that CHOW's image processing device dilates the object presence area by a predetermined amount. The motivation for the modification would have been to obtain an image processing device that improves performance and data robustness across a wide range of spatial scales, since both the CHOW and AJADI systems analyze SAR data: CHOW's system improves the robustness of data, while AJADI provides automatic, high-performance change detection across a wide range of spatial scales (resolution levels). Please see CHOW (US 10032077 B1), Abstract and Column [03], Lines [22-35], and AJADI, Abstract and Page 25, 2nd through 3rd Paragraphs.

Regarding claim 7, CHOW in view of HU explicitly teach the image processing method, implemented by a processor, according to claim 6. CHOW in view of HU fail to explicitly teach wherein the object presence area is dilated by a predetermined amount in each of the two object presence images.
However, AJADI explicitly teaches wherein the object presence area is dilated by a predetermined amount (Fig. 1. Page 9, 2nd Paragraph-AJADI discloses morphological filters are defined by a structuring element (S), which is based on a moving window of a given size and shape centered on a pixel XLR (wherein the optimal shape and size was found to be a fixed square shape of 20 × 20 pixels). The two morphological filters used are opening and closing, and they are a concatenation of erosion and dilation (wherein closing by reconstruction is dilation followed by a series of erosions). Please also read Page 9, 2nd through 5th Paragraphs) in each of the two object presence images (Fig. 1, called Xn SAR images, Xi images and Xr reference images. Page 1, 2nd Paragraph. Please also read Page 5, 2nd Paragraph and Page 13, 2nd Paragraph).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW in view of HU (an image processing method, implemented by a processor, comprising deforming object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images) with the teachings of AJADI (wherein the object presence area is dilated by a predetermined amount in each of the two object presence images), such that CHOW's image processing method dilates the object presence area by a predetermined amount in each of the two object presence images. The motivation for the modification would have been to obtain an image processing device that improves performance and data robustness across a wide range of spatial scales, since both the CHOW and AJADI systems analyze SAR data.
CHOW's system improves the robustness of data, while AJADI provides automatic, high-performance change detection across a wide range of spatial scales (resolution levels). Please see CHOW (US 10032077 B1), Abstract and Column [03], Lines [22-35], and AJADI, Abstract and Page 25, 2nd through 3rd Paragraphs.

Regarding claim 11, CHOW in view of HU explicitly teach the non-transitory computer readable recording medium according to claim 10. CHOW in view of HU fail to explicitly teach wherein the image processing program performs dilating the object presence area by a predetermined amount in each of the two object presence images. However, AJADI explicitly teaches wherein the image processing program performs dilating the object presence area by a predetermined amount (Fig. 1. Page 9, 2nd Paragraph-AJADI discloses morphological filters are defined by a structuring element (S), which is based on a moving window of a given size and shape centered on a pixel XLR (wherein the optimal shape and size was found to be a fixed square shape of 20 × 20 pixels). The two morphological filters used are opening and closing, and they are a concatenation of erosion and dilation (wherein closing by reconstruction is dilation followed by a series of erosions). Please also read Page 9, 2nd through 5th Paragraphs) in each of the two object presence images (Fig. 1, called Xn SAR images, Xi images and Xr reference images. Page 1, 2nd Paragraph. Please also read Page 5, 2nd Paragraph and Page 13, 2nd Paragraph).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW in view of HU (a non-transitory computer readable recording medium storing an image processing program which, when executed by a processor, performs deforming object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images) with the teachings of AJADI (wherein the image processing program performs dilating the object presence area by a predetermined amount in each of the two object presence images), such that CHOW's image processing program dilates the object presence area by a predetermined amount in each of the two object presence images. The motivation for the modification would have been to obtain an image processing device that improves performance and data robustness across a wide range of spatial scales, since both the CHOW and AJADI systems analyze SAR data: CHOW's system improves the robustness of data, while AJADI provides automatic, high-performance change detection across a wide range of spatial scales (resolution levels). Please see CHOW (US 10032077 B1), Abstract and Column [03], Lines [22-35], and AJADI, Abstract and Page 25, 2nd through 3rd Paragraphs.

Claims 5, 9 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over CHOW (US 10032077 B1), hereinafter referenced as CHOW, in view of HU et al. (US 20210011149 A1), hereinafter referenced as HU, and further in view of BERNAL et al. (US 20150131851 A1), hereinafter referenced as BERNAL.
Regarding claim 5, CHOW in view of HU explicitly teach the image processing device according to claim 1. CHOW in view of HU fail to explicitly teach wherein the one or more processors are configured to further execute the software instructions to eliminate areas whose sizes are smaller than a predetermined value determined based on a width of the object. However, BERNAL explicitly teaches wherein the one or more processors are configured to further execute the software instructions to eliminate areas whose sizes are smaller than a predetermined value determined based on a width of the object (Fig. 2. Paragraph [0044]-BERNAL discloses the size and orientation determination unit 118 (which is aware of the predominant size and orientation of an object 140 as a function of location) creates the required structuring elements 164 for the morphological operations related to the computation of the foreground/motion binary mask, e.g., 404, 410, 416. The morphological operations perform hole-filling in masks that result from the initial thresholding operation, as well as removal of identified objects with sizes and/or orientations outside a pre-specified range depending on the object location. A structuring element 164 of a given width and height can be used as an erosion or opening element on a binary mask 142 containing identified foreground or moving objects, so that objects with width or height smaller than those of the structuring element 164 will be eliminated from the mask 142).
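The erosion/opening behavior BERNAL describes, in which objects smaller than the structuring element disappear from the mask, can be sketched as follows. This is a minimal numpy sketch assuming a binary foreground mask; the 5 × 5 element, the blob sizes, and the function names are illustrative only and are not taken from BERNAL.

```python
import numpy as np

def erode(mask, eh, ew):
    """Binary erosion with an eh x ew rectangular structuring element:
    a pixel survives only if the whole element fits inside the mask."""
    h, w = mask.shape
    out = np.zeros_like(mask, dtype=bool)
    for y in range(h - eh + 1):
        for x in range(w - ew + 1):
            if mask[y:y + eh, x:x + ew].all():
                out[y + eh // 2, x + ew // 2] = True
    return out

def dilate(mask, eh, ew):
    """Binary dilation with the same element, used here to restore the
    surviving objects to roughly their original extent."""
    out = np.zeros_like(mask, dtype=bool)
    for y, x in zip(*np.nonzero(mask)):
        y0, x0 = max(0, y - eh // 2), max(0, x - ew // 2)
        out[y0:y + eh - eh // 2, x0:x + ew - ew // 2] = True
    return out

def open_mask(mask, eh, ew):
    """Opening = erosion followed by dilation: objects narrower or
    shorter than the structuring element are eliminated."""
    return dilate(erode(mask, eh, ew), eh, ew)

mask = np.zeros((30, 30), dtype=bool)
mask[2:4, 2:4] = True      # 2x2 blob: smaller than the element, removed
mask[10:20, 10:20] = True  # 10x10 blob: survives the opening
opened = open_mask(mask, 5, 5)
```

After the opening, `opened` keeps only the 10 × 10 blob, which is the "eliminate areas whose sizes are smaller than a predetermined value" behavior the rejection maps onto BERNAL's structuring element 164.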
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW in view of HU (an image processing device comprising a memory storing software instructions and one or more processors configured to execute the software instructions to deform object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images) with the teachings of BERNAL (wherein the one or more processors are configured to further execute the software instructions to eliminate areas whose sizes are smaller than a predetermined value determined based on a width of the object), such that CHOW's image processing device eliminates areas whose sizes are smaller than a predetermined value determined based on a width of the object. The motivation for the modification would have been to obtain an image processing device that improves efficient object tracking and data robustness, since both the CHOW and BERNAL systems are used for the analysis of image data: CHOW's system improves the robustness of data, while BERNAL's system achieves robust and computationally efficient tracking. Please see CHOW (US 10032077 B1), Abstract and Column [03], Lines [22-35], and BERNAL et al. (US 20150131851 A1), Abstract and Paragraph [0035].

Regarding claim 9, CHOW in view of HU explicitly teach the image processing method, implemented by a processor, according to claim 6. CHOW in view of HU fail to explicitly teach further comprising eliminating areas whose sizes are smaller than a predetermined value determined based on a width of the object.
However, BERNAL explicitly teaches further comprising eliminating areas whose sizes are smaller than a predetermined value determined based on a width of the object (Fig. 2. Paragraph [0044]-BERNAL discloses the size and orientation determination unit 118 (which is aware of the predominant size and orientation of an object 140 as a function of location) creates the required structuring elements 164 for the morphological operations related to the computation of the foreground/motion binary mask, e.g., 404, 410, 416. The morphological operations perform hole-filling in masks that result from the initial thresholding operation, as well as removal of identified objects with sizes and/or orientations outside a pre-specified range depending on the object location. A structuring element 164 of a given width and height can be used as an erosion or opening element on a binary mask 142 containing identified foreground or moving objects, so that objects with width or height smaller than those of the structuring element 164 will be eliminated from the mask 142).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW in view of HU (an image processing method, implemented by a processor, comprising: deforming object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images, based on an observation angle of each of the two observed images and a size of the object appearing in each of the two observed images; generating a synthesized image by synthesizing the two deformed images; determining difference of the object between the two object presence images; and generating an image capable of identifying the determined difference) with the teachings of BERNAL (further comprising eliminating areas whose sizes are smaller than a predetermined value determined based on a width of the object), such that CHOW's image processing method eliminates areas whose sizes are smaller than a predetermined value determined based on a width of the object. The motivation for the modification would have been to obtain an image processing device that improves efficient object tracking and data robustness, since both the CHOW and BERNAL systems are used for the analysis of image data: CHOW's system improves the robustness of data, while BERNAL's system achieves robust and computationally efficient tracking. Please see CHOW (US 10032077 B1), Abstract and Column [03], Lines [22-35], and BERNAL et al. (US 20150131851 A1), Abstract and Paragraph [0035].

Regarding claim 13, CHOW in view of HU explicitly teach the recording medium according to claim 10. CHOW in view of HU fail to explicitly teach wherein the image processing program performs eliminating areas whose sizes are smaller than a predetermined value determined based on a width of the object.
However, BERNAL explicitly teaches wherein the image processing program performs eliminating areas whose sizes are smaller than a predetermined value determined based on a width of the object (Fig. 2. Paragraph [0044]-BERNAL discloses the size and orientation determination unit 118 (which is aware of the predominant size and orientation of an object 140 as a function of location) creates the required structuring elements 164 for the morphological operations related to the computation of the foreground/motion binary mask, e.g., 404, 410, 416. The morphological operations perform hole-filling in masks that result from the initial thresholding operation, as well as removal of identified objects with sizes and/or orientations outside a pre-specified range depending on the object location. A structuring element 164 of a given width and height can be used as an erosion or opening element on a binary mask 142 containing identified foreground or moving objects, so that objects with width or height smaller than those of the structuring element 164 will be eliminated from the mask 142).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOW in view of HU (a non-transitory computer readable recording medium storing an image processing program which, when executed by a processor, performs deforming object presence areas in two object presence images, in which one or more objects are present, obtained from each of two observed images to generate two deformed images) with the teachings of BERNAL (wherein the image processing program performs eliminating areas whose sizes are smaller than a predetermined value determined based on a width of the object).
That is, CHOW's image processing program would perform eliminating areas whose sizes are smaller than a predetermined value determined based on a width of the object. The motivation for the modification would have been to obtain an image processing device that improves efficient object tracking and data robustness, since both the CHOW and BERNAL systems are used for the analysis of image data: CHOW's system improves the robustness of data, while BERNAL's system achieves robust and computationally efficient tracking. Please see CHOW (US 10032077 B1), Abstract and Column [03], Lines [22-35], and BERNAL et al. (US 20150131851 A1), Abstract and Paragraph [0035].

Claims 3, 8 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over CHOW (US 10032077 B1), hereinafter referenced as CHOW, in view of HU et al. (US 20210011149 A1), hereinafter referenced as HU, further in view of Ajadi et al., "Change Detection in Synthetic Aperture Radar Images Using a Multiscale-Driven Approach," Remote Sensing 8(6):482, 2016, https://doi.org/10.3390/rs8060482, hereinafter referenced as AJADI, and further in view of HAINLINE et al. (US 10553020 B1), hereinafter referenced as HAINLINE.

Regarding claim 3, CHOW in view of HU and further in view of AJADI explicitly teach the image processing device according to claim 2. CHOW in view of HU and further in view of AJADI fail to explicitly teach wherein the one or more processors are configured to execute the software instructions to dilate the object presence area

Prosecution Timeline

Jan 12, 2023: Application Filed
Feb 07, 2025: Non-Final Rejection — §103
Aug 12, 2025: Response Filed
Oct 04, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555249: METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR SUPPORTING VIRTUAL GOLF SIMULATION
2y 5m to grant · Granted Feb 17, 2026
Patent 12548171: INFORMATION PROCESSING APPARATUS, METHOD AND MEDIUM
2y 5m to grant · Granted Feb 10, 2026
Patent 12541822: METHOD AND APPARATUS OF PROCESSING IMAGE, COMPUTING DEVICE, AND MEDIUM
2y 5m to grant · Granted Feb 03, 2026
Patent 12505503: IMAGE ENHANCEMENT
2y 5m to grant · Granted Dec 23, 2025
Patent 12482106: METHOD AND ELECTRONIC DEVICE FOR SEGMENTING OBJECTS IN SCENE
2y 5m to grant · Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+33.3%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
