Prosecution Insights
Last updated: April 19, 2026
Application No. 18/331,416

METHOD AND ELECTRONIC SYSTEM FOR IMAGE ALIGNMENT

Final Rejection §103
Filed: Jun 08, 2023
Examiner: SATCHER, DION JOHN
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: MediaTek Inc.
OA Round: 2 (Final)

Grant Probability: 85% (Favorable)
OA Rounds: 3–4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (33 granted / 39 resolved), +22.6% vs Tech Center avg (above average)
Interview Lift: +14.2% (moderate), among resolved cases with interview
Avg Prosecution: 3y 0m
Currently Pending: 29
Total Applications: 68 (across all art units)

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 39 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant's Amendments filed on 11/19/2025 have been entered and made of record.

Currently pending claim(s): 1–4, 6–18 and 20
Independent claim(s): 1 and 17
Amended claim(s): 1, 6 and 17
Cancelled claim(s): 5 and 19

Response to Applicant's Arguments

This Office action is responsive to Applicant's Arguments/Remarks Made in an Amendment received on 11/19/2025. In view of Applicant's Arguments/Remarks and the amendment filed on 11/19/2025 with respect to independent claims 1 and 17 under 35 U.S.C. § 103, the claim rejection has been fully considered and the arguments are found to be not persuasive (see pages 8–10); therefore the claim rejection under 35 U.S.C. § 103 still applies.

Applicant argues, in summary, that the applied prior art (Miller and Ono) does not disclose or suggest (see pages 9–10): "storing the first feature correspondence between the first image and the second image into a warping map."

The Examiner incorporates one of the cited prior art references, Han (NPL, "An Approach to Fine Coregistration Between Very High Resolution Multispectral Images Based on Registration Noise Distribution"), to teach storing into a warping map (e.g., a data structure). Han (see [Pg. 6651, Col. 2, ln. 1–3]) teaches: "The matching is conducted at a local level after estimating the amount of displacement. A deformation map is generated by interpolation using matched CP pairs." The deformation map is the warping map data structure for warping the images. Han takes two images, calculates the CPs (distinct points), then creates the deformation map based on the deformations of the CPs, which the Examiner interprets as the feature correspondence.
Miller (see ¶ [0051]) teaches: "The two scans are registered using the image corresponding to the blue spectral band (i.e., the shared spectral band)", i.e., having the first feature correspondence between the first and second image. Therefore, with this broad interpretation, Miller and Ono in combination with Han teach, disclose or suggest the Applicant's invention: calculating a first feature correspondence between a first and second image and performing image alignment on the third and fourth image based on the feature correspondence, with the feature correspondence stored in a warping map. Thus, due to the Applicant's broad claim language, Applicant's invention is not far removed from the art of record. Accordingly, these limitations do not render the claims patentably distinct over the prior art of record. As a result, it is respectfully submitted that the present application is not in condition for allowance.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim(s) 1–4, 6 and 10–14 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 20140193061 A1, hereafter "Miller") in view of Ono (US 20190273862 A1, hereafter "Ono"), in further view of Han et al. (see NPL attached, "An Approach to Fine Coregistration Between Very High Resolution Multispectral Images Based on Registration Noise Distribution", hereafter "Han").

Regarding claim 1, Miller teaches a method for image alignment (see Miller, [Abstract]: aligning the first and second pluralities of images based on information from a first image from the first plurality of images and a second image from the second plurality of images), comprising:

[receiving a first image with a first property from a first sensor; receiving a second image with a second property from a second sensor, wherein the first property is similar to the second property; calculating a first feature correspondence between the first image and the second image];

receiving a third image with a third property from the first sensor and a fourth image with a fourth property from the second image sensor, wherein the third property is different from the fourth property (see Miller, ¶ [0038]: "Thus, for example, the first set of images can include an image corresponding to emission from DAPI (in a first wavelength band) and also, e.g., two or three other images corresponding to emission from the sample in two or three other wavelength bands. Similarly, the second set of images can include an image corresponding to emission from DAPI in the first wavelength band, and also, e.g., two or three other images corresponding to sample emission in two or three other wavelength bands." Note: the plurality of other images is being interpreted as the third and fourth image);

performing image alignment on the third image and the fourth image based on the first feature correspondence between the first image and the second image (see Miller, ¶ [0040]: "To correct this problem, the images from the two scans can be aligned to a common registration using the images corresponding to the wavelength band that is shared among the first and second scans (e.g., the image that corresponds to emission from DAPI in the example above). The same shift or image transformation that yields the best alignment in this shared band is applied to all images in the scan, after which the two scans can be combined into an image cube." Note: the Examiner is interpreting the best alignment of the shared band as the first and second image feature correspondence, and the aligning of the other images as aligning a third and fourth image); and

[storing] the first feature correspondence between the first image and the second image (see Miller, ¶ [0051]: "The two scans are registered using the image corresponding to the blue spectral band (i.e., the shared spectral band)") [into a warping map].

However, Miller fail(s) to teach receiving a first image with a first property from a first sensor; receiving a second image with a second property from a second sensor, wherein the first property is similar to the second property; calculating a first feature correspondence between the first image and the second image; storing into a warping map.
Ono, working in the same field of endeavor, teaches:

receiving a first image with a first property from a first sensor (see Ono, ¶ [0080]: "The optical filter 112 of the first imaging unit 110 is an optical filter that transmits light having a plurality of wavelength ranges, and transmits different wavelength ranges depending on regions. Specifically, as shown in FIG. 10, ¾ of the entire region is a region 112A (single wavelength range optical filter) through which light having a first wavelength range is transmitted at 100%, and ¼ of the entire region is a region 112B (single wavelength range optical filter) through which light having a second wavelength range is transmitted at 100% (it is assumed that the shapes and sizes of the regions 112A and 112B are fixed)");

receiving a second image with a second property from a second sensor, wherein the first property is similar to the second property (see Ono, ¶ [0080]: "The optical filter 122 of the second imaging unit 120 is an optical filter that transmits light having a single wavelength range. As shown in FIG. 10, ¼ of the entire region is a region 122A (single wavelength range optical filter) through which light having the first wavelength range is transmitted at 100%, and ¾ of the entire region is a region 122B (single wavelength range optical filter) through which light having the second wavelength range is transmitted at 100% (it is assumed that the shapes and sizes of the regions 122A and 122B are fixed)." Note: the Examiner is interpreting the first wavelength and second wavelength as the common property); and

calculating a first feature correspondence between the first image and the second image (see Ono, ¶ [0088]: "In Step S130 (correspondence point detection step), feature points are detected by the correspondence point detection unit 210E based on a component of a wavelength range of a plurality of image signals corresponding to a plurality of images common among the images, and correspondence points are detected based on the feature points. As described above, for example, the point of the edge or the corner portion is detected as the feature point of the reference image, and the correspondence point can be detected in another image through matching between the images").

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference to receive a first image with a first property from a first sensor; receive a second image with a second property from a second sensor, wherein the first property is similar to the second property; and calculate a first feature correspondence between the first image and the second image, based on the method of Ono's reference. The suggestion/motivation would have been to register multiple images having different wavelengths with high accuracy (see Ono, ¶ [0005–0007]).

However, Miller and Ono fail(s) to teach storing into a warping map.

Han, working in the same field of endeavor, teaches: storing into a warping map (see Han, [Pg. 6651, Col. 2, ln. 1–3]: "The matching is conducted at a local level after estimating the amount of displacement. A deformation map is generated by interpolation using matched CP pairs").

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference to store into a warping map based on the method of Han's reference.
The suggestion/motivation would have been to improve the registration accuracy (see Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ono and Han with Miller to obtain the invention as specified in claim 1.

Regarding claim 2, Miller teaches the method as claimed in claim 1, wherein the first property represents a first spectrum range of the first image, and the second property represents a second spectrum range of the second image, wherein the second spectrum range is similar to the first spectrum range (see Miller, ¶ [0036]: "To achieve improve scanning speeds and image registration among images corresponding to different wavelength bands, the methods and systems disclosed herein are configured to perform a scan of a sample using M signal bands (e.g., at M different spectral bands), then scan it again using N signal bands, in which the first set of M signal bands and the second set of N signal bands have a spectral band that is shared." ¶ [0040]: "To correct this problem, the images from the two scans can be aligned to a common registration using the images corresponding to the wavelength band that is shared among the first and second scans (e.g., the image that corresponds to emission from DAPI in the example above). The same shift or image transformation that yields the best alignment in this shared band is applied to all images in the scan." Note: the Examiner is interpreting the best alignment as the first and second image, and the rest of the aligned images as the third and fourth; the sets of images have a shared band that is similar across them).
Regarding claim 3, Miller teaches the method as claimed in claim 2, wherein the third property represents a third spectrum range of the third image, and the fourth property represents a fourth spectrum range of the fourth image, wherein the third spectrum range is similar to the first spectrum range and different from the fourth spectrum range (see Miller, ¶ [0038]: "Thus, for example, the first set of images can include an image corresponding to emission from DAPI (in a first wavelength band) and also, e.g., two or three other images corresponding to emission from the sample in two or three other wavelength bands. Similarly, the second set of images can include an image corresponding to emission from DAPI in the first wavelength band, and also, e.g., two or three other images corresponding to sample emission in two or three other wavelength bands." Note: the plurality of images is being interpreted as the third and fourth image, and the other wavelength images that are not the first wavelength are being interpreted as different wavelengths).

Regarding claim 4, Miller teaches the method as claimed in claim 3, wherein the first image and the second image are received earlier than the third image and the fourth image (see Miller, ¶ [0039]: "While the M bands acquired during the first scan are registered among themselves, and the N bands acquired during the second scan are registered among themselves, images of the first scan and images of the second scan are, in general, misaligned due to the limited mechanical repeatability of the scanner." Note: the Examiner is interpreting the first scan as the first and second images, which are earlier, and the second scan as the third and fourth images, which are later).

Regarding claim 6, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 1, [wherein the warping map records a first displacement vector of each pixel between the first image and the second image].
However, Miller and Ono fail(s) to teach wherein the warping map records a first displacement vector of each pixel between the first image and the second image.

Han, working in the same field of endeavor, teaches: wherein the warping map records a first displacement vector of each pixel between the first image and the second image (see Han, [Pg. 5, C. Generation of the Deformation Map and Image Warping]: "Thus, we generate a deformation map DM using CP pairs. The deformation map is represented by a displacement vector associated to every pixel of the master image [29]. The CP pairs extracted by the proposed technique are irregularly scattered in the image. Thus, interpolation is used to estimate the deformation in an appropriate way for the irregular CPs. We apply a natural neighbor interpolation to generate a deformation grid DG, which is a 2-D vector of regularly sampled displacements in the x-direction and y-direction [39]").

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference such that the warping map records a first displacement vector of each pixel between the first image and the second image, based on the method of Han's reference. The suggestion/motivation would have been to improve the registration accuracy (see Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Han with Miller and Ono to obtain the invention as specified in claim 6.
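For orientation only, the deformation-map mechanism Han is cited for (sparse matched control-point pairs interpolated into a dense, per-pixel displacement field) can be sketched as follows. This is the editor's illustrative reconstruction, not code from any cited reference; the function name and the use of SciPy's `griddata` with linear interpolation (rather than Han's natural neighbor interpolation) are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def build_warping_map(cp_src, cp_dst, shape):
    """Interpolate sparse control-point (CP) displacements into a dense
    warping map holding one (dy, dx) displacement vector per pixel."""
    cp_src = np.asarray(cp_src, dtype=float)  # matched CPs in the master image
    cp_dst = np.asarray(cp_dst, dtype=float)  # corresponding CPs in the slave image
    disp = cp_dst - cp_src                    # displacement observed at each CP
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pix = np.stack([ys.ravel(), xs.ravel()], axis=1)
    # Interpolate each displacement component; fill any pixels outside the
    # CP convex hull with the nearest CP's displacement.
    comps = []
    for k in range(2):
        lin = griddata(cp_src, disp[:, k], pix, method="linear")
        near = griddata(cp_src, disp[:, k], pix, method="nearest")
        comps.append(np.where(np.isnan(lin), near, lin))
    return np.stack(comps, axis=1).reshape(shape[0], shape[1], 2)
```

For a pure translation, every entry of the map collapses to the same vector, which makes the data-structure reading of a "warping map" in claim 6 (a per-pixel displacement record) concrete.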
Regarding claim 10, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 6, [wherein the step of calculating the first feature correspondence between the first image and the second image comprises: performing feature extraction on each pixel in the first image and the second image to obtain respective pixel features; and performing feature matching between each pixel in the first image and each pixel in the second image to obtain the first displacement vector].

However, Miller and Ono fail(s) to teach these limitations.

Han, working in the same field of endeavor, teaches: performing feature extraction on each pixel in the first image and the second image to obtain respective pixel features (see Han, [Pg. 3, A. CPs Extraction Based on RN Distribution]: "In a general feature-based matching process for VHR images, CPs are extracted on each image and matched themselves by directly using intensity values of their neighboring pixels or by generating description vectors to estimate similarity"); and performing feature matching between each pixel in the first image and each pixel in the second image to obtain the first displacement vector (see Han, [Pg. 5, C. Generation of the Deformation Map and Image Warping]: "Finally, cubic spline interpolation method is applied to the deformation grid to generate the deformation map. The last step consists in the warping of the slave image to the master one according to the obtained deformation map DM").
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference such that the step of calculating the first feature correspondence between the first image and the second image comprises performing feature extraction on each pixel in the first image and the second image to obtain respective pixel features, and performing feature matching between each pixel in the first image and each pixel in the second image to obtain the first displacement vector, based on the method of Han's reference. The suggestion/motivation would have been to improve the registration accuracy (see Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Han with Miller and Ono to obtain the invention as specified in claim 10.

Regarding claim 11, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 10, [wherein the step of performing feature matching between each pixel in the first image and each pixel in the second image comprises: searching for and recording a position of the pixel features in the first image corresponding to the pixel features with the highest similarity in the second image; and generating the first displacement vector according to the position of the pixel features in the first image corresponding to the pixel features with the highest similarity in the second image].
However, Miller and Ono fail(s) to teach these limitations.

Han, working in the same field of endeavor, teaches: searching for and recording a position of the pixel features in the first image corresponding to the pixel features with the highest similarity in the second image (see Han, [Pg. 3, A. CPs Extraction Based on RN Distribution]: "In a general feature-based matching process for VHR images, CPs are extracted on each image and matched themselves by directly using intensity values of their neighboring pixels or by generating description vectors to estimate similarity"); and generating the first displacement vector according to the position of the pixel features in the first image corresponding to the pixel features with the highest similarity in the second image (see Han, [Pg. 5, C. Generation of the Deformation Map and Image Warping]: "Finally, cubic spline interpolation method is applied to the deformation grid to generate the deformation map. The last step consists in the warping of the slave image to the master one according to the obtained deformation map DM").
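The search-and-record matching that claim 11 recites, and for which Han's CP matching is cited, can be pictured as minimal block matching: for a given pixel, exhaustively search the other image for the most similar patch, and take the offset to that best match as the displacement vector. This is purely the editor's illustration under stated assumptions (a sum-of-squared-differences score, a fixed patch size, and a fixed search radius); it is not drawn from any cited reference.

```python
import numpy as np

def match_pixel(ref, tgt, y, x, patch=3, search=5):
    """Return the (dy, dx) displacement of pixel (y, x) in `ref` by finding
    the most similar patch in `tgt` (lowest sum of squared differences)."""
    h = patch // 2
    template = ref[y - h:y + h + 1, x - h:x + h + 1]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            # Skip candidate windows that fall outside the target image
            if yy - h < 0 or xx - h < 0 or yy + h + 1 > tgt.shape[0] or xx + h + 1 > tgt.shape[1]:
                continue
            cand = tgt[yy - h:yy + h + 1, xx - h:xx + h + 1]
            score = np.sum((template - cand) ** 2)  # lower = more similar
            if score < best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx
```

Recording this best-match offset for many pixels yields exactly the sparse displacements that a deformation map is interpolated from.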
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference to perform the claimed feature matching (searching for and recording the position of the pixel features in the first image corresponding to the pixel features with the highest similarity in the second image, and generating the first displacement vector according to that position), based on the method of Han's reference. The suggestion/motivation would have been to improve the registration accuracy (see Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Han with Miller and Ono to obtain the invention as specified in claim 11.

Regarding claim 12, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 10, [wherein the pixel features comprise brightness, color, and texture]. However, Miller and Han fail(s) to teach wherein the pixel features comprise brightness, color, and texture.
Ono, working in the same field of endeavor, teaches: wherein the pixel features comprise brightness, color, and texture (see Ono, ¶ [0056]: "As a method of correspondence point detection and registration, various known methods (for example, a point of an edge or a corner portion is detected as a feature point of a reference image, a correspondence point is detected in another image through matching between images, and the images are moved, rotated, enlarged, and/or reduced such that the positions of the feature point and the correspondence point coincide with each other)." Note: the edge and corner portions represent changes in intensity/brightness, which covers brightness and color, and the edges show the texture of the image).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference such that the pixel features comprise brightness, color, and texture, based on the method of Ono's reference. The suggestion/motivation would have been to register multiple images having different wavelengths with high accuracy (see Ono, ¶ [0005–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ono with Miller and Han to obtain the invention as specified in claim 12.
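To see what a per-pixel feature vector containing exactly claim 12's three ingredients might look like, one hedged sketch follows. None of the cited references define this function; the channel layout and the use of gradient magnitude as a texture cue are the editor's assumptions for illustration only.

```python
import numpy as np

def pixel_features(rgb):
    """Per-pixel feature stack: brightness, the three color channels, and a
    crude texture cue (local gradient magnitude of the brightness)."""
    rgb = np.asarray(rgb, dtype=float)
    brightness = rgb.mean(axis=2)
    gy, gx = np.gradient(brightness)  # intensity changes mark edges/corners
    texture = np.hypot(gy, gx)
    # Channel order: [brightness, R, G, B, texture] -> shape (H, W, 5)
    return np.dstack([brightness, rgb, texture])
```

Feature vectors of this kind are what the per-pixel matching step would compare when searching for the highest-similarity position.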
Regarding claim 13, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 10, wherein the step of performing image alignment on the third image and the fourth image based on the first feature correspondence between the first image and the second image (see Miller, ¶ [0040]: "To correct this problem, the images from the two scans can be aligned to a common registration using the images corresponding to the wavelength band that is shared among the first and second scans (e.g., the image that corresponds to emission from DAPI in the example above). The same shift or image transformation that yields the best alignment in this shared band is applied to all images in the scan, after which the two scans can be combined into an image cube." Note: the Examiner is interpreting the best alignment of the shared band as the first and second feature correspondence, and the aligning of the other images as aligning a third and fourth image) comprises:

[generating a warping function according to the pixel features in both the first image and the second image with the highest discrimination and the highest similarity]; and

inputting the third image or the fourth image into the warping function to perform image alignment between the third image and the fourth image (see Miller, ¶ [0054]: "More generally, any one or more transformations, including translations, rotations, magnifications, and/or image warping, can be used to register images of the first and second scans to correct for imaging variations between scans").

However, Miller and Ono fail(s) to teach generating a warping function according to the pixel features in both the first image and the second image with the highest discrimination and the highest similarity.

Han, working in the same field of endeavor, teaches: generating a warping function according to the pixel features in both the first image and the second image with the highest discrimination and the highest similarity (see Han, [Pg. 1, I. INTRODUCTION]: "Most of the coregistration procedures between multitemporal images consist of four steps. First, control points (CPs), which are the objects that correspond to distinctive and representative points of the investigated scene, are extracted from each image independently."; [A. CPs Extraction Based on RN Distribution]: "In a general feature-based matching process for VHR images, CPs are extracted on each image and matched themselves by directly using intensity values of their neighboring pixels or by generating description vectors to estimate similarity." Note: the CPs represent distinct/unique points in the image).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference to generate a warping function according to the pixel features in both the first image and the second image with the highest discrimination and the highest similarity, based on the method of Han's reference. The suggestion/motivation would have been to improve the registration accuracy (see Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Han with Miller and Ono to obtain the invention as specified in claim 13.
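Claim 13's notion of a warping function that another band's image is "input into" can be pictured as applying a shared-band displacement field to resample an image from a different band. The sketch below is the editor's minimal illustration (nearest-neighbour resampling assumed for brevity), not an implementation from the cited references.

```python
import numpy as np

def apply_warping_map(img, wmap):
    """Resample `img` through a per-pixel (dy, dx) displacement field, i.e.
    reuse the warping map computed on the shared band to align an image
    from another band (nearest-neighbour resampling for brevity)."""
    hgt, wid = img.shape
    ys, xs = np.mgrid[0:hgt, 0:wid]
    # Each output pixel pulls from the source position its displacement points at,
    # clipped to the image bounds.
    src_y = np.clip(np.rint(ys - wmap[..., 0]).astype(int), 0, hgt - 1)
    src_x = np.clip(np.rint(xs - wmap[..., 1]).astype(int), 0, wid - 1)
    return img[src_y, src_x]
```

Because the map itself carries all the alignment information, the same call can be repeated unchanged on the third and fourth images, which is the reuse the rejection attributes to Miller's shared-band transformation.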
Regarding claim 14, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 10, [wherein the step of performing image alignment on the third image and the fourth image based on the first feature correspondence between the first image and the second image comprises: converting the position of each pixel in the third image to the position of each pixel in the fourth image through the first displacement vector to perform image alignment between the third image and the fourth image].

However, Miller and Ono fail(s) to teach this limitation.

Han, working in the same field of endeavor, teaches: converting the position of each pixel in the third image to the position of each pixel in the fourth image through the first displacement vector to perform image alignment between the third image and the fourth image (see Han, [C. Generation of the Deformation Map and Image Warping]: "Thus, we generate a deformation map DM using CP pairs. The deformation map is represented by a displacement vector associated to every pixel of the master image [29]. The CP pairs extracted by the proposed technique are irregularly scattered in the image. Thus, interpolation is used to estimate the deformation in an appropriate way for the irregular CPs. We apply a natural neighbor interpolation to generate a deformation grid DG, which is a 2-D vector of regularly sampled displacements in the x-direction and y-direction [39]").

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller's reference to convert the position of each pixel in the third image to the position of each pixel in the fourth image through the first displacement vector, thereby performing image alignment between the third image and the fourth image, based on the method of Han's reference. The suggestion/motivation would have been to improve the registration accuracy (see Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Han with Miller and Ono to obtain the invention as specified in claim 14.

Claim(s) 7–9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 20140193061 A1, hereafter "Miller") in view of Ono (US 20190273862 A1, hereafter "Ono"), in further view of Han et al. (see NPL attached, "An Approach to Fine Coregistration Between Very High Resolution Multispectral Images Based on Registration Noise Distribution", hereafter "Han"), and further in view of Riley et al. (US 20100189363 A1, hereafter "Riley").
Regarding claim 7, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 3, further comprising: [comparing the third image with the first image to obtain a comparison result]; and determining whether to perform image alignment on the third image and the fourth image based on the first feature correspondence according to the comparison result (See Miller, ¶ [0040], To correct this problem, the images from the two scans can be aligned to a common registration using the images corresponding to the wavelength band that is shared among the first and second scans (e.g., the image that corresponds to emission from DAPI in the example above). Note: The second plurality of scans is being interpreted as the third and fourth image, and the alignment between the first scan and the second scan is being interpreted as aligning the third image (second scan) and first image (first scan)). However, Miller, Ono and Han fail to teach comparing the third image with the first image to obtain a comparison result. Riley, working in the same field of endeavor, teaches: comparing the third image with the first image to obtain a comparison result (See Riley, ¶ [0007], The processing element is further configured for generating an alternate mapping between the first and second sets of imagery data based on the comparing between at least a first area of the pixels in the first image and at least a first area of the pixels in the third image that are non-corresponding according to the first mapping function). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference to compare the third image with the first image to obtain a comparison result, based on the method of Riley’s reference. The suggestion/motivation would have been to accurately register images taking into account moving objects (See Riley, ¶ [0004–0006]). 
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Riley with Miller, Ono and Han to obtain the invention as specified in claim 7. Regarding claim 8, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 7, [wherein the comparison result indicates that the third image matches the first image, or the third image does not match the first image]. However, Miller, Ono and Han fail to teach wherein the comparison result indicates that the third image matches the first image, or the third image does not match the first image. Riley, working in the same field of endeavor, teaches: wherein the comparison result indicates that the third image matches the first image, or the third image does not match the first image (See Riley, ¶ [0007], The processing element is further configured for generating an alternate mapping between the first and second sets of imagery data based on the comparing between at least a first area of the pixels in the first image and at least a first area of the pixels in the third image that are non-corresponding according to the first mapping function). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference such that the comparison result indicates that the third image matches the first image, or the third image does not match the first image, based on the method of Riley’s reference. The suggestion/motivation would have been to accurately register images taking into account moving objects (See Riley, ¶ [0004–0006]). 
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Riley with Miller, Ono and Han to obtain the invention as specified in claim 8. Regarding claim 9, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 7, [wherein if the third image does not match the first image, the method further comprises: calculating a second feature correspondence between the third image and the fourth image; and performing image alignment on the third image and the fourth image based on the second feature correspondence between the third image and the fourth image]. However, Miller, Ono and Han fail to teach wherein if the third image does not match the first image, the method further comprises: calculating a second feature correspondence between the third image and the fourth image; and performing image alignment on the third image and the fourth image based on the second feature correspondence between the third image and the fourth image. Riley, working in the same field of endeavor, teaches: wherein if the third image does not match the first image, the method further comprises: calculating a second feature correspondence between the third image and the fourth image; and performing image alignment on the third image and the fourth image based on the second feature correspondence between the third image and the fourth image (See Riley, ¶ [0007], The processing element is further configured for generating an alternate mapping between the first and second sets of imagery data based on the comparing between at least a first area of the pixels in the first image and at least a first area of the pixels in the third image that are non-corresponding according to the first mapping function). 
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference such that, if the third image does not match the first image, the method further comprises: calculating a second feature correspondence between the third image and the fourth image; and performing image alignment on the third image and the fourth image based on the second feature correspondence between the third image and the fourth image, based on the method of Riley’s reference. The suggestion/motivation would have been to accurately register images taking into account moving objects (See Riley, ¶ [0004–0006]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Riley with Miller, Ono and Han to obtain the invention as specified in claim 9. Regarding claim 15, Miller in view of Ono, further in view of Han, and further in view of Riley teaches the method as claimed in claim 9, further comprising: [storing the second feature correspondence between the third image and the fourth image into a warping map]. However, Miller, Ono and Riley fail to teach storing the second feature correspondence between the third image and the fourth image into a warping map. Han, working in the same field of endeavor, teaches: storing the second feature correspondence between the third image and the fourth image into a warping map (See Han, [C. Generation of the Deformation Map and Image Warping], Thus, we generate a deformation map DM using CP pairs. The deformation map is represented by a displacement vector associated to every pixel of the master image [29]. The CP pairs extracted by the proposed technique are irregularly scattered in the image. 
Thus, interpolation is used to estimate the deformation in an appropriate way for the irregular CPs. We apply a natural neighbor interpolation to generate a deformation grid DG, which is a 2-D vector of regularly sampled displacements in the x-direction and y-direction [39]). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference to store the second feature correspondence between the third image and the fourth image into a warping map, based on the method of Han’s reference. The suggestion/motivation would have been to improve the registration accuracy (See Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Han with Miller, Ono and Riley to obtain the invention as specified in claim 15. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 20140193061 A1, hereafter, "Miller") in view of Ono (US 20190273862 A1, hereafter, "Ono") further in view of Han et al. (See NPL attached, "An Approach to Fine Coregistration Between Very High Resolution Multispectral Images Based on Registration Noise Distribution", hereafter, "Han") and further in view of Wei et al. (See NPL attached, "Hyperspectral and Multispectral Image Fusion Based on a Sparse Representation", hereafter, "Wei"). Regarding claim 16, Miller in view of Ono and further in view of Han teaches the method as claimed in claim 1, further comprising: [performing image fusion on the third image and the fourth image after the image alignment to output a fusion image]. However, Miller, Ono and Han fail to teach performing image fusion on the third image and the fourth image after the image alignment to output a fusion image. 
Wei, working in the same field of endeavor, teaches: performing image fusion on the third image and the fourth image after the image alignment to output a fusion image (See Wei, [Pg. 1, Introduction], In this paper, we propose to fuse HS and MS images within a constrained optimization framework, by incorporating a sparse regularization using dictionaries learned from the observed images). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference to perform image fusion on the third image and the fourth image after the image alignment to output a fusion image, based on the method of Wei’s reference. The suggestion/motivation would have been to increase the performance of the fusion and quality (See Wei, [Pg. 7–8, C. Fusion Quality Metrics] and [Pg. 8, TABLE 1]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wei with Miller, Ono and Han to obtain the invention as specified in claim 16. Claim(s) 17–18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 20140193061 A1, hereafter, "Miller") in view of Hamaguchi et al. (US 11,099,008 B2, hereafter, “Hamaguchi”) further in view of Ono (US 20190273862 A1, hereafter, "Ono") and further in view of Han et al. (See NPL attached, "An Approach to Fine Coregistration Between Very High Resolution Multispectral Images Based on Registration Noise Distribution", hereafter, "Han"). 
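The pixel-position conversion discussed above for claim 14, in which each pixel of the third image is moved through the displacement vector recorded in the warping map, amounts to a lookup-and-resample step. The following sketch is illustrative only and is not code from any cited reference; `warp_with_map` is a hypothetical name, and nearest-pixel resampling with edge clamping is assumed for brevity.

```python
import numpy as np

def warp_with_map(image, warping_map):
    """Align `image` by sampling, for each output pixel, the source pixel
    displaced by the (dy, dx) vector stored in `warping_map`.
    Nearest-pixel resampling and edge clamping keep the sketch short."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + warping_map[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + warping_map[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

A constant map simply shifts the sampling grid, while a spatially varying map produces the local, per-pixel deformation that distinguishes a warping map from a single global transform.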
Regarding claim 17, Miller teaches an electronic system, comprising: [a first sensor, configured to output a first image and a third image according to a first property; a second sensor, configured to output a second image according to the first property and output a fourth image according to a second property, wherein the second property is different from the first property; a processor, configured to perform the following steps: receiving the first image from the first sensor; receiving the second image from the second sensor; calculating a first feature correspondence between the first image and the second image; receiving the third image from the first sensor and the fourth image from the second sensor]; and performing image alignment on the third image and the fourth image based on the first feature correspondence between the first image and the second image (See Miller, ¶ [0040], To correct this problem, the images from the two scans can be aligned to a common registration using the images corresponding to the wavelength band that is shared among the first and second scans (e.g., the image that corresponds to emission from DAPI in the example above). The same shift or image transformation that yields the best alignment in this shared band is applied to all images in the scan, after which the two scans can be combined into an image cube. Note: Examiner is interpreting the best alignment of the shared band as the first and second feature correspondence and the aligning of the other images as aligning a 3rd and 4th image); and [storing] the first feature correspondence between the first image and the second image (See Miller, ¶ [0051], The two scans are registered using the image corresponding to the blue spectral band (i.e., the shared spectral band)) [into a warping map]. 
However, Miller fails to teach a first sensor, configured to output a first image and a third image according to a first property; a second sensor, configured to output a second image according to the first property and output a fourth image according to a second property, wherein the second property is different from the first property; a processor, configured to perform the following steps: receiving the first image from the first sensor; receiving the second image from the second sensor; receiving the third image from the first sensor and the fourth image from the second sensor; storing into a warping map. Hamaguchi, working in the same field of endeavor, teaches: a first sensor, configured to output a first image and a third image according to a first property (See Hamaguchi, [Col. 15, ln. 36–40] and [Col. 15, ln. 40–44], The first image signal 40 obtained from the first image capture device 120A is referred to as a "first signal-A", and the first image signal obtained from the second image capture device 120B is referred to as a "first image signal-B". [Col. 15, ln. 44–47], Further, each of the image capture devices 120A and 120B captures the image of at least the subject 140 in the low brightness irradiation state to obtain the second image signal. [Col. 15, ln. 50–54], The second image signal obtained from the first image capture device 120A is referred to as a "second signal-A", and the second image signal obtained from the second image capture device 120B is referred to as a "second image signal-B". [Col. 16, ln. 25–27], Ambient light is included in any signal among the first image signal-A and first image signal-B, and the second image signal-A and the second image signal-B. Note: Examiner is interpreting the first image as the first signal-A and the third image as the second signal-A. 
The first property is being interpreted as the ambient light); a second sensor, configured to output a second image according to the first property and output a fourth image according to a second property, wherein the second property is different from the first property (See Hamaguchi, [Col. 15, ln. 40–44], The first image signal 40 obtained from the first image capture device 120A is referred to as a "first signal-A", and the first image signal obtained from the second image capture device 120B is referred to as a "first image signal-B". [Col. 15, ln. 44–47], Further, each of the image capture devices 120A and 120B captures the image of at least the subject 140 in the low brightness irradiation state to obtain the second image signal. [Col. 15, ln. 50–54], The second image signal obtained from the first image capture device 120A is referred to as a "second signal-A", and the second image signal obtained from the second image capture device 120B is referred to as a "second image signal-B". [Col. 16, ln. 25–27], Ambient light is included in any signal among the first image signal-A and first image signal-B, and the second image signal-A and the second image signal-B. Note: Examiner is interpreting the second image as the first signal-B and the fourth image as the second signal-B and the second property as the low brightness irradiation state); a processor, configured to perform the following steps: receiving the first image from the first sensor; receiving the second image from the second sensor (See Hamaguchi, [Col. 15, ln. 40–44], The first image signal 40 obtained from the first image capture device 120A is referred to as a "first signal-A", and the first image signal obtained from the second image capture device 120B is referred to as a "first image signal-B". [Col. 15, ln. 
50–54], The second image signal obtained from the first image capture device 120A is referred to as a "second signal-A", and the second image signal obtained from the second image capture device 120B is referred to as a "second image signal-B". Note: Examiner is interpreting the first image as the first signal-A and the second image as the first signal-B); receiving the third image from the first sensor and the fourth image from the second sensor (See Hamaguchi, [Col. 15, ln. 40–44], The first image signal 40 obtained from the first image capture device 120A is referred to as a "first signal-A", and the first image signal obtained from the second image capture device 120B is referred to as a "first image signal-B". [Col. 15, ln. 50–54], The second image signal obtained from the first image capture device 120A is referred to as a "second signal-A", and the second image signal obtained from the second image capture device 120B is referred to as a "second image signal-B". Note: Examiner is interpreting the third image as the second signal-A and the fourth image as the second signal-B). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference to include a first sensor, configured to output a first image and a third image according to a first property; a second sensor, configured to output a second image according to the first property and output a fourth image according to a second property, wherein the second property is different from the first property; a processor, configured to perform the following steps: receiving the first image from the first sensor; receiving the second image from the second sensor; receiving the third image from the first sensor and the fourth image from the second sensor, based on the method of Hamaguchi’s reference. 
The suggestion/motivation would have been to remove the influence of ambient light to enhance the detection of reference light (See Hamaguchi, [Col. 2, ln. 5–32]). However, Miller and Hamaguchi fail to teach calculating a first feature correspondence between the first image and the second image; storing the first feature correspondence between the first image and the second image into a warping map. Ono, working in the same field of endeavor, teaches: calculating a first feature correspondence between the first image and the second image (See Ono, ¶ [0088], In Step S130 (correspondence point detection step), feature points are detected by the correspondence point detection unit 210E based on a component of a wavelength range of a plurality of image signals corresponding to a plurality of images common among the images, and correspondence points are detected based on the feature points. As described above, for example, the point of the edge or the corner portion is detected as the feature point of the reference image, and the correspondence point can be detected in another image through matching between the images). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference to calculate a first feature correspondence between the first image and the second image, based on the method of Ono’s reference. The suggestion/motivation would have been to register multiple images having different wavelengths with high accuracy (See Ono, ¶ [0005–0007]). However, Miller, Hamaguchi and Ono fail to teach storing the first feature correspondence between the first image and the second image into a warping map. Han, working in the same field of endeavor, teaches: storing into a warping map (See Han, [Pg. 6651, Col. 2, ln. 1–3], The matching is conducted at a local level after estimating the amount of displacement. 
A deformation map is generated by interpolation using matched CP pairs). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference to store the first feature correspondence between the first image and the second image into a warping map, based on the method of Han’s reference. The suggestion/motivation would have been to improve the registration accuracy (See Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Hamaguchi, Ono and Han with Miller to obtain the invention as specified in claim 17. Regarding claim 18, claim 18 is rejected on the same grounds as claim 4, and arguments similar to those presented above for claim 4 are equally applicable to claim 18; all of the other limitations similar to claim 4 are not repeated herein, but are incorporated by reference. Regarding claim 20, Miller in view of Hamaguchi further in view of Ono and further in view of Han teaches the electronic system as claimed in claim 17, [wherein the warping map records a first displacement vector of each pixel between the first image and the second image]. However, Miller fails to teach wherein the warping map records a first displacement vector of each pixel between the first image and the second image. Han, working in the same field of endeavor, teaches: wherein the warping map records a first displacement vector of each pixel between the first image and the second image (See Han, [C. Generation of the Deformation Map and Image Warping], Thus, we generate a deformation map DM using CP pairs. The deformation map is represented by a displacement vector associated to every pixel of the master image [29]. 
The CP pairs extracted by the proposed technique are irregularly scattered in the image. Thus, interpolation is used to estimate the deformation in an appropriate way for the irregular CPs. We apply a natural neighbor interpolation to generate a deformation grid DG, which is a 2-D vector of regularly sampled displacements in the x-direction and y-direction [39]). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller’s reference such that the warping map records a first displacement vector of each pixel between the first image and the second image, based on the method of Han’s reference. The suggestion/motivation would have been to improve the registration accuracy (See Han, [Pg. 9–10, A. Results: Simulated Data Set]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Han with Miller, Hamaguchi and Ono to obtain the invention as specified in claim 20. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yuan et al. (US 20200342570 A1) teaches: In the solution, a first feature map for a first source image and a second feature map for a second source image are extracted. The first and second source images correspond to first and second views of a stereoscopic image, respectively. A first unidirectional disparity from the first source image to the second source image is determined based on the first and second source images. First and second target images having a specified visual style are generated by processing the first and second feature maps based on the first unidirectional disparity. 
Through the solution, a disparity between two source images of a stereoscopic image is taken into account when performing the visual style transfer, thereby maintaining a stereoscopic effect in the stereoscopic image consisting of the target images. Kong et al. (US 20210027475 A1) teaches: A first image is captured at a first time and a second image is captured at a second time. A feature-based image registration is performed to create a third image. An intensity-based image registration is performed to create a fourth image. Registration errors are determined based on a comparison of the first and fourth images. A feature enhancement process is performed on the registration errors to determine whether the fastener has loosened between the first time and the second time. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DION J SATCHER whose telephone number is (703)756-5849. The examiner can normally be reached Monday - Thursday 5:30 am - 2:30 pm, Friday 5:30 am - 9:30 am PST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DION J SATCHER/Patent Examiner, Art Unit 2676 /Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Jun 08, 2023
Application Filed
Aug 19, 2025
Non-Final Rejection — §103
Nov 19, 2025
Response Filed
Jan 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586218
MOTION ESTIMATION WITH ANATOMICAL INTEGRITY
2y 5m to grant Granted Mar 24, 2026
Patent 12579787
INSTRUMENT RECOGNITION METHOD BASED ON IMPROVED U2 NETWORK
2y 5m to grant Granted Mar 17, 2026
Patent 12573066
Depth Estimation Using a Single Near-Infrared Camera and Dot Illuminator
2y 5m to grant Granted Mar 10, 2026
Patent 12555263
SYSTEMS AND METHODS FOR TWO-STAGE OBJECTION DETECTION
2y 5m to grant Granted Feb 17, 2026
Patent 12548140
DETERMINING PROCESS DEVIATIONS THROUGH VIDEO ANALYSIS
2y 5m to grant Granted Feb 10, 2026


Prosecution Projections

3-4
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+14.2%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 39 resolved cases by this examiner. Grant probability derived from career allow rate.
