Prosecution Insights
Last updated: April 19, 2026
Application No. 18/064,769

IMAGE PROCESSING APPARATUS FOR CAPTURING INVISIBLE LIGHT IMAGE, IMAGE PROCESSING METHOD, AND IMAGE CAPTURE APPARATUS

Non-Final OA (§103)
Filed: Dec 12, 2022
Examiner: ELLIOTT, JORDAN MCKENZIE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 3 (Non-Final)

Grant Probability: 45% (Moderate)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 31%

Examiner Intelligence

Career Allow Rate: 45% of resolved cases (9 granted / 20 resolved); -17.0% vs TC avg
Interview Lift: -13.7% (minimal), resolved cases with interview vs. without
Avg Prosecution: 2y 10m typical timeline; 40 currently pending
Total Applications: 60 across all art units (career history)

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center average shown as an estimate; based on career data from 20 resolved cases.
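For clarity, a small sketch of how these figures relate, assuming the "vs TC avg" deltas are simple percentage-point differences (the page does not state this explicitly):

```python
# Assumption: "vs TC avg" is a percentage-point difference, so the implied
# Tech Center averages can be recovered by subtracting the delta.
granted, resolved = 9, 20
career_allow_rate = 100 * granted / resolved          # 45.0%
implied_tc_avg_overall = career_allow_rate - (-17.0)  # 62.0%

# Same reading for a statute-specific rate, e.g. §103:
implied_tc_avg_103 = 53.3 - 13.3                      # 40.0%
print(career_allow_rate, implied_tc_avg_overall, implied_tc_avg_103)
```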

Office Action

§103
DETAILED ACTION

Claims 1-8 and 10-28 are pending in this application. Claims 1-8 and 10-28 are being given the foreign priority date of 12/17/2021. Claims 1-8 and 10-25 have been amended, and claims 26-28 have been added.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/12/2022 and 08/01/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/07/2026 has been entered.

Response to Arguments

Claim objections: Applicant's arguments (see Remarks, filed 01/07/2026) have been fully considered by the examiner and, in view of the amendments made to the claims, the examiner agrees to withdraw the claim objections.

35 U.S.C. 103: Applicant's arguments (see Remarks, filed 01/07/2026) have been fully considered by the examiner and are not persuasive. Applicant argues (see pages 11 and 12) that the examiner has not met the burden of establishing a prima facie case of obviousness because Yasushi and Lu, whether taken alone or in combination, fail to teach determination of settings for capturing a visible light image based on both (1) a first target value based on luminance of a visible light image and (2) a second target value based on contrast of an invisible image, as recited by claims 1, 20, 22, 23, and 28. The examiner disagrees. Lu teaches that the system may take luminance and chrominance data determined from a visible light image and use them to generate an enhanced image, which is a visible light image; this image and its data, which include the luminance data, are used to generate two exposure parameters (target values) for image capturing of an invisible light image (see Lu, [0079] and [0083]-[0085]). Further, the other of the two exposure parameters may be generated using contrast information for the luminance/invisible light image (see Lu, [0087]-[0090]). These teachings of Lu would be understood as being analogous to the claimed subject matter of claims 1, 20, 22, 23, and 28 by one of ordinary skill in the art.

[Excerpted passages reproduced as images: Lu, [0079]; Lu, [0083]-[0090]]

Applicant further argues that the prior art of record fails to teach determining a target value using weighted addition of a first and second target value. The examiner disagrees. Yasushi teaches in [0046]-[0047] and [0052]-[0054] that photometric target values are set for the visible and invisible light values, and these are weighted using a weighted mean to determine a brightness evaluation value (additional target value). Here the brightness evaluation value is a target brightness generated from the visible light signal average, which is a weighted average of the luminance (Y) signals of the visible light image (see Yasushi, [0030] and [0034]). Since the brightness evaluation value is a weighted average of the photometric target values, it inherently involves a step of generating a weighted sum, given that the weighted average is the weighted sum divided by the sum of the weights.
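As an illustrative aside (not part of the Office Action text), the arithmetic relationship relied on here, that a weighted mean of two photometric target values necessarily involves a weighted sum, can be shown in a few lines; the target values and weights below are hypothetical.

```python
# Minimal sketch (hypothetical values): a weighted mean of two photometric
# target values is the weighted sum divided by the sum of the weights, so
# computing it inherently involves a weighted-addition step.

def weighted_sum(values, weights):
    return sum(v * w for v, w in zip(values, weights))

def weighted_mean(values, weights):
    return weighted_sum(values, weights) / sum(weights)

# Hypothetical example: a visible-light target value and an infrared target
# value combined into a single brightness evaluation (final) target value.
visible_target = 120.0   # first target value (from visible-light luminance)
infrared_target = 80.0   # second target value (from invisible-light contrast)
w_visible, w_infrared = 0.7, 0.3

final_target = weighted_mean([visible_target, infrared_target],
                             [w_visible, w_infrared])
print(final_target)  # 108.0 = (120*0.7 + 80*0.3) / (0.7 + 0.3)
```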
Therefore, for at least the reasons above, the combination of Yasushi and Lu would have been understood as teaching analogous subject matter by one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The examiner respectfully maintains the rejections made over Yasushi in view of Lu for the aforementioned reasons.

[Excerpted passages reproduced as images: Yasushi, [0030]; Yasushi, [0034]; Yasushi, [0046]-[0048]; Yasushi, [0052]-[0054]]

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8, 12-25 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Yasushi (JP 2017157902 A, Published 09-07-2017) in view of Lu (US 20200221037 A1).
Regarding claim 1 Yasushi discloses; An image processing apparatus that determines settings for capturing a first invisible light image to be combined with a first visible light image, the image processing apparatus comprising (Yasushi, [0055] the system determines capturing conditions and adjusts the setting for capturing the images (first images) based upon the value extracted from the brightness evaluation values): one or more processors that execute a program stored in a memory and thereby function as (Yasushi, [0101] the processing method may be a computer executing code from a memory, which would include a processor being used): PNG media_image8.png 164 720 media_image8.png Greyscale (Yasushi, [0101]) [a first determination unit configured to determine a final target value for a signal level of the first invisible image, based on (i) a first target value having one of different values determined based on a magnitude relationship between luminance a second visible light image that has been captured and a predetermined threshold value, and (ii) a second target values determined based on contrast of a corresponding region of a second invisible light image that has been captured, wherein the luminance of the second visible image and the contrast of the corresponding luminance of the second visible image are obtained for one or more regions of the second visible light image and the second invisible light image] a second determination unit (Yasushi, [0045] the visible light image capturing condition update unit is performing the setting of the target values) configured to determine the settings for capturing the first invisible light image based on the final target value (Yasushi, [0055] the capturing condition holding unit adjusts the setting for capturing the images (first images) based upon the value extracted from the brightness evaluation values); [ and an adjustment unit configured to adjust brightness and contrast of the first visible light image based on the first invisible light image captured according to the settings determined by the second determination unit.] Yasushi does not teach; a first determination unit configured to determine a final target value for a signal level of the first invisible image, based on (i) a first target value having one of different values determined based on a magnitude relationship between luminance a second visible light image that has been captured and a predetermined threshold value, and (ii) a second target values determined based on contrast of a corresponding region of a second invisible light image that has been captured, wherein the luminance of the second visible image and the contrast of the corresponding luminance of the second visible image are obtained for one or more regions of the second visible light image and the second invisible light image; and an adjustment unit configured to adjust brightness and contrast of the first visible light image based on the first invisible light image captured according to the settings determined by the second determination unit. 
however, in the same field of endeavor Lu teaches; a first determination unit configured to determine a final target value for a signal level of the first invisible image, based on (i) a first target value having one of different values determined based on a magnitude relationship between luminance a second visible light image that has been captured and a predetermined threshold value, (Lu, [0125] – [0127] the parameter determination module may use enhanced image (second visible light image, generated using a first visible light image) to determine an exposure parameter (first target value) using luminance data, [0031] an enhance color image (second visible light image) is determined using luminance and color image data, and the luminance value of this is further used in processing, [0131] the exposure parameter (target value) determined is the signal parameter target used when capturing the luminance image) and (ii) a second target values determined based on contrast of a corresponding region of a second invisible light image that has been captured (Lu, [0088] an additional exposure parameter (second target value) may be determined from contrast information from the luminance image data of an enhanced image (second luminance image/luminance data of a second image), wherein the luminance of the second visible image and the contrast of the corresponding luminance of the second visible image are obtained for one or more regions of the second visible light image and the second invisible light image (Lu, Figure 5, the enhanced image (second visible image) and parameters thereof are determined using the color image data (visible light image) and the luminance image data (invisible light image data), [0098] the system generated multiple enhanced images, visible light and invisible light images because the process repeats, therefore the process is the same for a first and a second set of images, and updates iteratively to continuously improve enhanced images); and an adjustment unit configured to adjust brightness and contrast of the first visible light image based on the first invisible light image captured according to the settings determined by the second determination unit (Lu, [0075] the image processing module (adjustment unit) can adjust the white balance, sharpening and other image features of the color image data and the luminance image data (first visible and invisible light images respectively), [0083] exposure of the images is adjusted based on resolution, contrast and luminance data (brightness)). The combination of Yasushi and Lu would be obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yasushi teaches a method of adjusting image capture settings based on brightness values to optimize image quality, it does not however teach a calculation of contrast or taking contrast into account in this. Lu teaches a method of enhancing an image and adjusting the exposure setting using contrast, luminance and brightness values from both invisible and visible light data. The motivation for the combination lies in that Lu’s method takes contrast and luminance into account, in addition to brightness, which can help with determining the optimal exposure for the image. 
(Lu, [0003]- [0017]) Regarding claim 2 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 1, the one or more processors further function as: a combining unit configured to combine the first invisible light image (Yasushi, [0001] details that the device is configured to combine the signals of an invisible and visible light image (first images), [0006] the system iteratively and repeatedly captures these images, so the process is the same for a first and a second and so on set of images), which has been captured according to with the settings determined by the fourth determination unit (Yasushi, [0055] the capturing condition holding unit adjusts the setting for capturing the images based upon the value extracted from the brightness evaluation values), with the first visible light image (Yasushi, [0001] details that the device is configured to combine the signals of an invisible and visible light image signals). Regarding claim 3 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 1, the one or more processors further function as: a region determination unit (Yasushi, [0006] Mixed region detection unit) configured to determine, in the first visible light image, a target region where the first invisible light image captured according to the settings determined by the second determination unit is to be combined (Yasushi, [0006] the mixed region detection unit detects an area in the image where the visible and invisible light signals mix for multiple repeatedly captured visible and invisible light images [0011] the regions is detected in the visible light image signal, [0014] a mixing ratio for the two signals is determined to generate a new ratio of the two signals to be mixed). Regarding claim 4 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 3, wherein the region determination unit (Yasushi, [0031] Mixing unit breaks the image into regions) determines the target region based on contrast of the second visible light image and contrast of the second invisible light image that has been captured together with the second visible light image (Yasushi, [0031] the mixing unit will divide the images into multiple regions, mixing of the infrared and visible light signals are performed based upon luminance and color values (tone information) from the images or regions in the images [0034] the signal averages are determined for each block for the visible and invisible light signals, since the average signal is determined the signal Examiner is interpreting this as being analogous to the signal intensity per region, which is used to compute contrast (average of the intensity), further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). 
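As an illustrative aside (not part of the Office Action), the overall determination discussed for claim 1 above, a first target value selected by comparing visible-light luminance to a threshold, a second target value derived from invisible-light contrast, a final target value combining the two, and capture settings derived from that final target, can be sketched as follows. All function names, block sizes, weights, and thresholds are hypothetical and are not taken from Yasushi, Lu, or the application.

```python
import numpy as np

def first_target_from_luminance(vis_luma_region: np.ndarray,
                                threshold: float = 100.0,
                                low_value: float = 90.0,
                                high_value: float = 140.0) -> float:
    """First target value: one of two values, chosen by comparing the mean
    visible-light luminance of the region against a predetermined threshold."""
    return high_value if vis_luma_region.mean() > threshold else low_value

def second_target_from_contrast(ir_region: np.ndarray,
                                gain_per_contrast: float = 0.5) -> float:
    """Second target value: derived from the contrast (here, max-min spread)
    of the corresponding invisible-light region."""
    contrast = float(ir_region.max() - ir_region.min())
    return gain_per_contrast * contrast

def final_target(vis_luma_region, ir_region, w1=0.6, w2=0.4) -> float:
    """Final target value for the invisible-light signal level: weighted
    combination of the first and second target values."""
    t1 = first_target_from_luminance(vis_luma_region)
    t2 = second_target_from_contrast(ir_region)
    return w1 * t1 + w2 * t2

def exposure_setting(target: float, current_level: float) -> float:
    """Exposure scaling needed so the captured invisible-light signal level
    approaches the final target value."""
    return target / max(current_level, 1e-6)

# Hypothetical regions from previously captured (second) images.
vis = np.full((8, 8), 120.0)                 # bright visible-light region
ir = np.linspace(40, 200, 64).reshape(8, 8)  # invisible-light region with spread
t = final_target(vis, ir)
print(t, exposure_setting(t, current_level=ir.mean()))
```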
Regarding claim 5 Yasushi and Lu teaches; The image processing apparatus according to claim 4, wherein the region determination unit determines the target region from a plurality of regions produced by dividing the second visible light image (Yasushi, [0031] Mixing unit breaks the image into multiple regions, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images), and an evaluation value of edge intensity in each block is used as the contrast (Yasushi, [0031] the mixing unit will divide the images into multiple regions, mixing of the infrared and visible light signals are performed based upon luminance and color values (tone information) from the images or regions in the images [0034] the signal averages are determined for each block for the visible and invisible light signals, since the average signal is determined the signal Examiner is interpreting this as being analogous to the signal intensity per region, which is used to compute contrast ( average of the intensity)). PNG media_image9.png 418 732 media_image9.png Greyscale (Yasushi, [0031]) PNG media_image10.png 140 738 media_image10.png Greyscale (Yasushi, [0034]) Regarding claim 6 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 5, wherein the region determination unit (Yasushi, [0031] Mixing unit) determines regions, out of the plurality of the regions of the second visible light image (Yasushi, [0031] Mixing unit breaks the image into regions, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images), having the evaluation value not greater than a first threshold (Yasushi, [0013] the regions may very that the signal in the image is smaller than a preset threshold, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images) and a difference with the evaluation value of a corresponding region in the second invisible light image is not less than a second threshold, as the target region (Yasushi, [0012] the mixed region detection unit verified that the signal is larger than a predetermined threshold region, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). 
PNG media_image11.png 142 734 media_image11.png Greyscale (Yasushi, [0013]) PNG media_image12.png 138 716 media_image12.png Greyscale (Yasushi, [0012]) Regarding claim 8 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 5, wherein the region determination unit determines regions selected by a user out of the plurality of regions of the second visible light image as the target region (Yasushi, [0067] A region can be detected by the user and selected out of the blocks, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). PNG media_image13.png 236 744 media_image13.png Greyscale ([0067] Yasushi) Regarding claim 12 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 1, wherein the second invisible image has been second visible light image (Yasushi, [0053] an Infrared light target value (second target value) is set, which is an invisible light signal, value pertains to the brightness which is a contrast adjustment value, [0045] the visible light image capturing condition update unit is performing the setting of the target values, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). Regarding claim 13 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 12, wherein (Yasushi, [0045] the visible light image capturing condition update unit is performing the setting of the target values), the second target value is determined as a target value of a signal level for obtaining the first invisible light image with predetermined contrast (Yasushi, [0053] an Infrared light target value is set, which is an invisible light signal, value pertains to the brightness which is a contrast adjustment value, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). Regarding claim 14 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 1, wherein the first determination unit determines the final target value based on a result of weighted addition of the first target value and the second target value (Yasushi, [0046] a weight is set for a plurality of regions (at least a first and a second region), then a weighted average is generated based upon the brightness to get a brightness evaluation value (final value), since a weighted average is performed, there must be a step of weighted addition to obtain the final value). 
PNG media_image14.png 162 736 media_image14.png Greyscale (Yasushi, [0046]) Regarding claim 15 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 14, wherein the first determination unit determines a weight of the first target value and a weight of the second target value based on luminance of regions (Yasushi, [0030-0032] the color difference signal generation unit determines the luminance value of multiple regions of the visible light image which are the visible light signals (Y values/Luminance values), [0046]- [0048] the visible light signals (based on luminance as described previously) are used to determine weights of target values for the visible light image), in the second visible light image (Yasushi, [0052] an infrared light image is captured using similarly determined conditions to be combined with the visible light image in which it is captured with, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images (first set of images), therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images), where the first invisible light image captured according to the settings is to be combined (Yasushi, [0052] an infrared light image is captured using similarly determined conditions to be combined with the visible light image in which it is captured with, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). Regarding claim 16 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 1, wherein in a case where the first invisible light image to be used in combining is captured with settings for capturing the first visible light image (Yasushi, [0085] the fourth embodiment pertains the conditions for the visible and invisible light being the same, rather than 2 separate conditions as in previous conditions, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images (first set of images), therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images), instead the settings determined by the second determination unit, the second determination unit determines a gain to be applied to the first invisible light image captured according to the settings for capturing the first visible light image (Yasushi, [0094] the infrared light image signal amplification unit adjusts the level of the light in the image based upon the gain), based on the target value (Yasushi, [0094], the gain is used as well as outputs from the condition determination unit which includes the target values determined based upon the visible and invisible light conditions). 
PNG media_image15.png 158 746 media_image15.png Greyscale PNG media_image16.png 98 726 media_image16.png Greyscale (Yasushi, [0084]) PNG media_image17.png 186 734 media_image17.png Greyscale (Yasushi, [0094]) Regarding claim 17 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 16, wherein the second determination unit determines the gain so that the gain does not change non-continuously at a boundary between adjacent regions out of the plurality of regions of the first visible light image (Yasushi, [0031] the processing condition generation unit divides the data into regions/blocks, [0093]-[0094] the signal levels are adjusted based upon the gain in the regions out put the from the processing condition generation unit, therefore the gain is determined for each region/block in the image, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images (first set of images), therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). PNG media_image18.png 324 742 media_image18.png Greyscale (Yasushi, [0093]- [0094]) Regarding claim 18 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 1, wherein in a case where a limitation applied to settings for capturing the first invisible light image and it is not possible to determine the settings for capturing the first invisible light image according to the final target value under the limitation (Yasushi, [0091] the imaging signals/capture conditions are determined based only exposure not the additional capture settings, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images (first set of images), therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images), the second determination unit determines the settings for capturing the first invisible light image under the limitation and also determines a gain for compensating for insufficient exposure (Yasushi, [0095] the gain is determined to control the exposure of the signal, this is done via the signal amplification unit). 
PNG media_image19.png 186 734 media_image19.png Greyscale (Yasushi, [0095]) Regarding claim 19 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 18, the one or more processors further function as (Yasushi, [0101] the processing method may be a computer executing code from a memory, which would include a processor being used): an applying unit configured to apply the gain to the first invisible light image captured according to the settings under the limitation (Yasushi, [0094] the gain for the infrared light image is used to adjust the level of light in the image), wherein the first invisible light image to which the gain has been applied is used for combining (Yasushi, [0020] the visible and invisible light images are combined, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images (first set of images), therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images). Regarding claim 20 the combination of Yasushi and Lu teaches; An image capture apparatus comprising: an image sensor capable of capturing a visible light image and an invisible light image (Yasushi, [0022], the imaging element generate a visible and an infrared light signal by imaging a subject); and an image processing apparatus that determines settings for capturing (Yasushi, [0024] the imaging condition are generated based upon signal level), by the image sensor (Yasushi, [0022], the imaging element generate a visible and an infrared light signal by imaging a subject), an invisible light image to be combined with a visible light image (Yasushi, [0023] the visible light and infrared signal are combined to create one image), wherein the image processing apparatus comprises one or more processors that execute a program stored in a memory and thereby function as (Yasushi, [0101] the processing method may be a computer executing code from a memory, which would include a processor being used): PNG media_image8.png 164 720 media_image8.png Greyscale (Yasushi, [0101]) a first determination unit configured to determine a final target value for a signal level of the first invisible image, based on (i) a first target value having one of different values determined based on a magnitude relationship between luminance a second visible light image that has been captured and a predetermined threshold value, (Lu, [0125] – [0127] the parameter determination module may use enhanced image (second visible light image, generated using a first visible light image) to determine an exposure parameter (first target value) using luminance data, [0031] an enhance color image (second visible light image) is determined using luminance and color image data, and the luminance value of this is further used in processing, [0131] the exposure parameter (target value) determined is the signal parameter target used when capturing the luminance image) and (ii) a second target values determined based on contrast of a corresponding region of a second invisible light image that has been captured (Lu, [0088] an additional exposure parameter (second target value) may be determined from contrast information from the luminance image data of an enhanced image (second luminance image/luminance data of a second image), wherein the luminance of the 
second visible image and the contrast of the corresponding luminance of the second visible image are obtained for one or more regions of the second visible light image and the second invisible light image (Lu, Figure 5, the enhanced image (second visible image) and parameters thereof are determined using the color image data (visible light image) and the luminance image data (invisible light image data), [0098] the system generated multiple enhanced images, visible light and invisible light images because the process repeats, therefore the process is the same for a first and a second set of images, and updates iteratively to continuously improve enhanced images); a second determination unit (Yasushi, [0045] the visible light image capturing condition update unit is performing the setting of the target values) configured to determine the settings for capturing the first invisible light image based on the final target value (Yasushi, [0055] the capturing condition holding unit adjusts the setting for capturing the images based upon the value extracted from the brightness evaluation values), and an adjustment unit configured to adjust brightness and contrast of the first visible light image based on the first invisible light image captured according to the settings determined by the second determination unit (Lu, [0075] the image processing module (adjustment unit) can adjust the white balance, sharpening and other image features of the color image data and the luminance image data (first visible and invisible light images respectively), [0083] exposure of the images is adjusted based on resolution, contrast and luminance data (brightness)). The combination of Yasushi and Lu would be obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yasushi teaches a method of adjusting image capture settings based on brightness values to optimize image quality, it does not however teach a calculation of contrast or taking contrast into account in this. Lu teaches a method of enhancing an image and adjusting the exposure setting using contrast, luminance and brightness values from both invisible and visible light data. The motivation for the combination lies in that Lu’s method takes contrast and luminance into account, in addition to brightness, which can help with determining the optimal exposure for the image. (Lu, [0003]- [0017]) Regarding claim 21 the combination of Yasushi and Lu teaches; The image capture apparatus according to claim 20, the one or more processors further function as: a recording unit configured to record:the first invisible light image to be combined with the first visible light image (Yasushi, [0006] a visible and invisible light image are capture to be mixed, further this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images (first set of images), therefore the processes done for the second images are the same as those done for the first, and the parameters determined are determined using the previous (first) set of images); and information which relates to regions of the first visible light image where the first invisible light image is to be combined, in association with the first visible light image (Yasushi, [0006] a visible and invisible light image are capture to be mixed, [0031] the images are divided into regions to determine mixing of the signals). 
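As another hypothetical aside, the gain compensation discussed for claims 16 through 19, where a limitation on the capture settings prevents the final target value from being reached and a gain makes up for the exposure shortfall, can be reduced to a short sketch; the exposure cap and the numbers below are illustrative assumptions only.

```python
def settings_with_gain(final_target: float,
                       base_exposure: float,
                       max_exposure: float):
    """If the exposure needed to hit the final target exceeds a limitation
    (max_exposure), clamp the exposure and return a compensating gain to be
    applied to the captured invisible-light image."""
    needed = base_exposure * final_target  # exposure assumed proportional to target
    exposure = min(needed, max_exposure)
    gain = needed / exposure               # 1.0 when no clamping occurred
    return exposure, gain

# Hypothetical numbers: the target requires 24 ms, but exposure is capped at
# 16 ms, so a 1.5x gain compensates for the insufficient exposure.
exp, g = settings_with_gain(final_target=1.2, base_exposure=20.0, max_exposure=16.0)
print(exp, g)  # 16.0 1.5
```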
Regarding claim 22 the combination of Yasushi and Lu teaches; An image processing method that is executed by an image processing apparatus and determines settings for capturing an invisible light image to be combined with a visible light image, the image processing method comprising: determining a final target value for a signal level of the first invisible image, based on (i) a first target value having one of different values determined based on a magnitude relationship between luminance a second visible light image that has been captured and a predetermined threshold value, (Lu, [0125] – [0127] the parameter determination module may use enhanced image (second visible light image, generated using a first visible light image) to determine an exposure parameter (first target value) using luminance data, [0031] an enhance color image (second visible light image) is determined using luminance and color image data, and the luminance value of this is further used in processing, [0131] the exposure parameter (target value) determined is the signal parameter target used when capturing the luminance image) and (ii) a second target values determined based on contrast of a corresponding region of a second invisible light image that has been captured (Lu, [0088] an additional exposure parameter (second target value) may be determined from contrast information from the luminance image data of an enhanced image (second luminance image/luminance data of a second image), wherein the luminance of the second visible image and the contrast of the corresponding luminance of the second visible image are obtained for one or more regions of the second visible light image and the second invisible light image (Lu, Figure 5, the enhanced image (second visible image) and parameters thereof are determined using the color image data (visible light image) and the luminance image data (invisible light image data), [0098] the system generated multiple enhanced images, visible light and invisible light images because the process repeats, therefore the process is the same for a first and a second set of images, and updates iteratively to continuously improve enhanced images); determining the settings for capturing the first invisible light image based on the final target value;(Yasushi, [0055] the capturing condition holding unit adjusts the setting for capturing the images based upon the value extracted from the brightness evaluation values) and an adjustment unit configured to adjust brightness and contrast of the first visible light image based on the first invisible light image captured according to the settings determined by the second determination unit (Lu, [0075] the image processing module (adjustment unit) can adjust the white balance, sharpening and other image features of the color image data and the luminance image data (first visible and invisible light images respectively), [0083] exposure of the images is adjusted based on resolution, contrast and luminance data (brightness)). The combination of Yasushi and Lu would be obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yasushi teaches a method of adjusting image capture settings based on brightness values to optimize image quality, it does not however teach a calculation of contrast or taking contrast into account in this. Lu teaches a method of enhancing an image and adjusting the exposure setting using contrast, luminance and brightness values from both invisible and visible light data. 
The motivation for the combination lies in that Lu’s method takes contrast and luminance into account, in addition to brightness, which can help with determining the optimal exposure for the image. (Lu, [0003]- [0017]) Regarding claim 23 the combination of Yasushi and Lu teaches; A computer-readable medium that stores a program for causing a computer to function as an image processing apparatus that determines settings for capturing a first invisible light image to be combined with a first visible light image, the image processing apparatus comprising (Yasushi, [0101] the processing method may be a computer executing code from a memory, which would include a processor being used): a first determination unit configured to determine a final target value for a signal level of the first invisible image, based on (i) a first target value having one of different values determined based on a magnitude relationship between luminance a second visible light image that has been captured and a predetermined threshold value, (Lu, [0125] – [0127] the parameter determination module may use enhanced image (second visible light image, generated using a first visible light image) to determine an exposure parameter (first target value) using luminance data, [0031] an enhance color image (second visible light image) is determined using luminance and color image data, and the luminance value of this is further used in processing, [0131] the exposure parameter (target value) determined is the signal parameter target used when capturing the luminance image) and (ii) a second target values determined based on contrast of a corresponding region of a second invisible light image that has been captured (Lu, [0088] an additional exposure parameter (second target value) may be determined from contrast information from the luminance image data of an enhanced image (second luminance image/luminance data of a second image), wherein the luminance of the second visible image and the contrast of the corresponding luminance of the second visible image are obtained for one or more regions of the second visible light image and the second invisible light image (Lu, Figure 5, the enhanced image (second visible image) and parameters thereof are determined using the color image data (visible light image) and the luminance image data (invisible light image data), [0098] the system generated multiple enhanced images, visible light and invisible light images because the process repeats, therefore the process is the same for a first and a second set of images, and updates iteratively to continuously improve enhanced images); a second determination unit (Yasushi, [0045] the visible light image capturing condition update unit is performing the setting of the target values) configured to determine the settings for capturing the first invisible light image based on the final target value (Yasushi, [0055] the capturing condition holding unit adjusts the setting for capturing the images based upon the value extracted from the brightness evaluation values); and an adjustment unit configured to adjust brightness and contrast of the first visible light image based on the first invisible light image captured according to the settings determined by the second determination unit (Lu, [0075] the image processing module (adjustment unit) can adjust the white balance, sharpening and other image features of the color image data and the luminance image data (first visible and invisible light images respectively), [0083] exposure of the images is 
adjusted based on resolution, contrast and luminance data (brightness)). The combination of Yasushi and Lu would be obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yasushi teaches a method of adjusting image capture settings based on brightness values to optimize image quality, it does not however teach a calculation of contrast or taking contrast into account in this. Lu teaches a method of enhancing an image and adjusting the exposure setting using contrast, luminance and brightness values from both invisible and visible light data. The motivation for the combination lies in that Lu’s method takes contrast and luminance into account, in addition to brightness, which can help with determining the optimal exposure for the image. (Lu, [0003]- [0017]) Regarding claim 24 the combination of Yasushi and Lu teaches; the image processing apparatus according to claim 1, wherein the luminance second visible light image (Yasushi, [0030] the luminance signals are generated (Y signals) [0034] the visible light image signal is averaged, which is an average of the Y signals, therefore the luminance is averaged, further, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images, therefore the processes done for the second images are the same as those done for the first). Regarding claim 25 the combination of Yasushi and Lu teaches; the image processing apparatus according to claim 1, wherein the first determination unit determines the final target value for each of two or more regions of the second visible light image and the second invisible light image (Yasushi, [0047] the target value is set for the visible light image [0048] the values are set and determined and compared for each of the regions in the visible light image, [0053] the target value is set for the infrared image, [0054] the target values are determined and compared for each region in the invisible/infrared light image, [0006] teaches that this process is done iteratively for multiple pairs of visible and invisible light images, where the image capturing conditions for the next set of images, is determined using the previous set of images, therefore the processes done for the second images are the same as those done for the first), and wherein the second determination unit determines the settings for each of the two or more regions based on the respective target values (Yasushi, [0049] the capturing conditions are determined based on the evaluation/target values [0055] the infrared capturing condition is determined based on these target values/evaluation values). 
Regarding claim 28 the combination of Yasushi and Lu teaches; An image processing apparatus that determines settings for capturing a first invisible light image to be combined with a first visible light image, the image processing apparatus comprising: one or more processors that execute a program stored in a memory and thereby function as (Yasushi, [0101] the processing method may be a computer executing code from a memory, which would include a processor being used): a first determination unit configured to determine a final target value for a signal level of the first invisible image, based on (i) a first target value having one of different values determined based on a magnitude relationship between luminance a second visible light image that has been captured and a predetermined threshold value, (Lu, [0125] – [0127] the parameter determination module may use enhanced image (second visible light image, generated using a first visible light image) to determine an exposure parameter (first target value) using luminance data, [0031] an enhance color image (second visible light image) is determined using luminance and color image data, and the luminance value of this is further used in processing, [0131] the exposure parameter (target value) determined is the signal parameter target used when capturing the luminance image) and (ii) a second target values determined based on contrast of a corresponding region of a second invisible light image that has been captured (Lu, [0088] an additional exposure parameter (second target value) may be determined from contrast information from the luminance image data of an enhanced image (second luminance image/luminance data of a second image), wherein the luminance of the second visible image and the contrast of the corresponding luminance of the second visible image are obtained for one or more regions of the second visible light image and the second invisible light image (Lu, Figure 5, the enhanced image (second visible image) and parameters thereof are determined using the color image data (visible light image) and the luminance image data (invisible light image data), [0098] the system generated multiple enhanced images, visible light and invisible light images because the process repeats, therefore the process is the same for a first and a second set of images, and updates iteratively to continuously improve enhanced images); a second determination unit configured to determine settings for capturing the first invisible light image based on the target value (Yasushi, [0045] the visible light image capturing condition update unit is performing the setting of the target values, [0055] the capturing condition holding unit adjusts the setting for capturing the images based upon the value extracted from the brightness evaluation values); and an adjustment unit configured to adjust brightness and contrast of the first visible light image based on the first invisible light image captured according to the settings determined by the second determination unit (Lu, [0075] the image processing module (adjustment unit) can adjust the white balance, sharpening and other image features of the color image data and the luminance image data (first visible and invisible light images respectively), [0083] exposure of the images is adjusted based on resolution, contrast and luminance data (brightness), wherein the first determination unit determines the target value based on a result of weighted addition of a first target value determined based on the luminance of the 
region of the second visible light image and a second target value determined based on the contrast of the second invisible-light image, and, wherein the weight of the first target value and the weight of the second target value are determined based on the luminance of a region of the second visible light image (Yasushi, [0046] a weight is set for a plurality of regions (at least a first and a second region), then a weighted average is generated based upon the brightness to get a brightness evaluation value (final value), since a weighted average is performed, there must be a step of weighted addition to obtain the final value). The combination of Yasushi and Lu would be obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yasushi teaches a method of adjusting image capture settings based on brightness values to optimize image quality, it does not however teach a calculation of contrast or taking contrast into account in this. Lu teaches a method of enhancing an image and adjusting the exposure setting using contrast, luminance and brightness values from both invisible and visible light data. The motivation for the combination lies in that Lu’s method takes contrast and luminance into account, in addition to brightness, which can help with determining the optimal exposure for the image. (Lu, [0003]- [0017]) 2. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yasushi (JP 2017157902 A, Published 09-07-2017) in view of Lu (US 20200221037 A1) and in further view of Pu (US 20210034881 A1). Regarding claim 7 the combination of Yasushi and Lu teaches; The image processing apparatus according to claim 5, wherein the region determination unit determines regions (Yasushi, [0031] Mixing unit breaks the image into regions), out of the plurality of regions of produced by dividing the second visible light image (Yasushi, [0031] Mixing unit breaks the image into regions), where a subject distance is not less than a distance threshold, as the target region. The combination of Yasushi and Lu teaches does not disclose; where a subject distance is not less than a distance threshold, as the target region. However, in the same field of endeavor Pu discloses; where a subject distance is not less than a distance threshold, as the target region (Pu, [0106] and [0113] the object can be determined using a preset pixel distance, therefore if the pixel distance is present, the detected distance of that object would have to be greater than the pixel threshold for identification). The combination of Yasushi, Lu, and Pu would have been obvious to one ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the ability to determine an object’s distance in an image of Pu would allow the system of Yasushi to determine with confidence the location of objects in both visible and invisible light images/ (Pu, [0106], [0113] and [0003]- [0004]). 3. Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Yasushi (JP 2017157902 A, Published 09-07-2017) in view of Lu (US 20200221037 A1) and in further view of Sun (20210044763 A1). 
Regarding claim 10 the combination of Yasushi and Lu fails to teach; The image processing apparatus according to claim 1, wherein the first target value is fixed at a first value in a case where luminance of the visible light image is less than a third threshold; and the first target value is fixed at a second value, which is larger than the first value, in a case where the luminance of the visible light image is greater than a fourth threshold. However, in the same field of endeavor of visible and invisible light image fusion, Sun teaches; wherein the first target value is fixed at a first value in a case where luminance of the visible light image is less than a third threshold (Sun, [0070] a first weighting coefficient for the visible light image is generated for a case in which the illuminance (luminance) is between a set threshold)); and the first target value is fixed at a second value, which is larger than the first value, in a case where the luminance of the visible light image is greater than a fourth threshold (Sun, [0071] a second weighting coefficient is determined for the visible light image for a case where the haze (condition of low illuminance/luminance, therefore is based on luminance) is between a certain threshold (fourth threshold)). The combination of Yasushi, Lu and Sun would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Yasushi and Lu both teach image fusion of visible and invisible light images for image enhancement, but do not teach the use of weights based on luminance thresholds to accomplish this. Sun, in the same field of endeavor, teaches this limitation. The motivation for the combination lies in that the addition of the weighting the target values based on the luminance allows for a less distorted fused enhanced image and helps improve the signal-to-noise ratio. (Sun, [0005], [0022], [0037]- [0040], [0069]- [0077]) Regarding claim 11 the combination of Yasushi, Lu and Sun teaches; The image processing apparatus according to claim 10, wherein the first target value has a value corresponding to the luminance of the visible light image in a case where the luminance of the second visible light image is not less than the third threshold and not greater than the fourth threshold (Sun, [0070] a first weighting coefficient for the visible light image is generated for a case in which the illuminance (luminance) is between a set threshold), further, [0055]-[0057] describes the process as iterative, therefore the process would be the same for the first and second set of visible and invisible light images.). The combination of Yasushi, Lu and Sun would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Yasushi and Lu both teach image fusion of visible and invisible light images for image enhancement, but do not teach the use of weights based on luminance thresholds to accomplish this. Sun, in the same field of endeavor, teaches this limitation. The motivation for the combination lies in that the addition of the weighting the target values based on the luminance allows for a less distorted fused enhanced image and helps improve the signal-to-noise ratio. (Sun, [0005], [0022], [0037]- [0040], [0069]- [0077]) 4. Claims 26-27 are rejected under 35 U.S.C. 
103 as being unpatentable over Yasushi (JP 2017157902 A, Published 09-07-2017) in view of Lu (US 20200221037 A1) and in further view of Miwa (20060268128 A1). Regarding claim 26 the combination of Yasushi and Lu fails to teach; The image processing apparatus according to claim 15, wherein the first determination unit determines the weights of the first target value larger than the second target value in a case where the luminance of the region of the second visible light image is less than a fifth threshold or greater than a sixth threshold. However, in the same field of endeavor, Miwa teaches; The image processing apparatus according to claim 15, wherein the first determination unit determines the weights of the first target value larger than the second target value in a case where the luminance of the region of the second visible light image is less than a fifth threshold or greater than a sixth threshold (Miwa, [0017] the mean value of the luminance is used to determine weights of a target pixel value, where [0021] when the first weight, W1 is largest at first when the mean value of the luminance for that region is within a set threshold, [0076]-[0078] the mean luminance value changes in distribution at different time point are used as the thresholds to determine the weightings, where if the luminance changes within a set range, the weights are adjusted. Since this occurs at several time points, this would analogous to a change using a plurality of thresholds (Thresholds 1-6) as interpreted by the examiner). The combination of Yasushi, Lu and Miwa would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yasushi teaches a method of capturing multiple invisible and visible light images, and fusing them together to enhance the images, and to further update the capture conditions based on determined target values. Lu teaches a similar method of visible and invisible light image capture and enhancement. Neither Yasushi nor Lu teaches the claimed method of thresholding and weighting luminance values as claimed in claim 26. Miwa teaches this method in the same field of endeavor of luminance image detection and enhancement. The motivation for the combination lies in that adjusting the weight based on luminance value changes mitigates the effect of drastic changes in luminance values over frames, allowing for better image enhancement. (Miwa, [0004]- [0012]) Regarding claim 27 the combination of Yasushi, Lu and Miwa teaches; The image processing apparatus according to claim 15, wherein the first determination unit determines the weights of the first target value less than the second target value in a case where the luminance of the region of the second visible light image is not less than a fifth threshold and not greater than a sixth threshold (Miwa, [0017] the mean value of the luminance is used to determine weights of a target pixel value, where [0021] when the first weight, W1 is largest at first when the mean value of the luminance for that region is within a set threshold, further, the second weight become the largest when the pixel luminance value is within a different range, [0076]-[0078] the mean luminance value changes in distribution at different time point are used as the thresholds to determine the weightings, where if the luminance changes within a set range, the weights are adjusted. 
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sun, US 20210044763 A1, teaches a method of capturing dual-spectrum images.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT, whose telephone number is (703) 756-5463. The examiner can normally be reached M-F 8AM-5PM ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.E./ Examiner, Art Unit 2666
/EMILY C TERRELL/ Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Dec 12, 2022
Application Filed
May 14, 2025
Non-Final Rejection — §103
Jul 30, 2025
Response Filed
Oct 02, 2025
Final Rejection — §103
Jan 07, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573117
METHOD AND DEVICE FOR DEEP LEARNING-BASED PATCHWISE RECONSTRUCTION FROM CLINICAL CT SCAN DATA
2y 5m to grant Granted Mar 10, 2026
Patent 12475998
SYSTEMS AND METHODS OF ADAPTIVELY GENERATING FACIAL DEVICE SELECTIONS BASED ON VISUALLY DETERMINED ANATOMICAL DIMENSION DATA
2y 5m to grant Granted Nov 18, 2025
Patent 12450918
AUTOMATIC LANE MARKING EXTRACTION AND CLASSIFICATION FROM LIDAR SCANS
2y 5m to grant Granted Oct 21, 2025
Patent 12437415
METHODS AND SYSTEMS FOR NON-DESTRUCTIVE EVALUATION OF STATOR INSULATION CONDITION
2y 5m to grant Granted Oct 07, 2025
Patent 12406358
METHODS AND SYSTEMS FOR AUTOMATED SATURATION BAND PLACEMENT
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
45%
Grant Probability
31%
With Interview (-13.7%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
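The projection figures above appear to follow directly from the examiner's career data (9 granted out of 20 resolved cases, with a -13.7% interview lift). The sketch below reproduces that arithmetic under the assumption that the interview lift is applied as a simple additive adjustment; the report does not document the exact formula.

```python
# Hedged reproduction of the projection arithmetic shown above.
# The additive interview adjustment is an assumption, not a documented formula.

granted, resolved = 9, 20        # career data shown in this report
interview_lift = -0.137          # -13.7% interview lift shown in this report

grant_probability = granted / resolved               # 0.45 -> "45% Grant Probability"
with_interview = grant_probability + interview_lift  # 0.313 -> "31% With Interview"

print(f"Grant probability: {grant_probability:.0%}")
print(f"With interview:    {with_interview:.0%}")
```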
