Prosecution Insights
Last updated: April 19, 2026
Application No. 18/296,498

METHOD AND ELECTRONIC SYSTEM FOR HIGH DYNAMIC RANGE (HDR) IMAGING

Status: Final Rejection (§103)
Filed: Apr 06, 2023
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: MediaTek Inc.
OA Round: 2 (Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Est. Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (47 granted / 69 resolved; +6.1% vs TC avg), above average
Interview Lift: +35.7% allowance in resolved cases with an interview vs. without
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 98 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 69 resolved cases

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-19 were pending for examination in Application No. 18/296,498, filed April 6, 2023. In the remarks and amendments received on August 21, 2025, claims 1, 3, 5-8, 10, and 15-18 are amended and claims 2, 4, 12, and 14 are canceled. Accordingly, claims 1, 3, 5-11, 13, and 15-19 are currently pending for examination in the application.

Response to Amendment

Applicant's amendments to the claims filed August 21, 2025, have overcome each and every objection and 35 U.S.C. § 112(b) rejection previously set forth in the Non-Final Office Action mailed May 22, 2025. Accordingly, the objections and 35 U.S.C. § 112(b) rejections are withdrawn in response to the remarks and amendments filed. The examiner thanks Applicant for considering the suggested amendments to the disclosure.

Response to Arguments

Applicant's arguments filed August 21, 2025, regarding the rejections of independent claims 1 and 10 have been fully considered but are not persuasive. Claims 1 and 10 have been amended to incorporate subject matter of originally presented claims 2 and 4, and 12 and 14, respectively. Applicant asserts that the cited references of Ng, Peng (CN), and Peng (US) do not disclose and/or reasonably teach amended claims 1 and 10 because the cited references "lack both the same fusion architecture and the logic-switching approach, and there is no technical teaching in any of them that would motivate a person skilled in the art to combine them to arrive at the claimed feature as recited in amended claims 1 and 10" (pgs. 13-14 of Applicant's remarks).

The examiner respectfully disagrees, because the claims do not reflect Applicant's assertion of a "logic-switching approach". The examiner notes that, based on the claim limitations as written, the broadest reasonable interpretation of claims 1 and 10 does not recite switching or selecting between the two claimed fusion conditions as asserted by Applicant, but rather recites them as two distinct paths. Amended claims 1 and 10 merely recite two instances for generating a "final image", such that satisfying one instance means the remaining instance need not be reached. The two instances are: generating the "final image" from "the color information and the detail information in the first image and parts of the detail information in the second image" when the "first image" is "not overexposed", and generating the "final image" by "utilizing all of the detail information in the second image" when the "first image" is "overexposed". Since Peng (CN) teaches generating the "final image" by "utilizing all of the detail information in the second image" when the instance that the "first image" is "overexposed" is satisfied, as detailed in the rejection of claim 1 below, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Ng to incorporate generating the "final image" as claimed when that instance is satisfied, to compensate for the lack of details in overexposed areas of the first image (e.g., a color image) captured using a single exposure condition, as taught by Peng (CN) (see the motivation to combine Ng and Peng (CN) in the rejection of claim 1 below).
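
For orientation only, the sketch below renders the examiner's reading of the two claimed instances as independent branches rather than a logic switch. It is a minimal illustration, not code from the application or any cited reference; the function names, the box-filter detail separator, and the 0.9 mean-luminance overexposure cutoff are all assumptions.

```python
import numpy as np

# Hypothetical rendering of the two claimed fusion instances as two
# distinct paths (per the examiner's reading), not a selector between
# fusion architectures. All names and constants are illustrative.
OVEREXPOSED_MEAN_LUMA = 0.9  # assumed overexposure test on the first image


def box_blur(img: np.ndarray, k: int = 7) -> np.ndarray:
    """Crude box filter used as a stand-in base/detail separator."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    acc = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(k) for dx in range(k))
    return acc / (k * k)


def fuse(rgb: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 in [0, 1]; nir: HxW in [0, 1]. Returns the 'final image'."""
    luma = rgb.mean(axis=-1)              # brightness information (first image)
    nir_detail = nir - box_blur(nir)      # detail information (second image)
    if luma.mean() > OVEREXPOSED_MEAN_LUMA:
        # Instance 2: first image overexposed -> utilize ALL of the
        # detail information in the second image.
        return np.clip(rgb + nir_detail[..., None], 0.0, 1.0)
    # Instance 1: first image not overexposed -> color and detail of the
    # first image plus only PART of the second image's detail.
    partial = 0.5 * nir_detail            # hypothetical partial weighting
    return np.clip(rgb + partial[..., None], 0.0, 1.0)
```

Under this reading, whichever instance's condition holds produces the "final image"; no switching mechanism between the two paths is recited.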
Additionally, since Peng (US) teaches generating the "final image" from "…parts of the detail information in the second image" when the "first image" is "not overexposed", as detailed in the rejection of claim 1 below, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Ng in view of Peng (CN) to incorporate generating the "final image" as claimed when the instance that the "first image" is "not overexposed" is satisfied, to maximize the details of both the first and second images and generate a final image with full details of an imaging scene, as taught by Peng (US) (see the motivation to combine Ng in view of Peng (CN) and Peng (US) in the rejection of claim 1 below). Therefore, it would have been reasonable to combine Ng, Peng (CN), and Peng (US) to disclose the features of claims 1 and 10 as detailed in the current rejection below.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier, as explained in MPEP § 2181, subsection I (note that the list of generic placeholders below is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph):

A. The Claim Limitation Uses the Term "Means" or "Step" or a Generic Placeholder (A Term That Is Simply a Substitute for "Means")

With respect to the first prong of this analysis, a claim element that does not include the term "means" or "step" triggers a rebuttable presumption that 35 U.S.C. 112(f) does not apply. When the claim limitation does not use the term "means," examiners should determine whether the presumption that 35 U.S.C. 112(f) does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term "means"). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Mass. Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886-87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). Note that there is no fixed list of generic placeholders that always result in 35 U.S.C. 112(f) interpretation, and likewise there is no fixed list of words that always avoid 35 U.S.C. 112(f) interpretation. Every case will turn on its own unique set of facts.

Such claim limitation(s) is/are: "a first sensor, configured to output a first image…" in claim 10, implemented on hardware disclosed in para. [0039] (e.g., "RGB sensor"); "a second sensor, configured to output a second image…" in claim 10, implemented on hardware disclosed in para. [0039] (e.g., "NIR sensor"); and "a light source, configured to emit a light within the second spectrum" in claim 11, implemented on hardware disclosed in paras. [0039-0040] (e.g., "NIR light…").

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 8-10, 13, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Ng et al. (Ng; US 2023/0267588 A1) in view of Peng et al. (Peng (CN); CN 114143419 A), and further in view of Peng et al. (Peng (US); US 2021/0217153 A1).

Regarding claim 1, Ng discloses a method for high dynamic range (HDR) imaging, comprising:

receiving a first image from a first sensor capable of sensing a first spectrum (paras. [0053-0054], which recite: [0053] "Image Fusion is to combine information from different image sources into a compact form of image that contains more information than any single source image. In some embodiments, image fusion is based on different sensory modalities of the same camera or two distinct cameras, and the different sensory modalities contain different types of information, including color, brightness, and detail information. For example, color images (RGB) are fused with NIR images, e.g., using deep learning techniques, to incorporate details of the NIR images into the color images while preserving the color and brightness information of the color images. A fused image incorporates more details from a corresponding NIR image and has a similar RGB look to a corresponding color image. Various embodiments of this application can achieve a high dynamic range (HDR) in a radiance domain, optimize amount of details incorporated from the NIR images, prevent a see-through effect, preserve color of the color images, and dehaze the color or fused images. As such, these embodiments can be widely used for different applications including, but not limited to, autonomous driving and visual surveillance applications." [0054] "FIG. 5 is an example framework 500 of fusing an RGB image 502 and an NIR image 504, in accordance with some embodiments. The RGB image 502 and NIR image 504 are captured simultaneously in a scene by a camera or two distinct cameras (specifically, by an NIR image sensor and a visible light image sensor of the same camera or two distinct cameras).", where the "RGB image" is a first image);

receiving a second image from a second sensor capable of sensing a second spectrum, wherein the second spectrum has a higher wavelength range as compared to the first spectrum (paras. [0053-0054]—see citations above—, where the "NIR [near-infrared] image" is a second image);

retrieving a first image feature from the first image and a second image feature from the second image (para. [0053]—see citation above—, where the "color, brightness, and detail information" are first and second image features (e.g., "incorporate details of the NIR images into the color images while preserving the color and brightness information of the color images", where the "details" are at least a second image feature and the color and/or brightness is a first image feature)); and

fusing the first and second images by referencing the first image feature and the second image feature to generate a final image (para. [0053]—see citation above—, which particularly recites: [0053] "…For example, color images (RGB) are fused with NIR images… to incorporate details of the NIR images into the color images while preserving the color and brightness information of the color images…");

wherein the first image feature and the second image feature comprise color information, brightness information, and detail information (para. [0053]—see citation in claim 1 above—, which particularly recites: [0053] "…image fusion is based on different sensory modalities of the same camera or two distinct cameras, and the different sensory modalities contain different types of information, including color, brightness, and detail information. For example, color images (RGB) are fused with NIR images, e.g., using deep learning techniques, to incorporate details of the NIR images into the color images while preserving the color and brightness information of the color images…");

wherein the step of fusing the first image and the second image comprises: referencing the color information and the detail information in the first image and parts of the detail information in the second image to generate the final image (para. [0055], which recites: [0055] "…In an example, a guided image filter is applied to decompose the first RGB image 502′ and/or the first NIR image 504′. A weighted combination 512 of the NIR base portion, RGB base portion, NIR detail portion and RGB detail portion is generated using a set of weights. Each weight is manipulated to control how much of a respective portion is incorporated into the combination. Particularly, a weight corresponding to the NIR base portion is controlled (514) to determine how much of detail information of the first NIR image 514′ is utilized…", where using a "set of weights" to "control how much of a respective portion is incorporated" into the generated final image (i.e., fused image) is referencing the color information (e.g., "RGB base portion") and the detail information (e.g., "RGB detail portion") in the first image and at least parts of the detail information (e.g., weighted "NIR detail portion") in the second image to generate the final image); and

utilizing all of the detail information in the second image to generate the final image (para. [0055]—see citation above—, where utilizing the "NIR detail portion" is utilizing all of the detail information in the second image).

Where Ng does not specifically disclose utilizing all of the detail information in the second image to generate the final image if the brightness information of the first image indicates that the first image is overexposed, Peng (CN) teaches, in the same field of endeavor of generating a final image by fusing first and second images, utilizing all of the detail information in the second image to generate the final image if the brightness information of the first image indicates that the first image is overexposed (description, para. [n0051], which recites: [n0051] "In detail, since the color sensor 32 can only use a single exposure condition to obtain a color image each time, when the camera scene is low light or high contrast, each color image may have high noise, overexposed or underexposed areas (i.e., the defective areas mentioned above). At this time, the processor 38 can use the high light sensitivity of the infrared sensor 34 to select an infrared image with texture details of the defective area from the multiple infrared images previously acquired, which can be used to supplement the texture details of the defective area in the color image.", where utilizing detail of an NIR image for the overexposed areas is utilizing all of the detail information of a second image (e.g., an "infrared image") if the brightness information of the first image (e.g., "color image") indicates that the color image is overexposed).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Ng to incorporate utilizing all of the detail information in the second image to generate the final image if the brightness information of the first image indicates that the first image is overexposed, to compensate for the lack of details in overexposed areas in a first image (e.g., color image) captured using a single exposure condition, as taught by Peng (CN) (description, para. [n0051]—see citation above).
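
Ng's paragraph [0055], quoted above, describes decomposing both inputs into base and detail portions and recombining them with per-portion weights. A minimal sketch of that pattern follows, with a box filter standing in for Ng's guided image filter and arbitrary placeholder weights; it is illustrative only, not Ng's implementation.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 7) -> np.ndarray:
    """Box filter standing in for Ng's guided image filter (an assumption;
    para. [0055] names a guided filter, not this)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    acc = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(k) for dx in range(k))
    return acc / (k * k)

def base_detail(img: np.ndarray, k: int = 7):
    """Split an image into a low-frequency base and a residual detail layer."""
    base = box_blur(img, k)
    return base, img - base

def weighted_combination(rgb_luma: np.ndarray, nir: np.ndarray,
                         w=(1.0, 0.0, 1.0, 0.6)) -> np.ndarray:
    """Per-portion weighted recombination, cf. Ng para. [0055]: each weight
    controls how much of a portion enters the fused result. The weight
    values here are arbitrary placeholders."""
    w_rgb_base, w_nir_base, w_rgb_detail, w_nir_detail = w
    rgb_b, rgb_d = base_detail(rgb_luma)
    nir_b, nir_d = base_detail(nir)
    fused = (w_rgb_base * rgb_b + w_nir_base * nir_b +
             w_rgb_detail * rgb_d + w_nir_detail * nir_d)
    return np.clip(fused, 0.0, 1.0)
```

Setting the NIR detail weight to 1.0 in such a scheme would correspond to the "utilizing all of the detail information in the second image" reading; fractional weights correspond to "parts of the detail information".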
Where Ng in view of Peng (CN) does not specifically disclose referencing… parts of the detail information in the second image to generate the final image if the brightness information of the first image indicates that the first image is not overexposed, Peng (US) teaches, in the same field of endeavor of generating a final image by fusing first and second images, referencing… parts of the detail information in the second image to generate the final image if the brightness information of the first image indicates that the first image is not overexposed (paras. [0040-0042], which recite: [0040] "In Step S510, the processor 38 selects the color image having the SNR difference less than the SNR threshold and having the luminance mean value greater than the luminance threshold and the corresponding IR image t to execute the feature domain transformation, so as to extract partial details of the imaging scene." [0041] "In Step S512, the processor 38 fuses the selected color image and IR image to adjust partial details of the color image according to the guidance of partial details of the IR image, so as to obtain a scene image with full details of the imaging scene. The implementation of the above Steps S506 to S512 is the same or similar to Steps S404 to S410 of the previous embodiment, so the details are not repeated here." [0042] "By the above method, even in the late night or low light source scene, the dual sensor imaging system 30 can capture and select the color image and the IR image with appropriate exposure and noise within the allowable range for fusion, so as to maximize the details of the captured image and improve the image quality.", where utilizing "partial details of the IR image" for at least a "color image" selected to have the "appropriate exposure" is referencing only parts (i.e., "partial") of the detail information of the second image (e.g., "IR image") if the brightness information (e.g., exposure or "luminance threshold") of the first image (e.g., "color image") indicates that the first image is not overexposed (e.g., "appropriate exposure")).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Ng in view of Peng (CN) to incorporate referencing at least parts of the detail information in the second image to generate the final image if the brightness information of the first image indicates that the first image is not overexposed, to maximize the details of both the first and second images and generate a final image with full details of an imaging scene, as taught by Peng (US) (paras. [0040-0042]—see citations above).

Regarding claim 3, Ng, as modified by Peng (CN) and Peng (US), discloses the method as claimed in claim 1, wherein Ng further discloses the step of fusing the first image and the second image further comprises: referencing the color information of the first image and referencing the detail information in the second image to generate the final image (para. [0053]—see citation in claim 1 above—, which particularly recites: [0053] "…For example, color images (RGB) are fused with NIR images… to incorporate details of the NIR images into the color images while preserving the color and brightness information of the color images…", where "color… information of the color images" is color information of the first image and "details of the NIR images" is detail information in the second image).
Regarding claim 8, Ng, as modified by Peng (CN) and Peng (US), discloses the method as claimed in claim 1, wherein Peng (US) further teaches the detail information comprises profiles, textures and edge sharpness (para. [0029], which recites: [0029] "In Step S410, the processor 38 fuses the selected color image and IR image to adjust partial details of the color image according to a guidance of partial details of the IR image, so as to obtain a scene image with full details of the imaging scene. In some embodiment, when the processor 38 fuses the color image and the IR image, the processor 38, for example, uses the guidance of the texture details and/or edge details of the IR image to enhance the color details in the color image. Finally, the scene image with full color, texture, and edge details of the imaging scene is obtained.").

Regarding claim 9, Ng, as modified by Peng (CN) and Peng (US), discloses the method as claimed in claim 1, wherein Ng further discloses the method further comprising: performing image alignment on the first image and the second image before retrieving the first image feature and the second image feature (para. [0028], which recites: [0028] "The present application is directed to combining information of a plurality of images by different mechanisms and applying additional pre-processing and post-processing to improve an image quality of a resulting fused image. …Prior to any fusion process, the RGB and NIR images can be aligned locally and iteratively using an image registration operation…", where "image registration" or alignment being an "additional pre-processing" prior to any fusion process is performing image alignment prior to retrieving the first and second image features).

Regarding claim 10, the claim recites similar limitations to claim 1 but in the form of an electronic system, comprising: …a processor, configured to perform the method of claim 1. Ng discloses said processor (para. [0025], which recites: [0025] "…an image fusion method is implemented at a computer system (e.g., a server, an electronic device having a camera, or both of them) having one or more processors and memory…"). Therefore, claim 10 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 13, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).

Regarding claim 18, the claim recites similar limitations to claim 8 and is rejected for similar rationale and reasoning (see the analysis for claim 8 above).

Regarding claim 19, the claim recites similar limitations to claim 9 and is rejected for similar rationale and reasoning (see the analysis for claim 9 above).

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ng, as modified by Peng (CN) and Peng (US), as applied to claims 1 and 10 above, and further in view of Alsheuski (WO 2023/161519 A1), and further in view of Wan et al. (Wan; CN 106982327 B).

Regarding claim 5, Ng, as modified by Peng (CN) and Peng (US), discloses the method as claimed in claim 1, wherein the step of fusing the first image and the second image comprises: changing weightings of the detail information in the first image and the detail information in the second image (para. [0055], which recites: [0055] "…In an example, a guided image filter is applied to decompose the first RGB image 502′ and/or the first NIR image 504′. A weighted combination 512 of the NIR base portion, RGB base portion, NIR detail portion and RGB detail portion is generated using a set of weights. Each weight is manipulated to control how much of a respective portion is incorporated into the combination. Particularly, a weight corresponding to the NIR base portion is controlled (514) to determine how much of detail information of the first NIR image 514′ is utilized. The weighted combination 512 in the radiance domain is converted (516) to a first fused image 518 in an image domain (also called "pixel domain")…", where "manipulat[ing]" each weight in the "set of weights" for at least an "RGB detail portion" and "NIR detail portion" is changing weightings of the detail information of the first image (e.g., "RGB detail portion") and the detail information in the second image (e.g., "NIR detail portion")).

Where Ng, as modified by Peng (CN) and Peng (US), does not specifically disclose comparing the brightness information of the first image and changing weightings of the detail information based on the comparing of the brightness information of the first image, Alsheuski teaches, in the same field of endeavor of weighting detail information of at least a first image and a second image, comparing the brightness information of the first image and changing weightings of the detail information based on that comparison (lines 17-24 of pg. 7 and lines 15-27 of pg. 21 to lines 1-2 of pg. 22, which recite: [lines 17-24 of pg. 7] "Merging the first image, the second image… may include identifying preferred or desired parameters or quantities of each image, generating a corresponding weight map for each image, and combining the first image, the second image… using weighted blending based on the weight maps. Preferably, regions of low detail and/or low contrast are allocated a low weight in the respective weight map. Alternatively, or additionally, regions of zero or near-zero brightness and regions of maximum or near-maximum brightness are allocated a low weight in the respective weight map." [lines 15-27 of pg. 21 to lines 1-2 of pg. 22] "One way of achieving this outcome is to analyse and generate a corresponding weight map for each of the first and second night vision images. The weight map allocates a weighting to each region (or indeed each pixel) of the image according to predetermined criteria. For example, areas of very high brightness and of very low brightness (which might indicate a lack of detail) can be given a low weighting, thus prioritising image data in the mid-ranges which are likely to contain more detail. This, for example, would place a low priority on "bleached" areas of an image and likewise very dark areas of an image, with the expectation that greater detail in the respective areas will be obtained from other images with different levels of illumination. In another example, areas containing or bounded by high contrast (which might indicate the presence of detail or of object boundaries) might be given a high weighting, whereas areas of low contrast (which might also identify over-exposed and/or underexposed areas of an image) might be given a low weighting. When the respective night vision images are then merged, for example using weighted blending, the weight maps determine the extent to which each region (or indeed each pixel) contributes to the merged image.", where giving at least "over-exposed… areas" of an image a "low weighting" and "areas containing or bounded by high contrast (which might indicate the presence of detail…)" a "high weighting" is changing weightings of the detail information of at least a first image and a second image (e.g., "first and second night vision images") based on comparing the brightness information of the first image through a difference of the brightness information of the first image (e.g., a difference in "levels of illumination" in areas of the image, such as "over-exposed" or "bleached" areas versus areas that are not "over-exposed", such as "areas containing or bounded by high contrast")).

Since Alsheuski also discloses generating a final image by fusing a first image and a second image of at least a visible image and an infrared image (lines 21-23 of pg. 8 and lines 4-7 of pg. 9, which recite: [lines 21-23 of pg. 8] "…The one or more additional imaging modules may be selected from the group comprising… an infrared camera or infrared scope, and a visible camera or visible scope…" [lines 4-7 of pg. 9] "…the image processing means may be configured to generate the merged night vision image from the first and second night vision image, any additional night vision images, and one or more images obtained by the additional imaging module…"), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Ng, as modified by Peng (CN) and Peng (US), to incorporate comparing the brightness information of the first image and changing weightings of the detail information of the first and second images based on that comparison, through a difference of the brightness information of the first image, to improve the final image generated by fusing the first and second images by compensating for areas in at least the first image lacking detail due to brightness differences such as overexposure, as taught by Alsheuski above.

Where Ng, as modified by Peng (CN), Peng (US), and Alsheuski, does not specifically disclose comparing the brightness information of the first image with a threshold value, and weightings of the detail information based on that comparison through a difference between the brightness information of the first image and the threshold value, Wan teaches, in the same field of endeavor of fusing a first image and a second image, comparing the brightness information of the first image with a threshold value (description, paras. [0025-0027] and [0037], which recite: [0025] "An acquisition module, configured to acquire a visible light image and a near infrared light image of a photographed object;" [0026] "A first determination module is configured to determine the severity of detail loss of each pixel of the visible light image according to a position of each pixel of the visible light image in an HSV color space of the visible light image;" [0027] "A second determination module is configured to determine the position of the pixel where the detail loss severity exceeds a threshold as a first area, where the first area is an area in the visible light image that needs to be enhanced in detail;" [0037] "A brightness acquisition module is configured to acquire a detail loss severity Wv of a brightness V channel of each pixel in the HSV color space;", where determining whether "each pixel" in the "visible light image" has a "detail loss severity exceed[ing] a threshold", including determining a "brightness V channel of each pixel" in the "visible light image", is comparing the brightness information of the first image with a threshold value); and weightings of the detail information based on that comparison through a difference between the brightness information of the first image and the threshold value (description, paras. [0028-0030] and [0034], which recite: [0028] "A wavelet decomposition module is configured to perform wavelet decomposition on the first region to obtain a first sub-waveband, and perform wavelet decomposition on a second region in the near-infrared light image to obtain a second sub-waveband, wherein the second region and the first region are the same part of the photographed object;" [0029] "a sub-band processing module, configured to merge the second sub-band into the first sub-band to obtain a merged first sub-band;" [0030] "The inverse wavelet transform module is configured to perform an inverse wavelet transform on the fused first sub-waveband to generate a detail-enhanced visible light image." [0034] "The second detail sub-band is merged into the first detail sub-band according to a weighted average algorithm.", where merging details of a "near-infrared light image" (e.g., a "second detail sub-band") into details of the first image (e.g., "first detail sub-band") based on whether the "detail loss severity exceeds a threshold" is acquiring weightings of detail information (e.g., "detail sub-band[s]") based on the difference between the brightness information of the first image (e.g., the "brightness V channel of each pixel" in the "visible light image", as recited in para. [0037] above) and a threshold value (e.g., the "threshold" for the "detail loss severity" determination, as recited in para. [0027] above)).

Since each of Alsheuski and Wan discloses that overexposed areas in an image comprise a lack and/or loss of detail (lines 15-27 of pg. 21 to lines 1-2 of pg. 22 of Alsheuski—see citation above—; and description, para. [0004], of Wan, which recites: [0004] "…If the dynamic range of light intensity in the scene exceeds the dynamic range of light intensity that the camera can capture, the camera can only capture a portion of the total light intensity, resulting in partial data loss in the imaging area where the light intensity exceeds the dynamic range of the camera. For example, camera imaging may produce overexposed areas or underexposed areas, where some image details are lost."), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Ng, as modified by Peng (CN), Peng (US), and Alsheuski, to incorporate a threshold value for comparing the brightness information of the first image, and changing the weightings of the detail information of the first and second images based on the difference between the brightness information of the first image and the threshold value, to determine areas in the first image that are overexposed and in need of more detail compensation.

Regarding claim 15, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).

Claims 6-7 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ng, as modified by Peng (CN), Peng (US), Alsheuski, and Wan, as applied to claims 5 and 15 above, and further in view of Awad et al. (Awad; "Adaptive Near-Infrared and Visible Fusion for Fast Image Enhancement," 2020).

Regarding claim 6, Ng, as modified by Peng (CN), Peng (US), Alsheuski, and Wan, discloses the method as claimed in claim 5, wherein Alsheuski further teaches the greater the difference between the brightness information of the first image and the threshold value, the higher the weighting of the detail information in the first image (lines 17-24 of pg. 7, lines 15-27 of pg. 21 to lines 1-2 of pg. 22, lines 21-23 of pg. 8, and lines 4-7 of pg. 9—see citations in claim 5 above—, where giving at least "over-exposed… areas" of an image a "low weighting" is giving the detail information of at least a first image (e.g., an image from "a visible camera") a higher weighting when the difference between the brightness information of the first image and a threshold value is greater (e.g., relative to an "over-exposed" area in the image)).

Where Ng, as modified by Peng (CN), Peng (US), Alsheuski, and Wan, does not specifically disclose the less the difference between the brightness information of the first image and the threshold value, the higher the weighting of the detail information in the second image, Awad teaches this feature in the same field of endeavor of fusing a first image and a second image (2nd para. in col. 2 of pg. 411 and 1st para. in col. 1 of pg. 412, which recite: [2nd para. in col. 2 of pg. 411] "…The fusion map F guides the fusion by determining (a) the regions with spatial details which are only apparent in I-NIR and missed in IVS… To estimate F, we first extract the luminance plane Y from IVS, then F is defined as the relative difference in local contrast between I-NIR and Y." [1st para. in col. 1 of pg. 412] "…Both local image gradient and local image contrast are key values that assess the spatial details of an image. The magnitude of image gradients has low value for blurred images and the local image contrast has low value for smooth regions. Thus, by employing LC [Local Contrast] in F as in (3), the fusion map F has large values for the regions that have better spatial details in I-NIR compared to IVS, and low values (or zeros) for other regions where the spatial details of IVS is better. Hence, F will serve as our adaptive selector of the amount of fusion (injection) of the spatial details from I-NIR to produce JVS [the fused image]…", where giving less weight to details in the NIR image (I-NIR) than to details in the visible image (IVS) based on a "relative difference in local contrast" is giving a higher weighting to detail information in the second image when there is less of a difference between the brightness information of the first image and a threshold value (e.g., areas in the visible image with "local contrast" that comprise fewer "spatial details")).

Since Alsheuski discloses that overexposed areas in an image comprise low contrast (lines 15-27 of pg. 21 to lines 1-2 of pg. 22 of Alsheuski—see citation in claim 5 above), and both Alsheuski and Wan disclose that overexposed areas in images comprise a lack and/or loss of detail (lines 15-27 of pg. 21 to lines 1-2 of pg. 22 of Alsheuski and para. [0004] of Wan—see citations in the motivation to combine in claim 5 above), a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that changing the weightings of the detail information of the first and second images, as disclosed by Ng in claim 5 above, in the system of Ng, as modified by Peng (CN), Peng (US), Alsheuski, and Wan, can further incorporate giving the detail information in the second image a higher weighting when there is less of a difference between the brightness information of the first image and the threshold value, to improve the final merged image by ensuring that poor spatial details from the second image (e.g., an NIR image) are not fused with the first image (e.g., a visible image) and that any spatial details in the first image are not attenuated during the fusion process (Awad; 4th para. in col. 1 of pg. 412, which recites: "…Additionally, for the regions where the captured spatial details of IVS are attenuated compared to their counterparts in I-NIR, F will be large to boost the injected high frequency contents from I-NIR. On the contrary, the other regions where the spatial details of IVS are better than of the I-NIR, F → 0 and the second term in Eq. (5) vanishes or has very little effect…").

Regarding claim 7, Ng, as modified by Peng (CN), Peng (US), Alsheuski, Wan, and Awad, discloses the method as claimed in claim 6, wherein Ng further discloses weightings of the color information in the first image and the color information in the second image as being referenced in the fusing step are changed based on the difference between the brightness information of the first image and the threshold value (para. [0055]—see citation in claim 5 above—, where para. [0074] further recites: [0074] "Information from multiple image sources can be combined into a compact form of image that contains more information than any single source image. Image fusion from different sensory modalities (e.g., visible light and near-infrared image sensors) is challenging as the images that are fused contain different information (e.g., colors, brightness, and details). For example, objects with strong infrared emission (e.g., vegetation, red road barrier) appear to be brighter in an NIR image than in an RGB image. After the RGB and NIR images are fused, color of a resulting fused image tends to deviate from the original color of the RGB image. In some embodiments, a proper color correction algorithm is applied [to] bring the color of the resulting fused image to a natural look. As explained above with reference to FIG. 6, pixel values of the RGB and NIR images are different, and a radiance value of a pixel of the same object point in the scene may be adjusted to the same dynamic range. The pixel values in an image domain are transformed to radiance values in a radiance domain, and the radiance values that are normalized into the same dynamic range are combined (e.g., averaged). In an example, the NIR image 604 is converted into a grayscale image and fused with the channel L* information of the RGB image 602, and the fused radiance image 620 is combined with color channel information (i.e., channel a* and b* information) of the RGB image 602 to recover a fused pixel image 624 with color.", where "adjust[ing]" the radiance value of pixels to adjust the color of the final "fused pixel image" is changing the weightings of the color information in the first and second images based on a difference between the brightness information (e.g., "radiance values") and the threshold value (e.g., the "original color value" of the RGB image)).

Regarding claim 16, the claim recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above).

Regarding claim 17, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ng, as modified by Peng (CN) and Peng (US), as applied to claim 10 above, and further in view of Chen et al. (Chen; US 2021/0314501 A1).

Regarding claim 11, Ng, as modified by Peng (CN) and Peng (US), discloses the electronic system as claimed in claim 10, wherein Chen teaches, in the same field of endeavor of obtaining a first and a second image, the electronic system further comprising: a light source, configured to emit a light within the second spectrum (para. [0048], which recites: [0048] "In descriptions of the light filling device 40, the light filling device may include an infrared light filling lamp and/or a white light filling lamp. In practical applications, the light filling device is not specially limited in an embodiment of this application, and the light filling device may [be] a full-spectrum light filling lamp including the infrared light filling lamp and the white light filling lamp, and of course, may also only be the infrared light filling lamp, which is not limited herein. The processor provided in an embodiment of this application can perform the logical light splitting on the original image to obtain the visible light image and the infrared image. Therefore, it can be ensured that the infrared light filling does not affect the visible light, and the white light filling does not affect the infrared image. Based on this, in a low-light environment, the brightness and imaging effect of the infrared image can be improved through the infrared light filling, and the brightness and imaging effect of the visible light image can be improved through the white light filling. In this manner, it can be ensured that the better quality infrared image and the better quality visible light image can be obtained even in the low-light environment, and thus the final fusion image effect can be improved.", where the "infrared light filling lamp" is a light source emitting light within a second spectrum (e.g., infrared light)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Ng, as modified by Peng (CN) and Peng (US), to incorporate a light source emitting light within the second spectrum to improve the quality of the second image (e.g., NIR image) output from a second sensor in low-light environments, as taught by Chen (para. [0048]—see citation above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Vanmali et al. ("Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility," 2017) discloses in the abstract and section 3.1 on pg. 1066:

[abstract] "Image visibility is affected by the presence of haze, fog, smoke, aerosol, etc. Image dehazing using either single visible image or visible and near-infrared (NIR) image pair is often considered as a solution to improve the visual quality of such scenes. In this paper, we address this problem from a visible–NIR image fusion perspective, instead of the conventional haze imaging model. The proposed algorithm uses a Laplacian–Gaussian pyramid based multi-resolution fusion process, guided by weight maps generated using local entropy, local contrast and visibility as metrics that control the fusion result. The proposed algorithm is free from any human intervention, and produces results that outperform the existing image-dehazing algorithms both visually as well as quantitatively. The algorithm proves to be efficient not only for the outdoor scenes with or without haze, but also for the indoor scenes in improving scene visibility."

[3.1 Weight map generation] "The weight maps play a critical role in the outcome of the final fused result. The weight maps generated should have a non-negative value and should lie in the range of [0, 1]. The weight should sum up to 1 at each pixel. One needs to keep in mind the characteristics of the visible and NIR images while selecting measures to generate the weight maps. We use the following measures to generate the weight maps."

Morales Correa (US 2023/0021812 A1) discloses in the abstract and para. [0034]:

[abstract] "An endoscopic camera device having an optical assembly; a first image sensor in optical communication with the optical assembly, the first image sensor receiving a first exposure and transmitting a first low dynamic range image; a second image sensor in optical communication with the optical assembly, the second image sensor receiving a second exposure and transmitting a second low dynamic range image, the second exposure being higher than the first exposure; …"
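
To visualize the weighting behavior at issue in claims 5 and 6 above (detail weights driven by how far the first image's brightness sits from a threshold), here is a hypothetical sketch; the threshold value and the linear ramp are assumptions, not any cited reference's algorithm.

```python
import numpy as np

def detail_weights(luma: np.ndarray, threshold: float = 0.8):
    """Per-pixel detail weights keyed to |brightness - threshold|, echoing
    the claim 5/6 mapping: the greater the difference, the higher the
    weighting of the first image's own detail; the smaller the difference
    (brightness near the overexposure threshold), the higher the weighting
    of the second (NIR) image's detail. Constants are illustrative."""
    diff = np.abs(luma - threshold)
    w_first = np.clip(diff / max(threshold, 1e-6), 0.0, 1.0)  # grows with the difference
    w_second = 1.0 - w_first                                  # grows as the difference shrinks
    return w_first, w_second
```

The returned maps would then scale the corresponding detail portions in a weighted combination of the kind Ng's paragraph [0055] describes.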
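
Likewise, the Awad-style fusion map cited for claim 6 can be sketched as follows: F is large where the NIR image has better local contrast than the visible luminance Y and near zero elsewhere, gating how much NIR high-frequency content is injected. Local standard deviation stands in here as the contrast measure; this is one reading of the quoted passage, not the paper's exact formulation.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 9) -> np.ndarray:
    """Box-filter local mean (helper for the contrast estimate below)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(k) for dx in range(k)) / (k * k)

def local_contrast(img: np.ndarray, k: int = 9) -> np.ndarray:
    """Local standard deviation via E[x^2] - E[x]^2, a simple stand-in
    for the paper's local-contrast measure."""
    mean = box_blur(img, k)
    return np.sqrt(np.maximum(box_blur(img * img, k) - mean * mean, 0.0))

def fuse_awad_style(y_vis: np.ndarray, nir: np.ndarray,
                    k: int = 9, eps: float = 1e-6) -> np.ndarray:
    """Inject NIR high frequencies only where the map F says the NIR image
    carries better spatial detail than the visible luminance Y."""
    f = np.clip((local_contrast(nir, k) - local_contrast(y_vis, k))
                / (local_contrast(y_vis, k) + eps), 0.0, 1.0)
    return np.clip(y_vis + f * (nir - box_blur(nir, k)), 0.0, 1.0)
```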

Prosecution Timeline

Apr 06, 2023: Application Filed
May 16, 2025: Non-Final Rejection (§103)
Aug 21, 2025: Response Filed
Nov 05, 2025: Final Rejection (§103), current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169: ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586219: Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579638: IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12562063: METHOD FOR DETECTING ROAD USERS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561805: METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 99% (+35.7%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
