DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1. This action is in response to the amendment filed on 11/12/2025. Claims 1 and 6 have been amended. Claims 1-12 remain pending and stand rejected.
Response to Arguments
2. Applicant’s arguments filed on 11/12/2025 with respect to claim 1, and similarly claim 6, regarding the rejection under 35 U.S.C. 103, namely that the prior art does not teach the limitation(s) “applying a plurality of gain values that is independent of the weight map and relevant to region properties of the plurality of regions of interest to the input image for respectively generating a plurality of duplicated images,” have been fully considered but are moot in view of the new grounds of rejection. Claim 1, and similarly claim 6, are now rejected over Saheli in view of Kameda and further in view of Xu.
3. Regarding the arguments with respect to claims 2-5 and 7-12: these claims depend from independent claims 1 and 6, respectively, and Applicant presents no arguments directed to them beyond those made for independent claims 1 and 6. The mapping of their limitations to the cited combination has previously been established and explained.
Claim Rejections - 35 USC § 103
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Saheli et al. (US-2024/0259693-A1, hereinafter "Saheli") in view of Kameda et al. (US-2024/0380996-A1, hereinafter "Kameda"), and further in view of Xu (CN-108259774-A). (Examiner’s note: Citations to Xu use the original CN-108259774-A document locations.)
6. As per claim 1, Saheli discloses: An image processing method of increasing image quality, comprising: (Saheli, page 1, ¶ [0006], “It would be desirable that such cameras are capable of capturing the scenery more appropriately and particularly to compensate for the strong variations of the light conditions so as to improve the image quality.”)
setting a plurality of regions of interest within an input image; (Saheli, Abstract, “The imaging data is grouped in one or more regions.”)
generating a weight map in accordance with the plurality of regions of interest identified from the input image; (Saheli, Abstract, “The imaging data is grouped in one or more regions. Each region is associated with a weighting coefficient.”)
[[applying a plurality of gain values that is independent of the weight map and relevant to region properties of the plurality of regions of interest to the input image for respectively generating a plurality of duplicated images; and]]
[[utilizing the weight map to synthesize the plurality of duplicated images for acquiring an output image.]]
7. Saheli is not relied on for the below claim language. However, Kameda discloses: applying a plurality of gain values that is independent of the weight map and relevant to region properties of the plurality of regions of interest to the input image for respectively generating a plurality of duplicated images; and (Kameda, Abstract, “An image capturing apparatus includes a pixel portion in which a plurality of pixels are arranged, first acquisition unit for amplifying first image signals obtained by exposing the pixel portion, with multiple different gains, and acquiring multiple images respectively amplified with the multiple different gains ...” and [0033], “A pixel area (pixel portion) 208 is configured such that a plurality of unit pixels 200 are arranged in a matrix. For ease of description, the present embodiment shows a configuration in which n pixels are arranged in the horizontal direction and 4 pixels are arranged in the vertical direction, but typically, the matrix has a configuration in which multiple pixels are arranged in both the horizontal and vertical directions.” and [0024], “For example, there is a method in which image signals with a high amount of gain are used for an image portion having a predetermined signal level or less, and image signals with a low amount of gain are used for an image portion (bright and white-out image) having a signal level exceeding the predetermined level, and these image signals are combined.”)
8. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the image processing method of Saheli to include the disclosure of applying a plurality of gain values that is independent of the weight map and relevant to region properties of the plurality of regions of interest to the input image for respectively generating a plurality of duplicated images, of Kameda. The motivation for this modification could have been to produce a plurality of gain images, based on the input image, to fully represent the wide range of gains/exposures. The plurality of gain images can then be filtered to determine the best gain value for a particular region and can ultimately be combined to create a higher-dynamic-range version of the image. This could help adjust for both bright and dark lighting elements in an image.
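(Examiner’s note, for illustration only and not drawn from the cited references: the following Python sketch shows one way a plurality of gain values might be applied to a single input image to produce duplicated images. The function name, the particular gain values, and the assumption of a floating-point image normalized to [0, 1] are the examiner’s hypothetical choices.)

    import numpy as np

    def apply_gains(image, gains):
        # Apply each gain to every pixel of the input image, producing
        # one duplicated image per gain; clip to the valid [0, 1] range.
        return [np.clip(image * g, 0.0, 1.0) for g in gains]

    # Example: a low-gain copy for bright regions and a high-gain copy
    # for dark regions of a normalized floating-point image.
    image = np.random.rand(480, 640).astype(np.float32)
    duplicates = apply_gains(image, gains=[0.5, 1.0, 2.0])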
9. Saheli in view of Kameda is not relied on for the below claim language. However, Xu discloses: utilizing the weight map to synthesize the plurality of duplicated images for acquiring an output image. (Xu, page 4, [0010], “According to the image synthesis method described above, it involves acquiring two exposure images of the target object with different exposure times, analyzing the pixel data at the same position in the two exposure images to obtain the image fusion weight value, and using this weight value to fuse the two exposure images to obtain a synthesized image. This synthesized image is composed of images with different exposure times. Pixels at the same position can capture more image details, thus improving the dynamic range of the synthesized image.”)
10. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the image processing method of Saheli in view of Kameda to include the disclosure of utilizing the weight map to synthesize the plurality of duplicated images for acquiring an output image, of Xu. The motivation for this modification could have been to generate a new, final image based on the plurality of duplicated gain images and the weight map so as to specifically target areas that may require more or less gain. This would create a final image in which the proper gain/exposure for each region is chosen based on the weight map. Thus, by choosing the best image regions, this could help not only increase the final image’s dynamic range but also ensure that all regions of interest in the image can be clearly seen.
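(Examiner’s note, for illustration only: the following sketch generalizes Xu’s two-image weighted fusion to N duplicated images; the per-pixel normalization of the weight maps is the examiner’s hypothetical assumption, not a limitation of Xu.)

    import numpy as np

    def fuse_with_weights(duplicates, weight_maps):
        # duplicates: list of N images of shape (H, W); weight_maps:
        # list of N per-pixel weight maps of the same shape. Normalize
        # so each pixel's weights sum to one, then blend.
        w = np.stack(weight_maps)                  # shape (N, H, W)
        w = w / np.maximum(w.sum(axis=0), 1e-8)    # per-pixel normalization
        return (np.stack(duplicates) * w).sum(axis=0)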
11. As per claim 2, Saheli in view of Kameda, and further in view of Xu discloses: The image processing method of claim 1, further comprising:
setting a plurality of weighting values of the weight map respectively corresponding to the plurality of regions of interest in accordance with a number and distribution of the plurality of regions of interest. (Saheli, Abstract, “The imaging data is grouped in one or more regions. Each region is associated with a weighting coefficient.”)
12. As per claim 3, Saheli in view of Kameda, and further in view of Xu discloses: The image processing method of claim 1, wherein the weight map is generated based on the input image instead of the plurality of duplicated images. (Saheli, Abstract, “The method includes modifying one or more of the weighting coefficients based on light intensity values obtained from the imaging data and being of the associated regions.”)
13. As per claim 4, Saheli in view of Kameda, and further in view of Xu discloses: The image processing method of claim 1, further comprising:
utilizing a weight smooth function to calibrate difference in a plurality of weighting values of the weight map respectively corresponding to the plurality of regions of interest. (Saheli, page 5, ¶ [0080], “The weighting coefficients may be obtained by at least one, at least two, or all three of the following configurations: ... a previous merged weight configuration 7, e.g., for some temporal smoothing the results of the previous frame, or for the inclusion of other history information.”)
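(Examiner’s note, for illustration only: one hypothetical form of such a weight smooth function combines a spatial blur with a temporal blend against the previous frame’s map, the latter in the spirit of Saheli’s “previous merged weight configuration” at ¶ [0080]. The choice of a Gaussian blur and the parameter values are the examiner’s assumptions.)

    from scipy.ndimage import gaussian_filter

    def smooth_weight_map(weight_map, prev_weight_map=None,
                          sigma=5.0, alpha=0.3):
        # Spatially blur the weight map to soften hard transitions in
        # weighting values between adjacent regions of interest.
        smoothed = gaussian_filter(weight_map, sigma=sigma)
        # Optionally blend with the previous frame's map for temporal
        # smoothing of the weighting values.
        if prev_weight_map is not None:
            smoothed = alpha * smoothed + (1.0 - alpha) * prev_weight_map
        return smoothed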
14. As per claim 5, Saheli in view of Kameda, and further in view of Xu discloses: The image processing method of claim 1, further comprising:
applying one of the plurality of gain values to all pixels of the input image for generating one corresponding duplicated image. (Kameda, Abstract, “An image capturing apparatus includes a pixel portion in which a plurality of pixels are arranged, first acquisition unit for amplifying first image signals obtained by exposing the pixel portion, with multiple different gains, and acquiring multiple images respectively amplified with the multiple different gains ...” and [0033], “A pixel area (pixel portion) 208 is configured such that a plurality of unit pixels 200 are arranged in a matrix. For ease of description, the present embodiment shows a configuration in which n pixels are arranged in the horizontal direction and 4 pixels are arranged in the vertical direction, but typically, the matrix has a configuration in which multiple pixels are arranged in both the horizontal and vertical directions.” and [0091], “In the HDR shooting in the first embodiment, noise reduction processing is performed on each of the exposed images having different gains as described above, and the exposed images subjected to the noise reduction processing are combined, thereby obtaining a noise-reduced HDR image.”; Examiner’s note: A pixel matrix configuration could encompass an entire image. Thus, the image would contain all pixels.)
15. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the image processing method of claim 1 of Saheli in view of Xu to include the disclosure of applying one of the plurality of gain values to all pixels of the input image for generating one corresponding duplicated image, of Kameda. The motivation for this modification could have been to produce gain images that each have a consistent gain value. By generating a plurality of these images, it would be possible to determine which gain image is best for a given region of the input image.
16. Claim 6 is similar in scope to claim 1 except for the following additional limitations, which Saheli in view of Kameda, and further in view of Xu, discloses: An image processing device of increasing image quality, comprising:
an image receiver adapted to receive an input image; and (Saheli, Abstract, “A computer-implemented method for controlling the exposure time of an imaging device includes obtaining imaging data from the imaging device.”)
an operation module electrically connected with the image receiver, the operation module … (Saheli, page 1-2, ¶ [0015], “The method according to the present disclosure can be employed to any kind of imaging device. The imaging device may be a camera, such as an in-cabin camera of a vehicle. For instance, it could be an NIR or a color (RGB)-IR cabin camera. It is appreciated that the method according to the present disclosure allows to adapt an auto-exposure control of such cabin cameras.”)
Claim 6 is also rejected under the same rationale as claim 1, described above. The motivation for this modification is the same as for claim 1.
17. As per claim 7, Saheli in view of Kameda, and further in view of Xu discloses: The image processing device of claim 6, wherein the plurality of regions of interest are a high intensity area and a low intensity area of the input image, or are a foreground area and a background area of the input image. (Saheli, Abstract, “The imaging data is grouped in one or more regions. Each region is associated with a weighting coefficient. The method includes modifying one or more of the weighting coefficients based on light intensity values obtained from the imaging data and being of the associated regions. The method includes determining, based on the obtained imaging data and the weighting coefficients, a light intensity value for at least a part of the imaging data.” and page 1-2, ¶ [0012], “The term “light intensity” (also used as “intensity” herein for brevity) may be the power per unit area and can be referred to as a physical quantity. ... In one example, the intensity value may be defined in digital numbers of pixels.” and page 2, ¶ [0022], “As an example, in case elements with rather high intensities are weighted with a value of two, while elements with rather low intensities are weighted with a value of one (e.g. remain the same) …” and page 1, ¶ [0003], “One example is an in-cabin sensing camera for monitoring the driver and/or cabin state.”)
18. As per claim 8, Saheli in view of Kameda, and further in view of Xu discloses: The image processing device of claim 6, wherein the operation module transforms the input image simultaneously into the weight map (Saheli, Abstract, “The imaging data is grouped in one or more regions. Each region is associated with a weighting coefficient. The method includes modifying one or more of the weighting coefficients based on light intensity values obtained from the imaging data and being of the associated regions. The method includes determining, based on the obtained imaging data and the weighting coefficients, a light intensity value for at least a part of the imaging data.” and Saheli, page 5, ¶ [0080], “The weighting coefficients may be obtained by at least one, at least two, or all three of the following configurations: A dynamic weight configuration 5, e.g., using inputs from high-level algorithms such as face detection or body detection, a static configuration 6, e.g., fixed settings for a given cabin, and a previous merged weight configuration 7, e.g., for some temporal smoothing the results of the previous frame, or for the inclusion of other history information.”) and the plurality of duplicated images respectively by the plurality of regions of interest and the plurality of gain values. (Kameda, Abstract, “An image capturing apparatus includes a pixel portion in which a plurality of pixels are arranged, first acquisition unit for amplifying first image signals obtained by exposing the pixel portion, with multiple different gains, and acquiring multiple images respectively amplified with the multiple different gains ...” and Saheli, Abstract, “The imaging data is grouped in one or more regions.”)
19. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the image processing device of claim 6 of Saheli in view of Xu to include the disclosure of transforming the input image simultaneously into the weight map and the plurality of duplicated images, respectively by the plurality of regions of interest and the plurality of gain values, of Kameda. The motivation for this modification could have been to generate new gain versions of the input image, along with a weight map that details regional properties of the image. Having these transformed versions of the input image provides an opportunity for image processing, such as determining which image regions have proper gain values. The results can ultimately be combined into a final image with increased overall dynamic range.
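(Examiner’s note, for illustration only: because the weight map and the gain duplicates each depend only on the input image and its regions of interest, they can be computed independently and in parallel, consistent with the “simultaneously” language. The rectangular ROI representation and the particular weighting values below are the examiner’s hypothetical choices.)

    import numpy as np

    def weight_map_from_rois(shape, rois, roi_weights, base=1.0):
        # Each ROI is a rectangle (y0, y1, x0, x1) assigned its own
        # weighting value; pixels outside every ROI keep the base weight.
        wmap = np.full(shape, base, dtype=np.float32)
        for (y0, y1, x0, x1), w in zip(rois, roi_weights):
            wmap[y0:y1, x0:x1] = w
        return wmap

    # Both outputs are derived directly from the same input image.
    image = np.random.rand(480, 640).astype(np.float32)
    wmap = weight_map_from_rois(image.shape, [(100, 200, 50, 150)], [2.0])
    duplicates = [np.clip(image * g, 0.0, 1.0) for g in (0.5, 2.0)]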
20. Claim 9, which is similar in scope to claims 2 and 6, is thus rejected under the same rationale as described above.
21. Claim 10, which is similar in scope to claims 3 and 6, is thus rejected under the same rationale as described above.
22. Claim 11, which is similar in scope to claims 4 and 6, is thus rejected under the same rationale as described above.
23. Claim 12, which is similar in scope to claims 5 and 6, is thus rejected under the same rationale as described above. The motivation for this modification is the same as claim 5.
Conclusion
24. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
25. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW CLOTHIER whose telephone number is (571)272-4667. The examiner can normally be reached Mon-Fri 8:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW CLOTHIER/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614