Prosecution Insights
Last updated: April 19, 2026
Application No. 18/117,890

METHOD AND ELECTRONIC DEVICE FOR DIGITAL IMAGE ENHANCEMENT ON DISPLAY

Non-Final OA §103
Filed: Mar 06, 2023
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% (58 granted / 77 resolved), above average at +13.3% vs TC avg
Interview Lift: +18.1% among resolved cases with interview (a strong lift)
Typical Timeline: 3y 5m average prosecution
Career History: 123 total applications across all art units (77 resolved, 46 currently pending)

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 77 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 5 January 2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection necessitated by the amendments. Claims 1-20 are pending in this application and have been considered below.

Priority

Receipt is acknowledged that the application claims priority to foreign application IN 2021 41055356, dated 30 November 2021. Receipt is also acknowledged that the application is a National Stage application of PCT KR22/19205. Priority to PCT KR22/19205, with a priority date of 30 November 2022, is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDSs dated 6 March 2023, 8 June 2023, 22 February 2024, 8 January 2025, 23 January 2025, 22 April 2025 and 29 August 2025, which have been previously considered, remain placed in the application file.

Specification - Drawings

Acknowledgement is made of the color drawings submitted 6 March 2023 in this application. Applicant is reminded that, absent a successful petition, the black and white drawings submitted on 6 March 2023 will be used. No petition is currently on file.

Claim Interpretation

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning of claim terms, as understood by one having ordinary skill in the art, dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional, but does not require that feature or step, does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claim 11 recites "at least one of" followed by the list "a peak brightness of the display of the electronic device, a color temperature of the display, a color temperature of the original image, a luminance of the original image, and a color space of the original image." Since "at least one of" is disjunctive, any one of the listed elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation (one of A, B, or C) appears to enjoy the most specification support, and it is therefore adopted for purposes of this Office Action.
Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history. By contrast, claims 1-4, 6, 10, 12, 14-17 and 19 recite "and" before stating additional limitations. In that case, applicant has presented a conjunctive list, all elements of which must be present in the prior art in order to reject the claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 6-9, 11-16 and 19-20 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2019/0318696 A1 (Imai et al.) in view of US Patent Publication 2022/0036523 A1 (Moran et al.).

Claim 1

[Figure: Imai et al. Fig. 8, showing a device with ambient light adjustment.]

Regarding Claim 1, Imai et al. teach a method for digital image enhancement on a display of an electronic device ("systems and methods are provided for mitigating physical and/or physiological reductions in the apparent colorfulness of an image displayed on an electronic device display, in various ambient lighting conditions," paragraph [0019]), the method comprising:

receiving, by the electronic device, an original image ("an input image 700 may have a representative color gamut 402. It is desired that a viewer, viewing display 110 of device 100, views image 700 with color gamut 402 in any of various ambient light conditions," paragraph [0055]);

sensing, by the electronic device, an ambient light ("device 100 includes one or more ambient light sensors, which may be implemented as display-integrated ambient light sensors 113 or ambient light sensors 103 that are separate from the display," paragraph [0022]);

generating, by the electronic device, a virtual content appearance of the original image based on the ambient light and characteristics of the display of the electronic device ("One or more image transformations 716 are performed for input (original) image 700. As shown in FIG. 7, image transformations 716 may include a transformation from International Commission on Illumination (CIE) red-green-blue (RGB) values to tristimulus values at block 718, a transformation of the tristimulus values to LMS cone signals (image cone responses) at block 720, and a transformation from the LMS cone signals to IPT values or other perceptually uniform color space color and brightness values," paragraph [0057]);

wherein generating the virtual content appearance of the original image comprises: determining, by the electronic device, an illuminance factor of viewing conditions based on content of the original image, the ambient light and characteristics of the display of the electronic device ("Compensated color gamut 602 is generated by applying compensation factor B to color gamut 600 (e.g., by multiplying the P values of the original image by the P-component, Bp, of factor B and by multiplying the T values of the original image by the T-component, Br, of factor B)," paragraph [0054]);

determining, by the electronic device, a compensating color tone for the original image based on the virtual content appearance ("At block 730, the tristimulus values of the bleached image are also transformed into IPT values for the bleached image," paragraph [0059], where an IPT value is a compensating color tone);

modifying, by the electronic device, the original image based on the compensating color tone for the original image ("As shown in FIG. 7, various inverse transformation operations 734 may be applied to the compensated IPT values to generate the compensated image," paragraph [0062]); and

displaying, by the electronic device, the modified original image for a current viewing condition ("generate images on display 110 that have a colorfulness that, when viewed under the current ambient light in the environment around the device, substantially matches the intended colorfulness of the image," paragraph [0068]).
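To make the mapped transformation chain concrete: the Imai et al. passages quoted above describe converting the image from RGB to tristimulus values, then to LMS cone signals, then to IPT, scaling the P and T (chroma) channels by a compensation factor B, and inverting the chain. The sketch below is a minimal illustration of that kind of pipeline, not code from the reference; the matrices are the standard linear-sRGB-to-XYZ coefficients and commonly published IPT-model values, and the function name and gain arguments are invented for illustration.

```python
# Minimal sketch (not the reference's implementation) of an Imai-style
# colorfulness compensation: RGB -> XYZ -> LMS -> IPT, scale the P and T
# (chroma) channels by compensation factor B, then invert the chain.
import numpy as np

M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],    # linear sRGB -> XYZ (D65)
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
M_XYZ2LMS = np.array([[0.4002, 0.7075, -0.0807],   # cone matrix, as commonly
                      [-0.2280, 1.1500, 0.0612],   # published for the IPT model
                      [0.0, 0.0, 0.9184]])
M_LMS2IPT = np.array([[0.4000, 0.4000, 0.2000],    # nonlinear LMS -> IPT
                      [4.4550, -4.8510, 0.3960],
                      [0.8056, 0.3572, -1.1628]])

def compensate_colorfulness(rgb, b_p, b_t):
    """rgb: (..., 3) linear RGB in [0, 1]; b_p, b_t: gains for the P/T channels."""
    xyz = rgb @ M_RGB2XYZ.T
    lms = xyz @ M_XYZ2LMS.T
    lms_nl = np.sign(lms) * np.abs(lms) ** 0.43      # IPT-style nonlinearity
    ipt = lms_nl @ M_LMS2IPT.T
    ipt[..., 1] *= b_p                               # scale P (red-green)
    ipt[..., 2] *= b_t                               # scale T (yellow-blue)
    lms_nl = ipt @ np.linalg.inv(M_LMS2IPT).T        # inverse transformations
    lms = np.sign(lms_nl) * np.abs(lms_nl) ** (1 / 0.43)
    xyz = lms @ np.linalg.inv(M_XYZ2LMS).T
    return np.clip(xyz @ np.linalg.inv(M_RGB2XYZ).T, 0.0, 1.0)
```

Scaling only the second and third IPT channels mirrors the quoted description of factor B acting on P and T while leaving the intensity channel I untouched.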
[Figure: Moran et al. Fig. 2, showing a neural network with low- and high-level blocks that adapt an image for luminance.]

Imai et al. do not explicitly teach an artificial intelligence model. However, Moran et al. teach inputting the illuminance factor of the viewing conditions and the original image into a first artificial intelligence (AI) model ("In another example, the high-level block could learn a scaling curve that has the effect of adjusting the global luminance of the image," paragraph [0066], and "A non-limiting embodiment of the high-level block neural network architecture is shown in FIG. 7," paragraph [0058]); and generating, by the first AI model, the virtual content appearance of the original image ("The x-axis of the curve is the luminance and the y-axis is the scale factor to apply to pixels to adjust the luminance. This curve boosts the low-luminance pixels by 50 times, leaving the high-luminance pixels alone. The adjustment is performed in Lab space, adjusting the L channel," paragraph [0066]).

Therefore, taking the teachings of Imai et al. and Moran et al. as a whole, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention of the instant application, to modify the ambient light color compensation systems and methods taught by Imai et al. to use the neural network/AI module taught by Moran et al. The suggestion/motivation for doing so would have been the shortcoming Moran et al. identify in paragraph [0008]: "However, this method does not ensure that there is consistency between different scales of the image when performing local pixel adjustments. Furthermore, the method does not allow for properties of the image to be adjusted independently of other properties." The combination would also predictably correct images more efficiently, as there is a reasonable expectation that an AI module would increase efficiency and adapt to various parts of an image; and/or the combination merely combines prior art elements according to known methods to yield predictable results.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of device claim 14, while noting that the rejection above cites both device and method disclosures. Claim 14 is mapped below for clarity of the record and to specify any new limitations not included in claim 1.
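The "scaling curve" quoted from Moran et al. paragraph [0066] maps luminance to a per-pixel gain applied to the L channel in Lab space. In the reference the curve is learned by the high-level block; the stand-in below hand-codes a curve with the quoted shape (roughly 50x gain for dark pixels, about 1x for bright ones) purely for illustration, and the constants are assumptions, not the reference's learned output.

```python
# Illustrative stand-in (not Moran et al.'s network) for a global luminance
# scaling curve: luminance on the x-axis, per-pixel gain on the y-axis.
import numpy as np

def gain_curve(l_norm: np.ndarray) -> np.ndarray:
    """Gain vs. normalized luminance: ~50x for dark pixels, ~1x for bright ones."""
    return 1.0 + 49.0 * np.exp(-12.0 * l_norm)   # smooth falloff from 50 to 1

def adjust_luminance(lab: np.ndarray) -> np.ndarray:
    """lab: (..., 3) image in Lab space with L in [0, 100]; scales only L."""
    out = lab.copy()
    l_norm = lab[..., 0] / 100.0
    out[..., 0] = np.clip(lab[..., 0] * gain_curve(l_norm), 0.0, 100.0)
    return out
```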
Claim 2

Regarding claim 2, Imai et al. teach the method of claim 1, as noted above, wherein generating the virtual content appearance of the original image comprises: estimating, by the electronic device, an appearance of a color tone of the content in the original image based on the illuminance factor of the viewing conditions ("an input image 700 may have a representative color gamut 402. It is desired that a viewer, viewing display 110 of device 100, views image 700 with color gamut 402 in any of various ambient light conditions," paragraph [0055]). Imai et al. do not explicitly teach AI models. However, Moran et al. teach generating, by the electronic device, the virtual content appearance of the original image based on the estimated appearance of the color tone of the content in the original image using the first AI model ("Then, a high-level network learns the dynamic range correction and tone mapping," paragraph [0008]). Imai et al. and Moran et al. are combined as per claim 1.

Claim 3

Regarding claim 3, Imai et al. teach the method of claim 2, wherein determining, by the electronic device, the illuminance factor of the viewing conditions comprises:

determining, by the electronic device, a tri-stimulus value of a virtual illuminant of the viewing conditions based on the original image, ambient light data and the characteristics of the display of the electronic device ("At block 730, the spectral power distribution of the ambient light may be determined (if not received from the sensor) and combined (e.g., convolved or integrated) with color matching data 532 and display reflectance data 534 to determine tristimulus values for the ambient light that is reflected by the display," paragraph [0056]);

determining, by the electronic device, chromaticity co-ordinates for the virtual illuminant of the viewing conditions based on the determined tri-stimulus value ("The color generated by a display such as display 110 may be represented by chromaticity values x and y. The chromaticity values may be computed by transforming, for example, three color intensities (e.g., intensities of colored light emitted by a display) such as intensities of red, green, and blue light into three tristimulus values," paragraph [0041]);

determining, by the electronic device, a luminance of the virtual illuminant based on a coordinate of the determined tri-stimulus value ("display reflectance data may be measured that describes the color distribution of reflected light 230 under various types of ambient illumination 228 (e.g., direct sunlight, reflected sunlight, filtered sunlight, polarized sunlight, fluorescent light, incandescent light, firelight, or other forms of ambient light). This display reflectance data may be stored (e.g., in memory of each device 100 or remotely accessible memory) so that, when ambient light is measured by one or more of ambient light sensors 103 and/or 113, the amount, distribution, and color of the portion of that light that is reflected from the display can be determined (e.g., by looking up or calculating the properties of the reflected light by modifying the measured incident ambient light with the known display reflectance properties in the stored display reflectance data)," paragraph [0039]); and

determining, by the electronic device, the illuminance factor of the viewing conditions based on the tri-stimulus value and the chromaticity co-ordinates for the virtual illuminant of the viewing conditions ("A colorfulness compensation factor, for compensating for the presence of ambient light, may be implemented as a multiplicative color compensation factor "B" to the color channels P and T," paragraph [0053]).
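The chromaticity and luminance steps mapped above reduce to standard CIE colorimetry: chromaticity coordinates are normalized ratios of the tristimulus values, and the Y component itself serves as the luminance. A worked sketch follows; the CIE relations are standard, while the "virtual illuminant" framing is the application's and the function name is illustrative.

```python
# Standard tristimulus -> chromaticity relationship underlying the claim 3
# mapping (CIE definitions; not code from either cited reference).
def chromaticity(X: float, Y: float, Z: float) -> tuple[float, float, float]:
    """Return CIE (x, y) chromaticity coordinates plus luminance Y."""
    s = X + Y + Z
    x, y = X / s, Y / s   # chromaticity: normalized tristimulus ratios
    return x, y, Y        # the Y coordinate doubles as the luminance

# e.g., the D65 white point (X, Y, Z) ~ (95.047, 100.0, 108.883)
# gives x ~ 0.3127, y ~ 0.3290.
```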
Claim 6

Regarding claim 6, Imai et al. teach the method of claim 1, wherein determining, by the electronic device, the compensating color tone for the original image comprises:

performing, by the electronic device, a pixel difference calculation of a color tone of the original image and a color tone of the virtual content appearance ("Each layer may be encoded as matrices or vectors of weights expressed in the form of coefficients or constants," paragraph [0054]);

generating, by the electronic device, a color compensation matrix ("In order to correct observed gamut 406 to more closely match intended gamut 402, processing circuitry of device 100 generates and applies a color compensation to the image to be displayed based on the measured ambient light and the known display reflectance properties stored in the display reflectance data," paragraph [0044]) for each of a red (R) channel, a green (G) channel, and a blue (B) channel based on the pixel difference calculation of the color tone of the original image and the color tone of the virtual content appearance ("The ambient light data may be raw channel data from the ambient light sensor or may include processed ambient light data such as a spectral power distribution of the ambient light," paragraph [0056], where channel data is each of a red, green and blue channel); and

determining, by the electronic device, the compensating color tone for the original image based on the color compensation matrix ("ambient light estimation operations may include combining a spectral power distribution 806 determined based on ambient light measurements from one or more ambient light sensors 103/113 with color matching data 532 and display spectral reflectance data 534 (e.g., via a convolution or integration) to form expected reflection data such as reflected light tristimulus values 808 (denoted XYZR) of a portion of the ambient light that is reflected by the display. However, it should be appreciated that in some scenarios reflected light tristimulus values 808 may be determined directly from one or more channel readings of ambient light sensor(s) 103/113 (e.g., without first computing the spectral power distribution)," paragraph [0064]), wherein the compensating color tone for the original image allows a user to view the original image in an original color tone in the current viewing condition ("generate images on display 110 that have a colorfulness that, when viewed under the current ambient light in the environment around the device, substantially matches the intended colorfulness of the image," paragraph [0068]).
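One plausible reading of the claim 6 sequence, offered only as an illustration (neither cited reference discloses this code), is a per-channel gain derived from the pixel difference between the original image and its predicted appearance, assembled into a diagonal compensation matrix and applied back to the image.

```python
# Hedged sketch of the claim 6 limitation as one might implement it:
# diff the original and predicted-appearance images per R/G/B channel,
# build a diagonal per-channel compensation matrix, and apply it.
import numpy as np

def compensation_matrix(original: np.ndarray, appearance: np.ndarray) -> np.ndarray:
    """original, appearance: (H, W, 3) RGB in [0, 1]. Returns a 3x3 diagonal matrix."""
    diff = original - appearance              # pixel difference per channel
    gain = 1.0 + diff.mean(axis=(0, 1))       # one scalar gain per R/G/B channel
    return np.diag(gain)

def apply_compensation(original: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply the compensating color tone so the image reads as intended."""
    return np.clip(original @ M.T, 0.0, 1.0)
```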
Claim 7

Regarding claim 7, Imai et al. teach the method of claim 1, wherein modifying, by the electronic device, the original image comprises:

determining, by the electronic device, a plurality of pixels in the original image that are impacted by the ambient light based on the virtual content appearance ("may be commons values for all pixels of an image, derived from ratios of the average or median P and T values respectively, or may be determined and applied for each pixel or several groups of pixels," paragraph [0054]);

applying, by the electronic device, the compensating color tone for the original image to each of the plurality of pixels in the original image ("Color gamut 600 may, for example, represent the same color gamut as gamut 402 of FIG. 4, but in the IPT color space. The conversion between the chromaticity space of FIG. 4 and the colorfulness space of FIG. 6 is described in further detail below," paragraph [0054]); and

modifying, by the electronic device, a color tone of content in the original image based on the compensating color tone for the original image (paragraph [0054], quoted above).

Claim 8

Regarding claim 8, Imai et al. teach the method of claim 1, as noted above. Imai et al. do not explicitly teach AI models. However, Moran et al. teach generating, by the electronic device, a color compensated original image for the current viewing condition using a second AI model ("The output 28 of the neural network is a color corrected RGB frame with a dynamic range suitable for display on standard devices (for example, devices with 256 levels per color channel)," paragraph [0038]); and displaying, by the electronic device, the color compensated original image for the current viewing condition (paragraph [0038], where "suitable for display" teaches displaying). Imai et al. and Moran et al. are combined as per claim 1.

Claim 9

Regarding claim 9, Imai et al. teach the method of claim 8, as noted above. Imai et al. do not explicitly teach AI models. However, Moran et al. teach wherein the second AI model is trained based on a plurality of modified original images ("The single end-to-end trainable neural network can learn the ISP mapping from the input data 20 to a high-quality image output 28 based on a representative training dataset of input raw data and output digital image pairs," paragraph [0039]). Imai et al. and Moran et al. are combined as per claim 1.
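The Moran et al. passage cited for claim 9 describes ordinary supervised training on input/output image pairs. A generic sketch of that training pattern follows; the tiny model, the synthetic tensors, and the hyperparameters are placeholders, not the reference's architecture.

```python
# Generic supervised training loop of the kind the claim 9 citation describes:
# a network learns a mapping from input images to target color-corrected images.
import torch
import torch.nn as nn

model = nn.Sequential(                      # placeholder "second AI model"
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder dataset: pairs of (modified original, target compensated) images.
inputs = torch.rand(8, 3, 64, 64)
targets = torch.rand(8, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)  # penalize deviation from targets
    loss.backward()
    opt.step()
```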
Claim 11

Regarding claim 11, Imai et al. teach the method of claim 1, wherein the characteristics of the display of the electronic device comprise at least one of: a peak brightness of the display of the electronic device, a color temperature of the display ("display reflectance data may be measured that describes the color distribution of reflected light 230 under various types of ambient illumination 228 (e.g., direct sunlight, reflected sunlight, filtered sunlight, polarized sunlight, fluorescent light, incandescent light, firelight, or other forms of ambient light)," paragraph [0039]), a color temperature of the original image, a luminance of the original image, and a color space of the original image ("Compensated color gamut 602 is generated by applying compensation factor B to color gamut 600 (e.g., by multiplying the P values of the original image by the P-component, Bp, of factor B and by multiplying the T values of the original image by the T-component, Br, of factor B)," paragraph [0054], where B is a luminance and T is a color temperature of the original image).

Claim 12

Regarding claim 12, Imai et al. teach the method of claim 1, as noted above. Imai et al. do not explicitly teach AI models. However, Moran et al. teach wherein the ambient light comprises a luminance of the ambient light and a correlated color temperature of the ambient light ("The low-level block 21 performs local pixel adjustments that demosaic, denoise and correct the local luminance and color in the image," paragraph [0038]). Imai et al. and Moran et al. are combined as per claim 1.

Claim 13

Regarding claim 13, Imai et al. teach the method of claim 1, wherein the virtual content appearance of the original image comprises a presentation of contents of the original image in the current viewing condition of the ambient light ("generate images on display 110 that have a colorfulness that, when viewed under the current ambient light in the environment around the device, substantially matches the intended colorfulness of the image," paragraph [0068]).

Claim 14

Regarding claim 14, Imai et al. teach an electronic device for digital image enhancement on a display of the electronic device ("systems and methods are provided for mitigating physical and/or physiological reductions in the apparent colorfulness of an image displayed on an electronic device display, in various ambient lighting conditions," paragraph [0019]), comprising: a memory; and an image enhancement controller coupled to the memory, and configured to:

receive an original image ("an input image 700 may have a representative color gamut 402. It is desired that a viewer, viewing display 110 of device 100, views image 700 with color gamut 402 in any of various ambient light conditions," paragraph [0055]);

sense an ambient light ("device 100 includes one or more ambient light sensors, which may be implemented as display-integrated ambient light sensors 113 or ambient light sensors 103 that are separate from the display," paragraph [0022]);

generate a virtual content appearance of the original image based on the ambient light and characteristics of the display of the electronic device ("One or more image transformations 716 are performed for input (original) image 700. As shown in FIG. 7, image transformations 716 may include a transformation from International Commission on Illumination (CIE) red-green-blue (RGB) values to tristimulus values at block 718, a transformation of the tristimulus values to LMS cone signals (image cone responses) at block 720, and a transformation from the LMS cone signals to IPT values or other perceptually uniform color space color and brightness values," paragraph [0057]), wherein the image enhancement controller is configured to generate the virtual content appearance of the original image by: determining an illuminance factor of viewing conditions based on content of the original image, the ambient light and characteristics of the display of the electronic device ("Compensated color gamut 602 is generated by applying compensation factor B to color gamut 600 (e.g., by multiplying the P values of the original image by the P-component, Bp, of factor B and by multiplying the T values of the original image by the T-component, Br, of factor B)," paragraph [0054]);

determine a compensating color tone for the original image based on the virtual content appearance ("At block 730, the tristimulus values of the bleached image are also transformed into IPT values for the bleached image," paragraph [0059], where an IPT value is a compensating color tone);

modify the original image based on the compensating color tone for the original image ("As shown in FIG. 7, various inverse transformation operations 734 may be applied to the compensated IPT values to generate the compensated image," paragraph [0062]); and

display the modified original image for a current viewing condition ("generate images on display 110 that have a colorfulness that, when viewed under the current ambient light in the environment around the device, substantially matches the intended colorfulness of the image," paragraph [0068]).

Imai et al. do not explicitly teach AI models. However, Moran et al. teach input the illuminance factor of the viewing conditions and the original image into a first artificial intelligence (AI) model ("In another example, the high-level block could learn a scaling curve that has the effect of adjusting the global luminance of the image," paragraph [0066], and "A non-limiting embodiment of the high-level block neural network architecture is shown in FIG. 7," paragraph [0058]); and generate, by the first AI model, the virtual content appearance of the original image ("The x-axis of the curve is the luminance and the y-axis is the scale factor to apply to pixels to adjust the luminance. This curve boosts the low-luminance pixels by 50 times, leaving the high-luminance pixels alone. The adjustment is performed in Lab space, adjusting the L channel," paragraph [0066]). Imai et al. and Moran et al. are combined as per claim 1.

Claim 15

Regarding claim 15, Imai et al. teach the electronic device of claim 14, wherein the image enhancement controller is configured to generate the virtual content appearance of the original image by: estimating an appearance of a color tone of the content in the original image based on the illuminance factor of the viewing conditions ("an input image 700 may have a representative color gamut 402. It is desired that a viewer, viewing display 110 of device 100, views image 700 with color gamut 402 in any of various ambient light conditions," paragraph [0055]). Imai et al. do not explicitly teach AI models. However, Moran et al. teach generating the virtual content appearance of the original image based on the estimated appearance of a color tone of the content in the original image using the first AI model ("Then, a high-level network learns the dynamic range correction and tone mapping," paragraph [0008]). Imai et al. and Moran et al. are combined as per claim 1.

Claim 16

Regarding claim 16, Imai et al. teach the electronic device of claim 15, wherein the image enhancement controller is configured to determine the illuminance factor of the viewing conditions by:

determining a tri-stimulus value of a virtual illuminant of the viewing conditions based on the original image, ambient light data and the characteristics of the display of the electronic device ("At block 730, the spectral power distribution of the ambient light may be determined (if not received from the sensor) and combined (e.g., convolved or integrated) with color matching data 532 and display reflectance data 534 to determine tristimulus values for the ambient light that is reflected by the display," paragraph [0056]);

determining chromaticity co-ordinates for the virtual illuminant of the viewing conditions based on the determined tri-stimulus value ("The color generated by a display such as display 110 may be represented by chromaticity values x and y. The chromaticity values may be computed by transforming, for example, three color intensities (e.g., intensities of colored light emitted by a display) such as intensities of red, green, and blue light into three tristimulus values," paragraph [0041]);

determining a luminance of the virtual illuminant based on a coordinate of the determined tri-stimulus value ("display reflectance data may be measured that describes the color distribution of reflected light 230 under various types of ambient illumination 228 (e.g., direct sunlight, reflected sunlight, filtered sunlight, polarized sunlight, fluorescent light, incandescent light, firelight, or other forms of ambient light). This display reflectance data may be stored (e.g., in memory of each device 100 or remotely accessible memory) so that, when ambient light is measured by one or more of ambient light sensors 103 and/or 113, the amount, distribution, and color of the portion of that light that is reflected from the display can be determined (e.g., by looking up or calculating the properties of the reflected light by modifying the measured incident ambient light with the known display reflectance properties in the stored display reflectance data)," paragraph [0039]); and

determining the illuminance factor of the viewing conditions based on the tri-stimulus value and the chromaticity co-ordinates for the virtual illuminant of the viewing conditions ("A colorfulness compensation factor, for compensating for the presence of ambient light, may be implemented as a multiplicative color compensation factor "B" to the color channels P and T," paragraph [0053]).

Claim 19

Regarding claim 19, Imai et al. teach the electronic device of claim 14, wherein the image enhancement controller is configured to determine the compensating color tone for the original image by:

performing a pixel difference calculation of a color tone of the original image and a color tone of the virtual content appearance ("Each layer may be encoded as matrices or vectors of weights expressed in the form of coefficients or constants," paragraph [0054]);

generating a color compensation matrix ("In order to correct observed gamut 406 to more closely match intended gamut 402, processing circuitry of device 100 generates and applies a color compensation to the image to be displayed based on the measured ambient light and the known display reflectance properties stored in the display reflectance data," paragraph [0044]) for each of a red (R) channel, a green (G) channel, and a blue (B) channel based on the pixel difference calculation of the color tone of the original image and the color tone of the virtual content appearance ("The ambient light data may be raw channel data from the ambient light sensor or may include processed ambient light data such as a spectral power distribution of the ambient light," paragraph [0056], where channel data is each of a red, green and blue channel); and

determining the compensating color tone for the original image based on the color compensation matrix ("ambient light estimation operations may include combining a spectral power distribution 806 determined based on ambient light measurements from one or more ambient light sensors 103/113 with color matching data 532 and display spectral reflectance data 534 (e.g., via a convolution or integration) to form expected reflection data such as reflected light tristimulus values 808 (denoted XYZR) of a portion of the ambient light that is reflected by the display. However, it should be appreciated that in some scenarios reflected light tristimulus values 808 may be determined directly from one or more channel readings of ambient light sensor(s) 103/113 (e.g., without first computing the spectral power distribution)," paragraph [0064]), wherein the compensating color tone for the original image allows a user to view the original image in an original color tone in the current viewing condition ("generate images on display 110 that have a colorfulness that, when viewed under the current ambient light in the environment around the device, substantially matches the intended colorfulness of the image," paragraph [0068]).

Claim 20

Regarding claim 20, Imai et al. teach the electronic device of claim 14, wherein the image enhancement controller is configured to modify the original image by: determining a plurality of pixels in the original image that are impacted by the ambient light based on the virtual content appearance ("may be commons values for all pixels of an image, derived from ratios of the average or median P and T values respectively, or may be determined and applied for each pixel or several groups of pixels," paragraph [0054]); applying the compensating color tone for the original image to each of the plurality of pixels in the original image ("Color gamut 600 may, for example, represent the same color gamut as gamut 402 of FIG. 4, but in the IPT color space. The conversion between the chromaticity space of FIG. 4 and the colorfulness space of FIG. 6 is described in further detail below," paragraph [0054]); and modifying a color tone of content in the original image based on the compensating color tone for the original image (paragraph [0054], quoted above).

Allowable Subject Matter

Claims 4-5, 10 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US Patent Publication 2022/0020131 A1 to Metzler et al. discloses a method for enhancing images for metrological applications. The method comprises the steps of: 1) providing a geometric correction image having an image geometric correctness higher than the processed image geometric correctness and showing at least a part of the scene of interest, and 2) at least partially reducing the loss of initial metrological information in the distorted metrological information by fusing the enhanced image with the geometric correction image.

US Patent Publication 2024/0249448 A1 to Kang et al. discloses synthesizing a background and a face by considering a face shape and using a deep learning network. The method and the device receive an input of an original image and a converted face image, remove a central part from the original image, remove edges so that a central part remains in the converted face image, and then extract a feature vector from each image to perform image synthesis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS, whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Heath E. Wells/
Examiner, Art Unit 2664
Date: 4 February 2026

Prosecution Timeline

Mar 06, 2023 • Application Filed
May 16, 2025 • Non-Final Rejection — §103
Jun 25, 2025 • Interview Requested
Aug 21, 2025 • Response Filed
Oct 14, 2025 • Final Rejection — §103
Dec 22, 2025 • Response after Non-Final Action
Jan 20, 2026 • Request for Continued Examination
Jan 27, 2026 • Response after Non-Final Action
Feb 04, 2026 • Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755: DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12597226: METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12591979: IMAGE GENERATION METHOD AND DEVICE
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12588876: TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12586363: GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL
Granted Mar 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 93% (+18.1%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
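The relationship between the headline figures appears to be simple arithmetic, though the exact formula is an assumption on our part: 58 grants out of 77 resolved cases gives the 75% career allow rate, and treating the +18.1% interview lift as additive percentage points yields the 93% with-interview figure.

```python
# Assumed reconstruction of the dashboard arithmetic (not a documented formula):
granted, resolved = 58, 77
allow_rate = granted / resolved          # 0.753 -> displayed as 75%
lift = 0.181                             # interview lift, in percentage points
with_interview = allow_rate + lift       # 0.934 -> displayed as 93%
print(f"career: {allow_rate:.1%}, with interview: {with_interview:.1%}")
```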
