DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the arguments submitted by Applicant on 11/21/2025 and is made non-final.
Examiner’s Notes
The Applicant states that “excerpts from Kurakawa suggest individualized control of the individual luminance for each pixel by an individual current to that pixel, because the language indicates dedicated transistors for each pixel. In contrast, local dimming requires the feature of a plurality of light sources dedicated to a certain pixel region to be similarly dimmed or controlled according to a luminance need for that pixel region in a neighborhood of adjacent other pixel regions in which the plurality of light sources in those regions are similarly dimmed or controlled, but according their own luminance needs.” However, at least the underlined language is not worded as such in the pending claims.
For example, claim 20 recites, in relevant part: “and control the light emitting states of the light sources for each area unit on the basis of the local dimming pattern estimated by the trained neural network model, wherein the neural network model is trained on the basis of a target display image and feedback information based on screen-intensity distribution data obtained from a plurality of pixels grouped within each area unit.”
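For illustration only, the distinction argued above can be sketched as follows: per-pixel control yields one drive value per pixel, whereas local dimming yields one value per area unit that is shared by all light sources dedicated to the pixels grouped in that unit. The zone size, the max-luminance heuristic, and all identifiers in the sketch below are assumptions made solely for illustration and are not drawn from the claims or the cited references.

```python
# Illustrative sketch only: contrasts per-pixel drive with zone-based local dimming.
# Zone size, the max-luminance heuristic, and all names are assumptions for
# illustration; they are not taken from the claims or the cited references.
import numpy as np

def per_pixel_drive(target_luminance: np.ndarray) -> np.ndarray:
    """Each pixel's emitter is driven individually (the per-pixel reading of Kurokawa)."""
    return target_luminance  # one drive value per pixel

def local_dimming_pattern(target_luminance: np.ndarray, zone: int = 32) -> np.ndarray:
    """One backlight value per area unit, shared by every pixel grouped in that unit."""
    h, w = target_luminance.shape
    zones = np.zeros((h // zone, w // zone))
    for i in range(zones.shape[0]):
        for j in range(zones.shape[1]):
            block = target_luminance[i * zone:(i + 1) * zone, j * zone:(j + 1) * zone]
            zones[i, j] = block.max()  # luminance need of that pixel region
    return zones  # light sources in each region are dimmed together

frame = np.random.rand(128, 256)
print(per_pixel_drive(frame).shape)        # (128, 256): one value per pixel
print(local_dimming_pattern(frame).shape)  # (4, 8): one value per area unit
```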
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.
Double Patenting
The Double Patenting rejection previously raised is incorporated by reference and is held in abeyance until other matters regarding patentability are resolved.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 20-21, 26-27, 29-30, 33-34, and 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Kurokawa et al. (Publication number: US 2018/0005588) in view of Ozaki (Publication number: US 2015/0254848).
Consider Claim 20, Kurokawa shows a display device having an artificial intelligence function (see figures 2 and 3), comprising:
(a) Circuitry configured to: estimate a local dimming pattern using a trained neural network model representing light emitting states of light sources for a target display image to be displayed by an image display for video from a broadcasting source (see figure 2; paragraphs 93, 94, 101); (The EL correction circuit 164 is provided in the case where the source driver 182 is provided with a current detection circuit that detects current flowing through the light-emitting element 10b. The EL correction circuit 164 has a function of adjusting the luminance of the light-emitting element 10b on the basis of a signal transmitted from the current detection circuit of the source driver 182).
(b) The light emitting states of the light sources correspond to a plurality of area units divided from a display area; and control the light emitting states of the light sources for each area unit on the basis of the local dimming pattern estimated by the trained neural network model (see paragraphs 135-138); (Supervised learning refers to operation of updating all weight coefficients of a hierarchical neural network on the basis of an output result and a desired result (also referred to as teacher data or a teacher signal in some cases) when the output result and the desired result differ from each other, in functions of the hierarchical neural network. The update amount of a weight coefficient is with respect to the error energy).
(c) Wherein the neural network model is trained on the basis of a target display image and feedback information generated by the circuitry (see paragraphs 135-138); (Supervised learning refers to operation of updating all weight coefficients of a hierarchical neural network on the basis of an output result and a desired result (also referred to as teacher data or a teacher signal in some cases) when the output result and the desired result differ from each other, in functions of the hierarchical neural network. The update amount of a weight coefficient is with respect to the error energy).
Kurokawa does not specifically show that the screen intensity distribution data is obtained from a plurality of pixels grouped within each area unit.
In the same field of endeavor, Ozaki shows that the screen intensity distribution data is obtained from a plurality of pixels grouped within each area unit (see paragraphs 3, and 34-36, and figure 7); (A pixel extracting unit that extracts pixels serving as a candidate for a nucleus from pixels included in a captured image obtained by imaging a sample including a target cell having the nucleus; a connected-pixel-group extracting unit that extracts a connected pixel group constituted of a predetermined number of adjacent connected pixels or more from the pixels extracted by the pixel extracting unit. The value indicating the possibility being determined also based on a condition for an image feature amount machine-learned based on a value obtained by aggregating image feature amounts expressing luminance gradients determined based on luminance distribution with respect to a blue component within partial regions).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teaching of Ozaki into the teaching of Kurokawa in order to achieve robustness against illumination variation (see Ozaki; paragraph 57).
Consider Claim 26, Kurokawa shows an image processing device (see figures 2 and 3), comprising:
(a) A trained neural network model that estimates a local dimming pattern representing light emitting states of light sources corresponding to a plurality of areas divided from a display area of an image display for a target display image for video from a broadcasting source (see figure 2; paragraphs 93, 94, 101); (The EL correction circuit 164 is provided in the case where the source driver 182 is provided with a current detection circuit that detects current flowing through the light-emitting element 10b. The EL correction circuit 164 has a function of adjusting the luminance of the light-emitting element 10b on the basis of a signal transmitted from the current detection circuit of the source driver 182).
(b) A control circuitry configured to control the light emitting states of the light sources for each area unit on the basis of the local dimming pattern estimated by the trained neural network model (see paragraphs 135-138); (Supervised learning refers to operation of updating all weight coefficients of a hierarchical neural network on the basis of an output result and a desired result (also referred to as teacher data or a teacher signal in some cases) when the output result and the desired result differ from each other, in functions of the hierarchical neural network. The update amount of a weight coefficient is with respect to the error energy).
(c) Wherein the neural network model is trained on the basis of an error between a screen intensity distribution based on a target display image input to the neural network model and a screen intensity distribution based on the local dimming pattern estimated by the neural network model (see paragraphs 135-138); (Supervised learning refers to operation of updating all weight coefficients of a hierarchical neural network on the basis of an output result and a desired result (also referred to as teacher data or a teacher signal in some cases) when the output result and the desired result differ from each other, in functions of the hierarchical neural network. The update amount of a weight coefficient is with respect to the error energy).
Kurokawa does not specifically show that the screen intensity distribution data is obtained from a plurality of pixels grouped within each area unit.
In the same field of endeavor, Ozaki shows that the screen intensity distribution data is obtained from a plurality of pixels grouped within each area unit (see paragraphs 3, and 34-36, and figure 7); (A pixel extracting unit that extracts pixels serving as a candidate for a nucleus from pixels included in a captured image obtained by imaging a sample including a target cell having the nucleus; a connected-pixel-group extracting unit that extracts a connected pixel group constituted of a predetermined number of adjacent connected pixels or more from the pixels extracted by the pixel extracting unit. The value indicating the possibility being determined also based on a condition for an image feature amount machine-learned based on a value obtained by aggregating image feature amounts expressing luminance gradients determined based on luminance distribution with respect to a blue component within partial regions).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teaching of Ozaki into the teaching of Kurokawa in order to achieve robustness against illumination variation (see Ozaki; paragraph 57).
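For illustration only, the training scheme recited in claim 26 (minimizing an error between a screen-intensity distribution derived from the target display image and one based on the estimated local dimming pattern) might be sketched as the toy training loop below. The model size, the zone grid, the mean-pooled "distribution," and all identifiers are assumptions made solely for illustration, not features disclosed by Kurokawa or Ozaki.

```python
# Illustrative sketch only of the training scheme recited in claim 26: the loss is
# the error between (i) a screen-intensity distribution derived from the target
# display image, with pixels grouped per area unit, and (ii) the distribution implied
# by the local dimming pattern the model estimates. Model size, zone count, and the
# simple mean-pooling "distribution" are assumptions, not features of the cited art.
import torch
import torch.nn as nn
import torch.nn.functional as F

ZONES = (4, 8)                  # assumed grid of area units
model = nn.Sequential(          # toy estimator of the local dimming pattern
    nn.Flatten(),
    nn.Linear(64 * 128, ZONES[0] * ZONES[1]),
    nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def zone_intensity(image: torch.Tensor) -> torch.Tensor:
    """Screen-intensity distribution from pixels grouped within each area unit."""
    return F.adaptive_avg_pool2d(image, ZONES).flatten(1)

for _ in range(100):
    target = torch.rand(8, 1, 64, 128)                   # stand-in target display images
    pattern = model(target)                              # estimated local dimming pattern
    loss = F.mse_loss(pattern, zone_intensity(target))   # error between the two distributions
    opt.zero_grad()
    loss.backward()
    opt.step()
```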
Consider Claim 33, Kurokawa shows a method for processing an image (see figures 2 and 3), comprising:
(a) Accessing an artificial intelligence unit; training a neural network model of the artificial intelligence unit on the basis of a target display image and feedback information generated by circuitry of a display device (see paragraphs 135-138); (Supervised learning refers to operation of updating all weight coefficients of a hierarchical neural network on the basis of an output result and a desired result (also referred to as teacher data or a teacher signal in some cases) when the output result and the desired result differ from each other, in functions of the hierarchical neural network. The update amount of a weight coefficient is with respect to the error energy).
(b) Determining a local dimming pattern using the trained neural network model, the local dimming pattern representing light emitting states of light sources for each area unit for the target display image to be displayed by an image display for video from a broadcasting source (see figure 2; paragraphs 93, 94, 101); (The EL correction circuit 164 is provided in the case where the source driver 182 is provided with a current detection circuit that detects current flowing through the light-emitting element 10b. The EL correction circuit 164 has a function of adjusting the luminance of the light-emitting element 10b on the basis of a signal transmitted from the current detection circuit of the source driver 182).
(c) The light emitting states of the light sources correspond to a plurality of area units divided from a display area; and controlling the light emitting states of the light sources on the basis of the local dimming pattern estimated by the trained neural network model (see paragraphs 135-138); (Supervised learning refers to operation of updating all weight coefficients of a hierarchical neural network on the basis of an output result and a desired result (also referred to as teacher data or a teacher signal in some cases) when the output result and the desired result differ from each other, in functions of the hierarchical neural network. The update amount of a weight coefficient is with respect to the error energy).
Kurokawa does not specifically show that the screen intensity distribution data is obtained from a plurality of pixels grouped within each area unit.
In the same field of endeavor, Ozaki shows that the screen intensity distribution data is obtained from a plurality of pixels grouped within each area unit (see paragraphs 3, and 34-36, and figure 7); (A pixel extracting unit that extracts pixels serving as a candidate for a nucleus from pixels included in a captured image obtained by imaging a sample including a target cell having the nucleus; a connected-pixel-group extracting unit that extracts a connected pixel group constituted of a predetermined number of adjacent connected pixels or more from the pixels extracted by the pixel extracting unit. The value indicating the possibility being determined also based on a condition for an image feature amount machine-learned based on a value obtained by aggregating image feature amounts expressing luminance gradients determined based on luminance distribution with respect to a blue component within partial regions).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teaching of Ozaki into the teaching of Kurokawa in order to achieve robustness against illumination variation (see Ozaki; paragraph 57).
Consider Claims 21 and 34, Kurokawa shows that the feedback information is sensor information sensed by a sensor (see paragraph 357); (Read as optical sensors).
Consider Claim 27, Kurokawa shows that the image display is a liquid crystal image display, and the calculated screen intensity distribution is corrected on the basis of a liquid crystal transmittance of the liquid crystal image display in the training (see paragraph 94).
Consider Claim 37, Kurokawa shows that a screen intensity distribution assists in the training of the neural network model (see figure 2; paragraphs 93, 94, 101); (The EL correction circuit 164 is provided in the case where the source driver 182 is provided with a current detection circuit that detects current flowing through the light-emitting element 10b. The EL correction circuit 164 has a function of adjusting the luminance of the light-emitting element 10b on the basis of a signal transmitted from the current detection circuit of the source driver 182).
Consider Claim 38, Kurokawa shows a liquid crystal image display, wherein the screen intensity distribution is corrected on the basis of a liquid crystal transmittance of the liquid crystal image display in the training (see paragraph 94).
Consider Claims 29 and 30, Kurokawa shows that the trained neural network model is trained to estimate the local dimming pattern for the target display image displayed on the image display and second information, wherein the second information is synchronized with the target display image (see figure 2; paragraphs 93, 94, 101); (The EL correction circuit 164 is provided in the case where the source driver 182 is provided with a current detection circuit that detects current flowing through the light-emitting element 10b. The EL correction circuit 164 has a function of adjusting the luminance of the light-emitting element 10b on the basis of a signal transmitted from the current detection circuit of the source driver 182).
Claims 23-25 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Kurokawa et al. (Publication number: US 2018/0005588) in view of Ozaki, further in view of an official notice taken by the USPTO.
Consider Claims 23 and 36, Kurokawa in view of Ozaki does not specifically show that the circuitry is further configured to access an artificial intelligence server to assist in the use of the trained neural network model. However, the USPTO takes official notice that it is well known and expected in the art that the circuitry is further configured to access an artificial intelligence server to assist in the use of the trained neural network model in order to minimize processing in a display device.
Consider Claim 24, Kurokawa shows that a screen intensity distribution assists in the training of the neural network model (see figure 2; paragraphs 93, 94, 101); (The EL correction circuit 164 is provided in the case where the source driver 182 is provided with a current detection circuit that detects current flowing through the light-emitting element 10b. The EL correction circuit 164 has a function of adjusting the luminance of the light-emitting element 10b on the basis of a signal transmitted from the current detection circuit of the source driver 182).
Consider Claim 25, Kurokawa shows a liquid crystal image display, wherein the screen intensity distribution is corrected on the basis of a liquid crystal transmittance of the liquid crystal image display in the training (see paragraph 94).
Claims 22 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Kurokawa et al. (Publication number: US 2018/0005588) in view of Ozaki, further in view of Fredlund (Publication number: US 2018/0227560).
Consider Claims 22 and 35, Kurokawa in view of Ozaki does not specifically show that the target display image is represented by a color space model.
In related art, Fredlund shows that the target display image is represented by a color space model (see paragraph 34).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teaching of Fredlund into the teaching of Kurokawa and Ozaki in order to provide metrics for color perception (see Fredlund; paragraph 34).
Claims 28 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Kurokawa et al. (Publication number: US 2018/0005588) in view of Ozaki, further in view of Miyazawa et al. (Publication number: US 2018/0204528).
Consider Claims 28 and 39, Kurokawa in view of Ozaki does not specifically show that the trained neural network model is further trained to estimate the local dimming pattern in further consideration of push-up processing of distributing power curbed in a first unit corresponding to a dark part of the display area to a second unit corresponding to a bright part.
In related art, Miyazawa et al. shows that the trained neural network model is further trained to estimate the local dimming pattern in further consideration of push-up processing of distributing power curbed in a first unit corresponding to a dark part of the display area to a second unit corresponding to a bright part (see paragraph 177); (Boost-up technology is read as “push-up.” Miyazawa shows distributing electric power saved in a dark region to a high luminance region to intensively emit light).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teaching of Miyazawa et al. into the teaching of Kurokawa and Ozaki in order to improve the dynamic range of output luminance (see Miyazawa et al.; paragraph 179).
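For illustration only, the "push-up" processing referenced above (redistributing power curbed in area units corresponding to dark parts of the display to units corresponding to bright parts) might be sketched as follows. The power budget, the headroom cap, and the proportional split are assumptions made solely for illustration, not features disclosed by Miyazawa.

```python
# Illustrative sketch only of the "push-up" idea referenced above: power not spent
# in area units corresponding to dark parts of the display is redistributed to area
# units corresponding to bright parts, within a fixed total power budget. The budget,
# the headroom cap, and the proportional split are assumptions for illustration.
import numpy as np

def push_up(pattern: np.ndarray, headroom: float = 1.5) -> np.ndarray:
    """pattern: per-zone drive levels in [0, 1]; nominal power budget = pattern.size."""
    budget = float(pattern.size)        # nominal power if every zone were driven at 1.0
    saved = budget - pattern.sum()      # power curbed in the darker zones
    bright = pattern > pattern.mean()   # zones treated as the bright part
    boosted = pattern.copy()
    if bright.any() and saved > 0:
        boost = saved / bright.sum()    # spread the saved power over the bright zones
        boosted[bright] = np.minimum(boosted[bright] + boost, headroom)
    return boosted

zones = np.array([[0.1, 0.2, 0.9], [0.1, 0.8, 0.1]])
print(push_up(zones))  # bright zones are driven above their nominal level
```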
Claims 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over Kurokawa et al. (Publication number: US 2018/0005588) in view of Ozaki, further in view of Ikai et al. (Publication number: US 2019/0068967).
Consider Claim 31, Kurokawa in view of Ozaki does not specifically show that the second information includes at least one of information for decoding a video signal of the target display image and information for decoding an audio signal synchronized with the video signal.
In related art, Ikai et al. shows that the second information includes at least one of information for decoding a video signal of the target display image and information for decoding an audio signal synchronized with the video signal (see paragraphs 137 and 138).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teaching of Ikai et al. into the teaching of Kurokawa and Ozaki in order to provide multiple pixel value prediction modes (see Ikai et al.; paragraphs 138 and 139).
Consider Claim 32, Kurokawa in view of Ozaki does not specifically show that the second information includes information about content output through the image display.
In related art, Ikai et al. shows that the second information includes information about content output through the image display (see paragraphs 137 and 138).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teaching of Ikai et al. into the teaching of Kurokawa and Ozaki in order to provide multiple pixel value prediction modes (see Ikai et al.; paragraphs 138 and 139).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL A FARAGALLA whose telephone number is (571)270-1107. The examiner can normally be reached Mon-Fri 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL A FARAGALLA/Primary Examiner, Art Unit 2624 01/14/2026