Prosecution Insights
Last updated: April 19, 2026
Application No. 17/889,988

METHODS AND SYSTEMS FOR LOW LIGHT MEDIA ENHANCEMENT

Status: Final Rejection (§103)
Filed: Aug 17, 2022
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 5m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% (58 granted / 77 resolved), above average, +13.3% vs Tech Center average
Interview Lift: +18.1%, a strong lift in allow rate for resolved cases with an interview
Typical Timeline: 3y 5m average prosecution
Currently Pending: 46 applications
Career History: 123 total applications across all art units

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates; based on career data from 77 resolved cases.

Office Action

§103 Final Rejection, mailed Mar 18, 2026 (current)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that the application is a National Stage application of PCT/KR2022/008294. Priority to Indian application IN 202141026673, with a priority date of 15 June 2021, is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.

Information Disclosure Statement

The IDSs dated 17 August 2022, 23 December 2022, 18 July 2023, 8 November 2023, and 28 February 2024, which have been previously considered, remain placed in the application file.

Response to Arguments

The reply filed on 21 January 2026 has been entered. Applicant's arguments with respect to claims 1-10 and 15-17 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. Claims 11-14 remain objected to. Claims 1-17 are pending in this application and have been considered below.

Claim Interpretation

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning of claim terms, as understood by one having ordinary skill in the art, dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional, but does not require that feature or step, does not limit the scope of a claim under the broadest reasonable interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claim 2 recites "at least one of" followed by the list "noise," "low brightness," "artificial flickering," and "color artifacts." Since "at least one of" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

1st Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 6-8 and 15-16 are rejected under 35 U.S.C. 103 as obvious over US 2021/0133943 A1 (Lee et al.) in view of US Patent 8,891,021 B1 (Li et al.). The references are listed in a PTO-892 from the Office Action in which they are first used.

Claim 1

[Lee et al. Fig. 5: luminance adjustment using neural networks.]
Regarding Claim 1, Lee et al. teach a method for enhancing media ("The present disclosure is directed to addressing an issue associated with some related art in which image processing is not efficient because only one designated neural network is used to perform illumination improvement processing for an image," paragraph [0014]), the method comprising:

receiving, by an electronic device, a media stream ("receiving original video data including a plurality of frames," paragraph [0019]);

performing, by the electronic device, an alignment of a plurality of frames of the media stream ("For example, it may be assumed that a video having enhanced illumination has an attribute of having a plurality of frames of general illumination images arranged consecutively, followed by a plurality of frames of enhanced ultra-low illumination images arranged consecutively," paragraph [0167], where "arranged consecutively" is within the broadest reasonable interpretation of alignment);

correcting, by the electronic device, a brightness of the plurality of frames ("enhance the illumination of all images of low illumination images, ultra-low illumination images, and general illumination images," paragraph [0161]);

selecting, by the electronic device, one of a first neural network, a second neural network, or a third neural network ("selecting a first neural network according to the illumination attribute of the first image group and selecting a second neural network according to the illumination attribute of the second image group, among groups of neural networks for image enhancement," paragraph [0019]), by analyzing parameters of the plurality of frames having the corrected brightness ("determining parameters of the artificial neural network by using the training data, to perform tasks such as classification, regression analysis, and clustering of inputted data," paragraph [0073]); and

generating, by the electronic device, an output media stream by processing the plurality of frames of the media stream using the selected one of the first neural network, the second neural network, or the third neural network ("As mentioned above, by classifying an image according to illumination, arranging the classified images by the illumination, and dynamically selecting and utilizing a neural network for image enhancement optimized for each illumination, embodiments of the present disclosure may achieve the illumination improvement of images efficiently and effectively," paragraph [0185]),

wherein the selecting, by the electronic device, of the one of the first neural network, the second neural network or the third neural network comprises: selecting the one of the first neural network, the second neural network or the third neural network ("When the images are arranged according to illumination, neural networks for image enhancement may be selected according to the illumination attributes of the arranged images (S130)," paragraph [0156]).

Lee et al. is not relied upon to explicitly teach all of the shot boundary limitations.

[Li et al. Fig. 2: shot boundary/long strobe detection.]
However, Li et al. teach wherein the parameters indicate whether at least one of a shot boundary and artificial light flickering is present in the plurality of frames ("Strobes are commonly produced in video. As a cinematic feature, it is often used to signal emotions or as a separator for the transition from one shot to another. Sometimes, strobes are due to physical reasons, such as the video source directly facing a directional light source," Col. 1, lines 50-55);

making a first determination about whether the shot boundary is present in the plurality of frames based on a temporal similarity between the plurality of frames ("If it is determined that potential strobe frames ended (by previous actions discussed above), all potential strobe frames (in total C of them) will be verified (S828). If all the potential strobe frames are verified (Y at S828), then, encoding component 714 is instructed to encode the frames in a first manner which is optimal for strobe frames (S830)," Col. 6, lines 54-59, where strobe frames, also called long strobes, are shot boundaries);

making a second determination about whether the artificial light flickering is present in the plurality of frames ("Detecting and identifying a strobe within an image frame may be useful for many reasons. For example, image frames having strobes therein may need different encoding. Therefore, if a strobe can be recognized, appropriate encoding resources may be allocated," Col. 1, lines 59-63); and

based on a first result of the first determination and a second result of the second determination ("Luminance component 706 is operable to generate a first luminance value corresponding to a first sectional image data from the buffered sectional frame image data and to generate a second luminance value corresponding to a second sectional frame image data from the buffered sectional frame image data," Col. 8, lines 43-48).

Therefore, taking the teachings of Lee et al. and Li et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Video Data Quality Improving Method and Apparatus" as taught by Lee et al. to use "System and Method of Detecting Strobe using Temporal Window" as taught by Li et al. The suggestion/motivation for doing so would have been that "[d]etecting and identifying a strobe within an image frame may be useful for many reasons. For example, image frames having strobes therein may need different encoding. Therefore, if a strobe can be recognized, appropriate encoding resources may be allocated," as noted by the Li et al. disclosure in Col. 1, lines 59-63. The combination is further motivated because it would predictably have a higher quality, as there is a reasonable expectation that strobes, both between and within frames, would need to be corrected; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 15, while noting that the rejection above cites both device and method disclosures. Claim 15 is mapped below for clarity of the record and to specify any new limitations not included in claim 1.
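For orientation, the selection logic recited in claims 1 and 6 (shot boundary first, then flicker, otherwise the default denoiser) can be sketched as follows; the function and field names are hypothetical and appear in neither Lee nor Li:

from dataclasses import dataclass
from typing import Callable

@dataclass
class FrameParams:
    shot_boundary: bool   # result of the first determination
    flicker: bool         # result of the second determination

def select_network(params: FrameParams, first: Callable, second: Callable,
                   third: Callable) -> Callable:
    """Select one of three networks from the analyzed frame parameters,
    in the order recited in claims 1 and 6 (a hypothetical sketch)."""
    if params.shot_boundary:
        return first    # high-complexity single-frame network
    if params.flicker:
        return second   # temporally guided joint-deflicker network
    return third        # lighter multi-frame denoising network

Note that the ordering matters: per claim 6, flicker is only analyzed when no shot boundary is found.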
Claim 2

Regarding claim 2, Lee et al. teach the method of claim 1, wherein the media stream is captured under low light conditions ("classifying an image according to illumination, and selecting a neural network for image enhancement suitable for the classified image to perform image enhancement," paragraph [0015]), and wherein the media stream comprises at least one of noise, low brightness ("in order to acquire the best image from a low illumination image, it may be desirable to apply a neural network for image enhancement having a higher complexity as the illumination is lower," paragraph [0135]), artificial flickering ("images as a natural video without a flickering phenomenon. Accordingly, when selecting frames continuously arranged for less than a predetermined time among the frames of the first image group," paragraph [0185]), and color artifacts.

Claim 6

Regarding claim 6, Lee et al. teach the method of claim 1, wherein the selecting, by the electronic device, one of the first neural network, the second neural network or the third neural network comprises:

based on the first determination indicating that the shot boundary is present in the plurality of frames, selecting the first neural network for generating the output media stream by processing the plurality of frames of the media stream ("When the images are arranged according to illumination, neural networks for image enhancement may be selected according to the illumination attributes of the arranged images (S130)," paragraph [0156]);

based on the first determination indicating that the shot boundary is not present in the plurality of frames, making the second determination by analyzing a presence of the artificial light flickering in the plurality of frames ("images as a natural video without a flickering phenomenon. Accordingly, when selecting frames continuously arranged for less than a predetermined time among the frames of the first image group," paragraph [0182], where "without a flickering phenomenon" requires analyzing a presence of flickering);

based on the second determination indicating that the artificial light flickering is present in the plurality of frames, selecting the second neural network for generating the output media stream by processing the plurality of frames of the media stream ("Since the low illumination images may be enhanced at a high speed by the low illumination image enhancement method as described above, a video having enhanced illumination may be reproduced in real time (S1600)," paragraph [0184], where the method as described above is the second neural network); and

based on the second determination indicating that the artificial light flickering is not present in the plurality of frames, selecting the third neural network for generating the output media stream by processing the plurality of frames of the media stream ("In other words, by classifying and arranging the images forming the video into general illumination images, low illumination images, and ultra-low illumination images according to the illumination attribute of the image, and selecting a neural network for image enhancement suitable for each illumination as classified and arranged, an optimal high illumination image for each illumination may be acquired," paragraph [0186], which teaches using multiple neural networks based on the conditions present in the media stream).

Lee et al. is not relied upon to explicitly teach all of the shot boundary limitations. However, Li et al. teach making the first determination by analyzing each frame with respect to earlier frames to determine whether the shot boundary is detected for each of the plurality of frames ("In general, to identify a long strobe, first the differential luminance between a frame i and a frame i-k is determined, where i is the current frame and k is an integer. Then it is determined whether this differential luminance is greater than a predetermined threshold," Col. 6, lines 24-28). Lee et al. and Li et al. are combined as per claim 1.
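Li's long-strobe test as quoted above (differential luminance between frame i and frame i-k compared against a predetermined threshold, Col. 6, lines 24-28) reduces to a comparison of per-frame luminance statistics. A minimal sketch, assuming mean luma as the luminance measure and an arbitrary threshold, neither of which is specified by the quotation:

import numpy as np

def is_shot_boundary(frames: list, i: int, k: int = 1,
                     threshold: float = 30.0) -> bool:
    """Flag frame i as a potential shot boundary / long-strobe frame when
    its differential luminance against frame i-k exceeds a threshold
    (cf. Li, Col. 6, lines 24-28). Frames are 8-bit luma arrays here."""
    lum_i = float(np.mean(frames[i]))       # luminance of the current frame
    lum_ik = float(np.mean(frames[i - k]))  # luminance of frame i-k
    return abs(lum_i - lum_ik) > threshold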
Claim 7

Regarding claim 7, Lee et al. teach the method of claim 6, wherein the first neural network is a high complexity neural network with one input frame, wherein the second neural network is a temporally guided lower complexity neural network with 'q' number of input frames and a previous output frame for joint deflickering or joint denoising ("As mentioned above, by classifying an image according to illumination, arranging the classified images by the illumination, and dynamically selecting and utilizing a neural network for image enhancement optimized for each illumination, embodiments of the present disclosure may achieve the illumination improvement of images efficiently and effectively," paragraph [0185]), and wherein the third neural network is a neural network with 'p' number of input frames and the previous output frame for denoising, wherein 'p' is less than 'q' ("Thereafter, a selection image may be extracted from the first images included in one group (S1300). According to the present embodiment, for example, thirty first images may be reproduced during 30 seconds while being reproduced as the first images," paragraph [0177], where 30 is less than the number of images used by the low (deep) illumination network).

Claim 8

Regarding claim 8, Lee et al. teach the method of claim 7, wherein the first neural network comprises multiple residual blocks at a lowest level for enhancing noise removal capabilities, and wherein the second neural network comprises at least one convolution operation with less feature maps and the previous output frame as a guide for processing a plurality of input frames ("As mentioned above, by classifying an image according to illumination, arranging the classified images by the illumination, and dynamically selecting and utilizing a neural network for image enhancement optimized for each illumination, embodiments of the present disclosure may achieve the illumination improvement of images efficiently and effectively," paragraph [0185]).
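Structurally, claim 7 differentiates the three networks mainly by input arity: one frame for the first, 'q' frames plus the previous output for the second, and 'p' frames plus the previous output for the third, with p < q. A shape-only sketch with placeholder bodies; nothing here is taken from Lee:

import numpy as np

def first_net(frame: np.ndarray) -> np.ndarray:
    """High-complexity network: a single input frame (claim 7)."""
    return frame  # placeholder for the heavy single-frame model

def second_net(frames: list, prev_out: np.ndarray) -> np.ndarray:
    """Temporally guided lower-complexity network: q input frames plus the
    previous output frame, for joint deflickering or denoising (claim 7)."""
    return 0.5 * (sum(frames) / len(frames)) + 0.5 * prev_out  # placeholder

def third_net(frames: list, prev_out: np.ndarray) -> np.ndarray:
    """Denoising network over p input frames plus the previous output,
    where p < q (claim 7)."""
    return sum(frames) / len(frames)  # placeholder average-style denoise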
Claim 15

Regarding claim 15, Lee et al. teach an electronic device ("The present disclosure is directed to addressing an issue associated with some related art in which image processing is not efficient because only one designated neural network is used to perform illumination improvement processing for an image," paragraph [0014]) comprising:

at least one processor ("Specifically, the processor of the apparatus for improving video quality," paragraph [0178]); and

a memory configured to store instructions which, when executed by the at least one processor, cause the electronic device to ("Examples of the computer-readable media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVD-ROM disks; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and execute program codes, such as ROM, RAM, and flash memory devices," paragraph [0188]):

receive a media stream ("receiving original video data including a plurality of frames," paragraph [0019]);

perform an alignment of a plurality of frames of the media stream ("For example, it may be assumed that a video having enhanced illumination has an attribute of having a plurality of frames of general illumination images arranged consecutively, followed by a plurality of frames of enhanced ultra-low illumination images arranged consecutively," paragraph [0167], where "arranged consecutively" is within the broadest reasonable interpretation of alignment);

correct a brightness of the plurality of frames ("enhance the illumination of all images of low illumination images, ultra-low illumination images, and general illumination images," paragraph [0161]);

select one of a first neural network, a second neural network, or a third neural network ("selecting a first neural network according to the illumination attribute of the first image group and selecting a second neural network according to the illumination attribute of the second image group, among groups of neural networks for image enhancement," paragraph [0019]), by analyzing parameters of the plurality of frames having the corrected brightness ("determining parameters of the artificial neural network by using the training data, to perform tasks such as classification, regression analysis, and clustering of inputted data," paragraph [0073]); and

generate an output media stream by processing the plurality of frames of the media stream using the selected one of the first neural network, the second neural network, or the third neural network ("As mentioned above, by classifying an image according to illumination, arranging the classified images by the illumination, and dynamically selecting and utilizing a neural network for image enhancement optimized for each illumination, embodiments of the present disclosure may achieve the illumination improvement of images efficiently and effectively," paragraph [0185]),

wherein the selecting, by the electronic device, of the one of the first neural network, the second neural network or the third neural network comprises: select one of the first neural network, the second neural network or the third neural network ("When the images are arranged according to illumination, neural networks for image enhancement may be selected according to the illumination attributes of the arranged images (S130)," paragraph [0156]).

Lee et al. is not relied upon to explicitly teach all of the shot boundary limitations.
However, Li et al. teach wherein the parameters indicate whether at least one of a shot boundary and artificial light flickering is present in the plurality of frames ("Strobes are commonly produced in video. As a cinematic feature, it is often used to signal emotions or as a separator for the transition from one shot to another. Sometimes, strobes are due to physical reasons, such as the video source directly facing a directional light source," Col. 1, lines 50-55);

make a first determination about whether the shot boundary is present in the plurality of frames based on a temporal similarity between the plurality of frames ("If it is determined that potential strobe frames ended (by previous actions discussed above), all potential strobe frames (in total C of them) will be verified (S828). If all the potential strobe frames are verified (Y at S828), then, encoding component 714 is instructed to encode the frames in a first manner which is optimal for strobe frames (S830)," Col. 6, lines 54-59, where strobe frames, also called long strobes, are shot boundaries);

make a second determination about whether the artificial light flickering is present in the plurality of frames ("Detecting and identifying a strobe within an image frame may be useful for many reasons. For example, image frames having strobes therein may need different encoding. Therefore, if a strobe can be recognized, appropriate encoding resources may be allocated," Col. 1, lines 59-63); and

based on a first result of the first determination and a second result of the second determination ("Luminance component 706 is operable to generate a first luminance value corresponding to a first sectional image data from the buffered sectional frame image data and to generate a second luminance value corresponding to a second sectional frame image data from the buffered sectional frame image data," Col. 8, lines 43-48).

Lee et al. and Li et al. are combined as per claim 1.

Claim 16

Regarding claim 16, Lee et al. teach the method of claim 1, further comprising determining whether the artificial light flickering is present in the plurality of frames, wherein the selecting of the one of the first neural network, the second neural network, or the third neural network is performed based on a result of the determining ("When the images are arranged according to illumination, neural networks for image enhancement may be selected according to the illumination attributes of the arranged images (S130)," paragraph [0156]).

2nd Claim Rejections - 35 U.S.C. § 103

Claims 3, 9-10 and 17 are rejected under 35 U.S.C. 103 as obvious over US 2021/0133943 A1 (Lee et al.) and US Patent 8,891,021 B1 (Li et al.), in view of the non-patent publication "Seeing Motion in the Dark" (Chen et al.). The references are listed in a PTO-892 from the Office Action in which they are first used.

Claim 3

Regarding Claim 3, Lee et al. and Li et al. teach the method of claim 1, as noted above. Lee et al. and Li et al. are not relied upon to explicitly teach all of the denoising limitations. However, Chen et al. teach wherein the output media stream is a denoised media stream with enhanced brightness and zero flicker ("We proposed a siamese network that preserves color while significantly suppressing spatial and temporal artifacts," page 3191, paragraph 4, where "preserves color" is enhanced brightness and suppressing spatial and temporal artifacts is zero flicker).
Therefore, taking the teachings of Lee et al., Li et al., and Chen et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Video Data Quality Improving Method and Apparatus" as taught by Lee et al. and "System and Method of Detecting Strobe using Temporal Window" as taught by Li et al. to use "Seeing Motion in the Dark" as taught by Chen et al. The suggestion/motivation for doing so would have been that "[d]eep learning has recently been applied with impressive results to extreme low-light imaging. Despite the success of single-image processing, extreme low-light video processing is still intractable due to the difficulty of collecting raw video data with corresponding ground truth," as noted by the Chen et al. disclosure in the Abstract. The combination is further motivated because it would predictably have a higher efficiency, as there is a reasonable expectation that luminance adjustments will be required in low light situations; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 9

Regarding claim 9, Lee et al. and Li et al. teach the method of claim 6, as noted above. Lee et al. and Li et al. are not relied upon to explicitly teach all of the Siamese training limitations. However, Chen et al. teach wherein the first neural network, the second neural network and the third neural network are trained using a multi-frame Siamese training method to generate the output media stream by processing the plurality of frames of the media stream ("The proposed method involves training a deep siamese network [6] with a specially designed loss that encourages temporal stability," page 3184, last paragraph). Lee et al., Li et al., and Chen et al. are combined as per claim 3.

Claim 10

Regarding claim 10, Lee et al. and Li et al. teach the method of claim 9, further comprising training a neural network of at least one of the first neural network, the second neural network and the third neural network by: creating a dataset for training the neural network, wherein the dataset comprises one of a local dataset and a global dataset ("As described above, the training system 300 may generate a neural network group for image enhancement for each image illumination, in which neural networks trained for optimal illumination improvement in processing time and processing performance for each image illumination are included," paragraph [0178]). Lee et al. and Li et al. are not relied upon to explicitly teach all of the Siamese training limitations.
However, Chen et al. teach selecting at least two sets of frames from the created dataset, wherein each set comprises at least three frames ("To simulate synthetic noise for comparison, we use the same sampling strategies for σr and σs as [35]," page 3186, paragraph 4, where the referenced sampling strategy includes using more than three frames to which noise is added); adding a synthetic motion to the selected at least two sets of frames, wherein the at least two sets of frames added with the synthetic motion comprise different noise realizations ("We analyze the noise distribution in the DRV dataset and compare it with a synthetic noise model used in recent work [35]," page 3186, paragraph 4); and performing a Siamese training of the neural network using a ground truth media and the at least two sets of frames added with the synthetic motion ("The proposed method involves training a deep siamese network [6] with a specially designed loss that encourages temporal stability," page 3184, last paragraph). Lee et al., Li et al., and Chen et al. are combined as per claim 3.

Claim 17

Regarding claim 17, Lee et al. and Li et al. teach the method of claim 1, as noted above. Lee et al. and Li et al. are not relied upon to explicitly teach all of the synthetic trajectory generation limitations. However, Chen et al. teach wherein at least one of the first neural network, the second neural network and the third neural network is trained based on a dataset that is created using synthetic trajectory generation ("To simulate synthetic noise for comparison, we use the same sampling strategies for σr and σs as [35]," page 3186, paragraph 4, where the referenced sampling strategy is used for training). Lee et al., Li et al., and Chen et al. are combined as per claim 3.
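Claims 9-10, as mapped onto Chen's siamese training, boil down to: run the same clean frames through the network twice under different noise realizations (with a shared synthetic motion), and penalize both the reconstruction error against the ground truth and the disagreement between the two branches. A toy sketch; the Gaussian noise model, the loss weighting, and the omission of the synthetic-motion step are simplifications, not Chen's actual recipe:

import numpy as np

rng = np.random.default_rng(0)

def add_noise(frames: np.ndarray) -> np.ndarray:
    """One synthetic noise realization (Gaussian purely for illustration)."""
    return frames + rng.normal(0.0, 0.05, size=frames.shape)

def siamese_loss(net, clean_frames: np.ndarray, lam: float = 1.0) -> float:
    """Reconstruction loss against ground truth plus a stability term
    between two noise realizations of the same frames (cf. Chen's
    'specially designed loss that encourages temporal stability')."""
    out_a = net(add_noise(clean_frames))  # branch A: first realization
    out_b = net(add_noise(clean_frames))  # branch B: second realization
    recon = np.mean((out_a - clean_frames) ** 2) \
          + np.mean((out_b - clean_frames) ** 2)
    stability = np.mean((out_a - out_b) ** 2)  # branches should agree
    return float(recon + lam * stability)

# Example: loss = siamese_loss(lambda x: x, np.zeros((4, 8, 8)))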
3rd Claim Rejections - 35 U.S.C. § 103

Claims 4 and 5 are rejected under 35 U.S.C. 103 as obvious over US 2021/0133943 A1 (Lee et al.), US Patent 8,891,021 B1 (Li et al.) and the non-patent publication "Seeing Motion in the Dark" (Chen et al.), in view of US 2007/0286523 A1 (Kim et al.). The references are listed in a PTO-892 from the Office Action in which they are first used.

Claim 4

Regarding Claim 4, Lee et al. teach the method of claim 1, wherein the correcting the brightness of the plurality of frames of the media stream comprises: identifying a single frame or the plurality of frames of the media stream as an input frame ("When the images are arranged according to illumination, neural networks for image enhancement may be selected according to the illumination attributes of the arranged images (S130)," paragraph [0156]); selecting a brightness multiplication factor for correcting the brightness of the input frame using a future temporal guidance ("receiving original video data including a plurality of frames, classifying the frames into at least a first image group and a second image group according to illumination for image enhancement processing of the original video data," paragraph [0019]). Lee et al. and Li et al. are not relied upon to explicitly teach all of the camera response function limitations.

However, Chen et al. teach applying a linear boost on the input frame based on the brightness multiplication factor ("The pixel values are linearly scaled based on the exposure value (EV) difference and clipped to match the brightness and dynamic range of the ground truth," page 3184, paragraph 1); and applying a Camera Response Function (CRF) on the input frame to correct the brightness of the input frame, wherein the CRF is a function of a sensor type and metadata ("The low-light data is linearized by first subtracting the black level and then applied the digital gain," page 3186, paragraph 5, where subtracting the black level is a camera response function based on the black level), wherein the metadata comprises an exposure value and an International Standard Organization (ISO) value ("The exposure differences between the raw low-light input and the long-exposure ground truth in the static set are between factors of 120 and 300. We apply digital gains on the low-light raw frames in preprocessing based on these exposure ratios," page 3186, paragraph 3, where the factors between 120 and 300 are ISO exposure values), and wherein the CRF and the ICRF are stored as Look-up-tables (LUTs) ("we train the kernel prediction network (KPN) [35] for spatial and temporal denoising with default settings using the author-provided code," page 3188, paragraph 3, where the default settings are in a look-up table).

Lee et al., Li et al. and Chen et al. are not relied upon to explicitly teach all of the inverse camera response function limitations. However, Kim et al. teach linearizing the input frame using an Inverse Camera Response Function (ICRF) ("Meanwhile, if the input image belongs to a bright image range, the intensity mapping unit 120 generates the input image as a plurality of images using an inverse function of the intensity mapping function," paragraph [0043]).

Therefore, taking the teachings of Lee et al., Li et al., Chen et al. and Kim et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Video Data Quality Improving Method and Apparatus" as taught by Lee et al., "System and Method of Detecting Strobe using Temporal Window" as taught by Li et al. and "Seeing Motion in the Dark" as taught by Chen et al. to use "Image Processing Method and Apparatus for Contrast Enhancement" as taught by Kim et al. The suggestion/motivation for doing so would have been that "[r]elated art CE methods include histogram equalization (HE) and gamma correction. The HE method enhances a contrast using a probability density function (pdf) of an image as a mapping function, when an image having a low contrast exists due to an imbalance in a brightness distribution of pixels," as noted by the Kim et al. disclosure in paragraph [0007]. The combination is further motivated because it would predictably have a higher adaptability, as there is a reasonable expectation that luminance correction will need to be adjusted in many environments; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
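The correction chain of claim 4 (linearize with an inverse camera response function, apply the brightness multiplication factor as a linear boost, then re-encode with the CRF, with both curves stored as LUTs) might look like the following; the gamma-2.2 curve is a stand-in for a real sensor's response, which in practice would be derived from the sensor type and capture metadata (EV, ISO):

import numpy as np

# 256-entry LUTs; gamma 2.2 stands in for the sensor's actual response.
_x = np.linspace(0.0, 1.0, 256)
CRF_LUT = _x ** (1.0 / 2.2)   # linear -> encoded
ICRF_LUT = _x ** 2.2          # encoded -> linear (inverse CRF)

def correct_brightness(frame_u8: np.ndarray, boost: float) -> np.ndarray:
    """Linearize via the ICRF LUT, apply the brightness multiplication
    factor as a linear boost, then re-apply the CRF LUT (claim 4 sketch)."""
    linear = ICRF_LUT[frame_u8]                  # 8-bit pixels index the LUT
    boosted = np.clip(linear * boost, 0.0, 1.0)  # linear-domain boost
    idx = np.round(boosted * 255.0).astype(np.uint8)
    return np.round(CRF_LUT[idx] * 255.0).astype(np.uint8)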
Claim 5

Regarding claim 5, Lee et al. teach the method of claim 4, wherein the selecting the brightness multiplication factor includes: analyzing the brightness of the input frame ("classifying an image according to illumination, and selecting a neural network for image enhancement suitable for the classified image to perform image enhancement," paragraph [0015]). Lee et al. and Li et al. are not relied upon to explicitly teach all of the brightness multiplication factor limitations.

However, Chen et al. teach identifying a maximum constant boost value as the brightness multiplication factor, based on the brightness of the input frame being less than a threshold and a brightness of all frames in a future temporal buffer being less than the threshold ("The pixel values are linearly scaled based on the exposure value (EV) difference and clipped to match the brightness and dynamic range of the ground truth," page 3184, paragraph 1).

Lee et al., Li et al. and Chen et al. are not relied upon to explicitly teach all of the monotonically decreasing boost value limitations. However, Kim et al. teach identifying a boost value of a monotonically decreasing function between the maximum constant boost value and 1 as the brightness multiplication factor, based on the brightness of the input frame being less than the threshold, and the brightness of all the frames in the future temporal buffer being greater than the threshold ("The determination unit 110 determines whether to perform contrast enhancement processing, according to whether an average brightness value of an input image is within a predetermined brightness range," paragraph [0037]); identifying a unit gain boost value as the brightness multiplication factor, based on the brightness of the input frame being greater than the threshold and the brightness of all the frames in the future temporal buffer being greater than the threshold ("The determination unit 110 determines whether to perform contrast enhancement processing, according to whether an average brightness value of an input image is within a predetermined brightness range," paragraph [0037]); and identifying a boost value of a monotonically increasing function between 1 and the maximum constant boost value as the brightness multiplication factor, based on the brightness of the input frame being greater than the threshold, and the brightness of the frames in the future temporal buffer being less than the threshold ("The determination unit 110 determines whether to perform contrast enhancement processing, according to whether an average brightness value of an input image is within a predetermined brightness range," paragraph [0037]).

Lee et al., Li et al., Chen et al. and Kim et al. are combined as per claim 4.
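Claim 5's four cases key off two comparisons against a brightness threshold: the input frame itself, and the frames in a future temporal buffer. One schematic rendering, where the ramp parameter and its shape are assumptions (the claim requires only monotonicity between unit gain and the maximum constant boost):

def select_boost(frame_brightness: float, future_brightness: float,
                 threshold: float, max_boost: float, t: float = 0.5) -> float:
    """Boost factor per the four cases of claim 5; t in [0, 1] is a
    hypothetical progress variable along the monotone transition."""
    dark_now = frame_brightness < threshold
    dark_future = future_brightness < threshold
    if dark_now and dark_future:
        return max_boost                           # maximum constant boost
    if dark_now and not dark_future:
        return max_boost - t * (max_boost - 1.0)   # monotonically decreasing
    if not dark_now and not dark_future:
        return 1.0                                 # unit gain
    return 1.0 + t * (max_boost - 1.0)             # monotonically increasing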
Allowable Subject Matter

Claims 11-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 2019/0304068 A1 to Vogels et al. discloses denoising Monte Carlo renderings using neural networks. The temporal approach extracts and combines feature representations from neighboring frames rather than building a temporal context using recurrent connections. A multiscale architecture includes separate single-frame or temporal denoising modules for individual scales, and one or more scale-compositor neural networks configured to adaptively blend individual scales.

US 2022/0121878 A1 to Butler et al. discloses obtaining first and second pluralities of images of a subject bearing an associated imaging label, identifying locations of the imaging label within the first plurality of images, and using the identified locations to generate a plurality of labeled images based upon the second plurality of images. In this manner, a large collection of labeled images may be collected without the need for manual labeling by a human actor. This large collection of labeled images may then form the training set for training an ML system.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS, whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.E.W/ Examiner, Art Unit 2664
Date: 18 March 2026
/JENNIFER MEHMOOD/ Supervisory Patent Examiner, Art Unit 2664

Prosecution Timeline

Aug 17, 2022: Application Filed
Nov 20, 2024: Non-Final Rejection — §103
Jan 09, 2025: Interview Requested
Jan 23, 2025: Applicant Interview (Telephonic)
Jan 23, 2025: Examiner Interview Summary
Feb 28, 2025: Response Filed
Apr 17, 2025: Final Rejection — §103
Aug 01, 2025: Request for Continued Examination
Aug 05, 2025: Response after Non-Final Action
Oct 16, 2025: Non-Final Rejection — §103
Jan 21, 2026: Response Filed
Mar 18, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755: DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597226: METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591979: IMAGE GENERATION METHOD AND DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12588876: TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586363: GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 75%
Grant Probability with Interview: 93% (+18.1%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 77 resolved cases by this examiner; grant probability is derived from the career allow rate.

Free tier: 3 strategy analyses per month