DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Remarks page 13, filed 12/09/2025, with respect to the rejections of claims 1-8 and 10-20 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections of claims 1-8 and 10-20 have been withdrawn.
Applicant’s arguments, see Remarks pages 13-17, filed 12/09/2025, with respect to the rejections of claims 1-8 and 10-20 under 35 U.S.C. 101 have been fully considered and are persuasive. The rejections of claims 1-8 and 10-20 have been withdrawn.
Applicant's arguments, see Remarks pages 17-21, filed 12/09/2025, with respect to the rejections of amended claims 1 and 10-11 under 35 U.S.C. 103 have been fully considered but are not persuasive.
On page 19 of Remarks, Applicant argues:
[Applicant's argument reproduced as image: media_image1.png]
Applicant’s arguments with respect to the amended claim 1 limitation “converting the plurality of first blocks of the original image and the plurality of second blocks of the enhanced image into gray images, wherein each block has a corresponding gray image” have been fully considered and are moot in view of the new grounds of rejection (detailed in the rejections below) necessitated by Applicant’s amendment to the claim(s).
On Pages 19-20, Applicant argues:
[Applicant's argument reproduced as image: media_image2.png]
Examiner respectfully disagrees.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Section: III. C. Image Quality Assessment Using SSIM Index of Wang discloses “In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window…The estimates of local statistics
μx, σx and σxy are then modified accordingly as
[equation image: media_image3.png]
,” wherein the calculation of the local patch statistics of each reference and distorted patch constitutes calculation of information entropy. In addition, Section: III. C. Image Quality Assessment Using SSIM Index of Wang discloses “For image quality assessment, it is useful to apply the SSIM index locally rather than globally…localized quality measurement can provide a spatially varying quality map of the image, which delivers more information about the quality degradation of the image and may be useful in some applications,” wherein the SSIM index, determined based on the comparison of reference and distorted image patches, constitutes an entropy difference measuring a degree of texture loss.
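The sliding-window computation quoted above can be sketched in a few lines. This is a minimal illustration, not Wang's reference implementation: the 8x8 window follows the quoted passage, while the stabilizing constants C1 and C2 and the unweighted (non-Gaussian) window statistics are conventional assumptions.

```python
import numpy as np

def local_ssim(x, y, win=8, c1=6.5025, c2=58.5225):
    """Compute SSIM locally: the statistics mu, sigma and sigma_xy are
    estimated inside a win x win square window that moves pixel-by-pixel
    over the image, producing one SSIM value per window position."""
    h, w = x.shape
    scores = []
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            px = x[i:i + win, j:j + win].astype(float)
            py = y[i:i + win, j:j + win].astype(float)
            mx, my = px.mean(), py.mean()
            vx, vy = px.var(), py.var()
            sxy = ((px - mx) * (py - my)).mean()
            scores.append(((2 * mx * my + c1) * (2 * sxy + c2)) /
                          ((mx * mx + my * my + c1) * (vx + vy + c2)))
    # The mean SSIM (MSSIM) over all local windows rates the whole image.
    return float(np.mean(scores))
```

An undistorted copy scores 1.0, and any local degradation lowers the mean, which is how the spatially varying quality map described in the quotation arises.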
However, Wang fails to disclose the limitations “computing initial gray value distribution information corresponding to each block, wherein the initial gray value distribution information comprises a number of pixels corresponding to each gray value; generating adjusted gray value distribution information corresponding to each block by applying a preset scale window size to the initial gray distribution information corresponding to each block, wherein a number of pixels corresponding to each gray value in the adjusted gray value distribution information is a sum of pixels corresponding to gray values in the preset scale window size,” wherein each patch’s information entropy, calculated as the first and second scale information entropies, is computed based on each patch’s corresponding adjusted gray value distribution information.
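For concreteness, one plausible reading of this limitation can be sketched as follows. The names and parameters here (gray_histogram, adjusted_histogram, scale_window) are illustrative assumptions, not claim constructions: the initial distribution is a per-block histogram, the adjusted distribution sums histogram counts over a preset window of neighboring gray values, and the scale information entropy is the Shannon entropy of the adjusted distribution.

```python
import math

def gray_histogram(block, levels=256):
    """Initial gray value distribution information: the number of
    pixels corresponding to each gray value in one block."""
    hist = [0] * levels
    for v in block:
        hist[v] += 1
    return hist

def adjusted_histogram(hist, scale_window=3):
    """Adjusted gray value distribution information: each gray value's
    count becomes the sum of counts over a preset scale window of
    neighboring gray values."""
    n = len(hist)
    half = scale_window // 2
    return [sum(hist[max(0, g - half):min(n, g + half + 1)])
            for g in range(n)]

def scale_entropy(hist):
    """Shannon entropy of a (possibly adjusted) gray value distribution."""
    total = sum(hist)
    return -sum((c / total) * math.log2(c / total) for c in hist if c)
```

Under this reading, the first information entropy difference would be the difference of scale_entropy values between corresponding original and enhanced blocks.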
Section: 3. BRIGHTNESS PRESERVING BI-HISTOGRAM EQUALIZATION of Kim discloses “Next, define the respective probability density functions of the subimages XL and Xu as
[equation image: media_image4.png] and [equation image: media_image5.png]
in which nLk and nUk represent the respective numbers of Xk in {X}L and {X}U and nL and nU are the total numbers of samples in {X}L and {X}U, respectively,” wherein the probability density functions for each sub image comprise the number of pixels corresponding to each gray value in the sub image, thus corresponding to an initial gray value distribution information.
In addition, Section: 3. BRIGHTNESS PRESERVING BI-HISTOGRAM EQUALIZATION of Kim discloses “Based on these transform functions, the decomposed subimages are equalized independently and the composition of the resulting equalized subimages constitutes the output of the BBHE…where
[equation image: media_image6.png]
,” wherein the sub images are equalized based on each sub image’s adjusted gray value distribution; the number of pixels corresponding to each gray value in the adjusted gray value distribution is thus a sum of pixels corresponding to gray values in the sub image.
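The BBHE procedure relied on from Kim can be sketched as follows. This is a simplified illustration under stated assumptions (integer gray levels, a rounding-based range mapping), not Kim's exact transform functions: the image is decomposed at its mean intensity, and each subimage is equalized independently over its own gray range using its own cumulative density function.

```python
def bbhe(pixels, levels=256):
    """Sketch of brightness preserving bi-histogram equalization (BBHE):
    decompose the image at its mean, equalize each subimage independently
    via its own cumulative density function, and recompose."""
    mean = sum(pixels) / len(pixels)
    lower = [p for p in pixels if p <= mean]
    upper = [p for p in pixels if p > mean]

    def equalize(sub, lo, hi):
        # Map each gray value in the subimage into [lo, hi] according
        # to the subimage's own cumulative density function.
        if not sub:
            return {}
        hist = [0] * levels
        for p in sub:
            hist[p] += 1
        lut, acc = {}, 0
        for g in range(levels):
            acc += hist[g]
            if hist[g]:
                lut[g] = round(lo + (hi - lo) * acc / len(sub))
        return lut

    m = int(mean)
    lut_l = equalize(lower, 0, m)
    lut_u = equalize(upper, min(m + 1, levels - 1), levels - 1)
    return [lut_l[p] if p <= mean else lut_u[p] for p in pixels]
```

For example, bbhe([10, 10, 50, 200, 220]) keeps the lower subimage within [0, 98] and the upper subimage within [99, 255], reflecting the mean-based decomposition Kim describes.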
Therefore, as is further disclosed below in the rejection of claim 1 under 35 U.S.C. 103, Wang in view of Kim discloses the limitations “computing initial gray value distribution information corresponding to each block, wherein the initial gray value distribution information comprises a number of pixels corresponding to each gray value; generating adjusted gray value distribution information corresponding to each block by applying a preset scale window size to the initial gray distribution information corresponding to each block, wherein a number of pixels corresponding to each gray value in the adjusted gray value distribution information is a sum of pixels corresponding to gray values in the preset scale window size; computing a first scale information entropy corresponding to each of the plurality of first blocks of the original image and computing a second scale information entropy corresponding to each of the plurality of second blocks of the enhanced image based on corresponding adjusted gray value distribution information; and computing a degree of visual texture loss of the enhanced image relative to the original image based on a first information entropy difference between the first scale information entropy corresponding to each of the plurality of first blocks of the original image and the second scale information entropy corresponding to each of the plurality of second blocks of the enhanced image.”
As per claim(s) 10-11, arguments made in rejecting claim(s) 1 are analogous.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 10-11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (Image Quality Assessment: From Error Visibility to Structural Similarity) hereinafter referenced as Wang, in view of Fan (CN111899243A) and Kim (Contrast Enhancement Using Brightness Preserving Bi-Histogram Equalization).
Regarding claim 1, Wang discloses: A method for computing a distortion of an enhanced image relative to an original image (Wang: Abstract), comprising: obtaining the original image and the enhanced image, wherein the enhanced image is generated by image enhancement on the original image (Wang: Abstract: “we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.1”; Section: III. STRUCTURAL-SIMILARITY-BASED IMAGE QUALITY ASSESSMENT: “The motivation of our new approach is to find a more direct way to compare the structures of the reference and the distorted signals.”; Wherein the compression of an image constitutes obtaining an enhanced image.);
performing block processing on the original image and the enhanced image separately to generate a plurality of first blocks of the original image and a plurality of second blocks of the enhanced image; applying a preset scale window size (Wang: Section III. C. Image Quality Assessment Using SSIM Index: “In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window… We use a mean SSIM (MSSIM) index to evaluate the overall image quality…where X and Y are the reference and the distorted images, respectively; xj and yj are the image contents at the jth local window;”; Wherein the reference and distorted images are segmented into a plurality of image windows/blocks, and wherein the 8x8 square windows constitute the preset scale window size.);
computing a first information entropy corresponding to each of the plurality of first blocks of the original image and computing a second information entropy corresponding to each of the plurality of second blocks of the enhanced image according to the scale window size (Wang: Section: III. C. Image Quality Assessment Using SSIM Index: “In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window…The estimates of local statistics
μx, σx and σxy are then modified accordingly as
[equation image: media_image3.png]
”; Wherein the calculation of the mean intensity and standard deviation of each reference and distorted patch constitutes calculation of information entropy); and
computing a degree of visual texture loss of the enhanced image relative to the original image based on a first information entropy difference between the first information entropy corresponding to each of the plurality of first blocks of the original image and the second information entropy corresponding to each of the plurality of second blocks of the enhanced image (Wang: Section: III.B. The SSIM Index: “Suppose x and y are two nonnegative image signals, which have been aligned with each other (e.g., spatial patches extracted from each image). If we consider one of the signals to have perfect quality, then the similarity measure can serve as a quantitative measurement of the quality of the second signal. The system separates the task of similarity measurement into three comparisons: luminance, contrast and structure.…First, the luminance of each signal is compared. Assuming discrete signals, this is estimated as the mean intensity…The luminance comparison function is then a function of
μx and μy…We use the standard deviation (the square root of variance) as an estimate of the signal contrast…The contrast comparison c(x,y) is then the comparison of σx and σy.”;
Section: III. C. Image Quality Assessment Using SSIM Index: “For image quality assessment, it is useful to apply the SSIM index locally rather than globally…localized quality measurement can provide a spatially varying quality map of the image, which delivers more information about the quality degradation of the image and may be useful in some applications.”; Wherein the application of SSIM on image windows, which measures the similarity, or the difference, between the reference and distorted images based on each window’s mean intensity and signal contrast, is able to determine the quality degradation of the distorted image, which constitutes determining visual texture loss.).
Wang does not disclose expressly: converting the plurality of first blocks of the original image and the plurality of second blocks of the enhanced image into gray images, wherein each block has a corresponding gray image.
Fan discloses: a method for evaluating the clarity of an image (Fan: 0014): wherein the method comprises a grayscale conversion process performed on the captured image prior to the image clarity evaluation (Fan: 0071-0073: “the server obtains an original image, for example, an insurance policy displaying text, an insurance policy displaying a photo, etc. After the server obtains the original image, it performs grayscale processing on the original image to obtain an image to be processed…It should be understood that the original image is an image that has not been processed in any way. The office staff uploads it to the server through scanning, taking photos, etc., and the server obtains it.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the grayscale image processing disclosed by Fan by performing grayscale processing on the original and distorted images disclosed by Wang. The suggestion/motivation for doing so would have been “An original image is acquired, and grayscale conversion processing is performed on the original image to obtain the image to be processed. Therefore, performing grayscale conversion on the original image can improve the efficiency of calculating the clarity evaluation index.” (Fan: 0026-0027; Wherein the grayscale conversion allows for reduction in data processed allowing for improved efficiency.). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
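Fan's grayscale conversion step can be sketched minimally. Fan's cited paragraphs require grayscale processing but fix no particular formula, so the ITU-R BT.601 luminance weights below are an assumption for illustration only.

```python
def to_gray(rgb_pixels):
    """Convert (R, G, B) pixels to gray values with luminance weighting.
    The BT.601 weights are an illustrative assumption; Fan does not
    specify a conversion formula in the cited paragraphs."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b)
            for r, g, b in rgb_pixels]
```

Reducing three color channels to one gray channel is what yields the efficiency gain Fan cites.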
Wang in view of Fan does not disclose expressly: computing initial gray value distribution information corresponding to each block, wherein the initial gray value distribution information comprises a number of pixels corresponding to each gray value; generating adjusted gray value distribution information corresponding to each block by applying a preset scale window size to the initial gray distribution information corresponding to each block, wherein a number of pixels corresponding to each gray value in the adjusted gray value distribution information is a sum of pixels corresponding to gray values in the preset scale window size; computing a first scale information entropy corresponding to each of the plurality of first blocks of the original image and computing a second scale information entropy corresponding to each of the plurality of second blocks of the enhanced image based on corresponding adjusted gray value distribution information; computing a degree of visual texture loss of the enhanced image relative to the original image based on a first information entropy difference between the first scale information entropy corresponding to each of the plurality of first blocks of the original image and the second scale information entropy corresponding to each of the plurality of second blocks of the enhanced image.
Thus, Wang in view of Fan does not disclose expressly: generating an adjusted gray value distribution information corresponding to each image patch, wherein a number of pixels corresponding to each gray value in the adjusted gray value distribution information is a sum of pixels corresponding to gray values in the image patch; computing the first information and the second information entropies based on corresponding adjusted gray value distribution information, which are then used in computing the degree of visual texture loss.
Kim discloses: a method for segmenting an image into sub images based on the image’s intensity values, and performing independent histogram equalization processes on each sub-image, or block (Kim: Section: 6. Conclusion: “The BBHE is a novel extension of a typical histogram equalization, which utilizes independent histogram equalizations over two subimages obtained by decomposing the input image based on its mean.”). The method comprising: computing initial gray value distribution information corresponding to each block, wherein the initial gray value distribution information comprises a number of pixels corresponding to each gray value (Kim: Section: 3. BRIGHTNESS PRESERVING BI-HISTOGRAM EQUALIZATION: “Next, define the respective probability density functions of the subimages XL and Xu as
[equation image: media_image4.png] and [equation image: media_image5.png]
in which nLk and nUk represent the respective numbers of Xk in {X}L and {X}U and nL and nU are the total numbers of samples in {X}L and {X}U, respectively.”);
generating adjusted gray value distribution information corresponding to each block by applying a block size to the initial gray distribution information corresponding to each block, wherein a number of pixels corresponding to each gray value in the adjusted gray value distribution information is a sum of pixels corresponding to gray values in the block (Kim: Section: 3. BRIGHTNESS PRESERVING BI-HISTOGRAM EQUALIZATION : “Based on these transform functions, the decomposed subimages are equalized independently and the composition of the resulting equalized subimages constitutes the output of the BBHE…where
[equation image: media_image6.png]
”; Wherein equalization of each sub image based on its respective probability density function constitutes a number of pixels corresponding to each gray value in the adjusted gray value distribution being a sum of pixels corresponding to gray values in the sub image.)
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique disclosed by Kim of independently performing histogram equalization on sub images by independently performing histogram equalization on each window of the original and distorted images disclosed by Wang in view of Fan prior to calculating the SSIM indices. The suggestion/motivation for doing so would have been “Histogram equalization is widely used for contrast enhancement in a variety of applications due to its simple function and effectiveness.” (Kim: Abstract; Wherein the contrast enhancement allows for the details present in the images to be better distinguishable.). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wang in view of Fan with Kim to obtain the invention as specified in claim 1.
Regarding claim 2, Wang in view of Fan and Kim discloses: The method according to claim 1, further comprising: determining adjusted gray value distribution information corresponding to each of the plurality of first blocks of the original image based on initial gray value distribution information corresponding to each of the plurality of first blocks of the original image and the scale window size (Kim: Section: 3. BRIGHTNESS PRESERVING BI-HISTOGRAM EQUALIZATION : “Similar to the case of histogram equalization where a cumulative density function is used as a transform function, let us define the following transform functions exploiting the cumulative density functions…the decomposed subimages are equalized independently and the composition of the resulting equalized subimages constitutes the output of the BBHE.”; Where each window present in the images disclosed by Wang is processed by the sub-image histogram equalization process disclosed by Kim.), and determining the first scale information entropy corresponding to each of the plurality of first blocks based on the adjusted gray value distribution information corresponding to each of the plurality of first blocks (Wang: Section: III.B. The SSIM Index: “Suppose x and y are two nonnegative image signals, which have been aligned with each other (e.g., spatial patches extracted from each image). If we consider one of the signals to have perfect quality, then the similarity measure can serve as a quantitative measurement of the quality of the second signal.…First, the luminance of each signal is compared. Assuming discrete signals, this is estimated as the mean intensity…The luminance comparison function is then a function of
μx and μy…We use the standard deviation (the square root of variance) as an estimate of the signal contrast…The contrast comparison c(x,y) is then the comparison of σx and σy.”;
Section: III. C. Image Quality Assessment Using SSIM Index: “In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window”; Wherein the luminance and signal contrast of each window of the reference image X, constitutes the information entropy corresponding to each of the first blocks);
and determining adjusted gray value distribution information corresponding to each of the plurality of second blocks of the enhanced image based on initial gray value distribution information corresponding to each of the plurality of second blocks of the enhanced image and the scale window size (Kim: Section: 3. BRIGHTNESS PRESERVING BI-HISTOGRAM EQUALIZATION : “Similar to the case of histogram equalization where a cumulative density function is used as a transform function, let us define the following transform functions exploiting the cumulative density functions…the decomposed subimages are equalized independently and the composition of the resulting equalized subimages constitutes the output of the BBHE.”; Where each window present in the distorted images disclosed by Wang is processed by the sub-image histogram equalization process disclosed by Kim.), and determining the second scale information entropy corresponding to each of the plurality of second blocks based on the adjusted gray value distribution information corresponding to each of the plurality of second blocks (Wang: Section: III.B. The SSIM Index: “Suppose x and y are two nonnegative image signals, which have been aligned with each other (e.g., spatial patches extracted from each image). If we consider one of the signals to have perfect quality, then the similarity measure can serve as a quantitative measurement of the quality of the second signal.…First, the luminance of each signal is compared. Assuming discrete signals, this is estimated as the mean intensity…The luminance comparison function is then a function of
μx and μy…We use the standard deviation (the square root of variance) as an estimate of the signal contrast…The contrast comparison c(x,y) is then the comparison of σx and σy.”;
Section: III. C. Image Quality Assessment Using SSIM Index: “In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window”; Wherein the luminance and signal contrast of each window of the distorted image Y, constitutes the information entropy corresponding to each of the second blocks),
wherein the number of pixels corresponding to each gray value in the adjusted gray value distribution information is a sum of pixels of each gray value in a target scale window corresponding to the gray value in the initial gray value distribution information (Kim: Section: 3. BRIGHTNESS PRESERVING BI-HISTOGRAM EQUALIZATION : “Based on these transform functions, the decomposed subimages are equalized independently and the composition of the resulting equalized subimages constitutes the output of the BBHE…where
[equation image: media_image6.png]
”; Wherein each pixel in the equalized sub image corresponds to a pixel in the sub image before being equalized.); and a size of the target scale window matches the scale window size that conforms to human visual characteristics (Wang: Section: III. C. Image Quality Assessment Using SSIM Index: “For image quality assessment, it is useful to apply the SSIM index locally rather than globally…In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image.”; Wherein the window disclosed by Wang constitutes the target scale window.).
As per claim(s) 10, arguments made in rejecting claim(s) 1 are analogous. In addition, Sections III.C. Image Quality Assessment Using SSIM Index and IV.B. Test on JPEG and JPEG2000 Image Database of Wang disclose the testing and processing of images from a database using the proposed algorithm, wherein the algorithm is implemented in MATLAB, thus implying the presence of a computer device comprising a processor, a memory storing machine-readable instructions, and a bus.
As per claim(s) 11, arguments made in rejecting claim(s) 1 are analogous. In addition, Sections III.C. Image Quality Assessment Using SSIM Index and IV.B. Test on JPEG and JPEG2000 Image Database of Wang disclose the testing and processing of images from a database using the proposed algorithm, wherein the algorithm is implemented in MATLAB, thus implying the presence of a non-transitory computer-readable storage medium storing a computer program to be run by a processor.
As per claim(s) 14, arguments made in rejecting claim(s) 2 are analogous.
Claim(s) 4-5, 12, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Fan and Kim, and further in view of He et al. (CN107038699A) hereinafter referenced as He.
Regarding claim 4, Wang in view of Fan and Kim discloses: The method according to claim 1, wherein the method further comprises: determining, based on initial gray value distribution information corresponding to each of the plurality of first blocks of the original image, a first scale information entropy corresponding to each first block, and determining, based on initial gray value distribution information corresponding to each of the plurality of second blocks of the enhanced image, a second scale information entropy corresponding to each second block (Wang: Figure 3; Section: III. C. Image Quality Assessment Using SSIM Index: “For image quality assessment, it is useful to apply the SSIM index locally rather than globally…In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window”; Wherein the mean intensity and signal contrast of both reference and distorted image is calculated, after the histogram equalization of each window disclosed by Kim);
and determining a first information entropy difference between the original image and the enhanced image according to the first scale information entropy corresponding to each first block and the second scale information entropy corresponding to each second block; wherein the determining the degree of visual texture loss of the enhanced image based on the first information entropy comprises: determining the degree of visual texture loss of the enhanced image based on the first information entropy difference (Wang: Section: III. C. Image Quality Assessment Using SSIM Index: “For image quality assessment, it is useful to apply the SSIM index locally rather than globally…localized quality measurement can provide a spatially varying quality map of the image, which delivers more information about the quality degradation of the image and may be useful in some applications. In [6] and [7], the local statistics
μx, σx and σxy
are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window”).
Wang in view of Fan and Kim does not disclose expressly: determining, based on initial gray value distribution information corresponding to each of the plurality of first blocks of the original image, a first initial information entropy corresponding to each first block, and determining, based on initial gray value distribution information corresponding to each of the plurality of second blocks of the enhanced image, a second initial information entropy corresponding to each second block; and determining a second information entropy difference between the original image and the enhanced image according to the first initial information entropy corresponding to each first block and the second initial information entropy corresponding to each second block; wherein the determining the degree of visual texture loss of the enhanced image based on the first information entropy comprises: determining the degree of visual texture loss of the enhanced image based on the first information entropy difference and the second information entropy difference.
He discloses: a method for determining the distortion of an enhanced image based on the processing of the original and enhanced image to calculate a Total Distortion Rate (TDR) (He: 0010: “Step 1: Perform chromaticity spectrum analysis on the original image and the enhanced image to obtain the chromaticity spectra of the three color components in the original image and the chromaticity spectra of the three color components in the enhanced image”). The TDR is calculated by combining the calculated information distortion rate, component distortion rate, and color distortion rate (He: 0007: “the present invention provides a method for detecting the distortion rate of an enhanced image, which evaluates the total distortion caused by the image enhancement method by calculating the information distortion rate, component distortion rate and color distortion rate of the enhanced image.”; 0024: “Step 3: Calculate the total distortion rate (TDR) of the enhanced image. The total distortion rate (TDR) of the enhanced image is calculated as follows:
[equation image: media_image7.png]
”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique of summing information, component and color distortion rates in order to calculate a total distortion rate disclosed by He into Wang in view of Fan and Kim by calculating the quality of the distorted images by adding the SSIM indices of the corresponding adjusted and initial gray value distribution windows. The suggestion/motivation for doing so would have been “By using the above method, various parameters of the image are obtained by analyzing the image, and the color distortion rate, component distortion rate and information distortion rate of the enhanced image are calculated based on these parameters, thereby calculating the total distortion rate of the enhanced image.” (He: 0026; Wherein the distortion rates, each calculated by performing image processing to extract different distortion information, are able to extract different information for the calculation of the total distortion rate). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wang in view of Fan and Kim with He to obtain the invention as specified in claim 4.
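He's combination of the three rates into a total distortion rate can be sketched abstractly. He's exact combining equation is reproduced only as an image in the record, so the unweighted sum and the weights parameter below are illustrative assumptions only.

```python
def total_distortion_rate(information_dr, component_dr, color_dr,
                          weights=(1.0, 1.0, 1.0)):
    """Combine an information distortion rate, a component distortion
    rate and a color distortion rate into a total distortion rate (TDR).
    The weighted sum is an illustrative assumption, not He's formula."""
    w1, w2, w3 = weights
    return w1 * information_dr + w2 * component_dr + w3 * color_dr
```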
Regarding claim 5, Wang in view of Fan, Kim, and He discloses: The method according to claim 4, wherein the initial gray value distribution information and the adjusted gray value distribution information are used as target gray value distribution information separately, and a target information entropy is determined according to the following steps, wherein the target information entropy is the first scale information entropy, the second scale information entropy, the first initial information entropy, or the second initial information entropy: using the first blocks and the second blocks as target blocks separately, and for each target block, determining the target information entropy corresponding to the target block based on the number of pixels, indicated by the target gray value distribution information, corresponding to each gray value of the target block, and the total number of pixels corresponding to the target block (Wang: Section: III.B. The SSIM Index: “Suppose x and y are two nonnegative image signals, which have been aligned with each other (e.g., spatial patches extracted from each image). If we consider one of the signals to have perfect quality, then the similarity measure can serve as a quantitative measurement of the quality of the second signal.…First, the luminance of each signal is compared. Assuming discrete signals, this is estimated as the mean intensity…The luminance comparison function is then a function of μx and μy…We use the standard deviation (the square root of variance) as an estimate of the signal contrast…The contrast comparison c(x,y) is then the comparison of σx and σy.”;
Section: III. C. Image Quality Assessment Using SSIM Index: “In [6] and [7], the local statistics μx, σx and σxy are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window”; Wherein the computation of the luminance and signal contrast of each window of the reference and distorted images based on the gray values of each pixel and the total number of pixels in each window constitutes calculation of the information entropy based on the number of pixels corresponding to each gray value and the total number of pixels corresponding to the target block.).
Regarding claim 12, Wang in view of Fan, Kim, and He discloses: The method according to claim 4, wherein the initial gray value distribution information and the adjusted gray value distribution information are used as target gray value distribution information separately, and a target information entropy is determined according to the following steps, wherein the target information entropy is the first scale information entropy, the second scale information entropy, the first initial information entropy, or the second initial information entropy: using the first blocks and the second blocks as target blocks separately, and for each target block, determining the target information entropy corresponding to the target block based on the number of pixels, indicated by the target gray value distribution information, corresponding to each gray value of the target block, and the total number of pixels corresponding to the target block (Wang: Section: III.B. The SSIM Index: “Suppose x and y are two nonnegative image signals, which have been aligned with each other (e.g., spatial patches extracted from each image). If we consider one of the signals to have perfect quality, then the similarity measure can serve as a quantitative measurement of the quality of the second signal.…First, the luminance of each signal is compared. Assuming discrete signals, this is estimated as the mean intensity…The luminance comparison function is then a function of μx and μy…We use the standard deviation (the square root of variance) as an estimate of the signal contrast…The contrast comparison c(x,y) is then the comparison of σx and σy.”;
Section: III. C. Image Quality Assessment Using SSIM Index: “In [6] and [7], the local statistics μx, σx and σxy are computed within a local 8x8 square window, which moves pixel-by-pixel over the entire image. At each step, the local statistics and SSIM index are calculated within the local window”; Wherein the computation of the luminance and signal contrast of each window of the reference and distorted images based on the gray values of each pixel and the total number of pixels in each window constitutes calculation of the information entropy based on the number of pixels corresponding to each gray value and the total number of pixels corresponding to the target block.).
As per claim(s) 16, arguments made in rejecting claim(s) 4 are analogous.
As per claim(s) 17, arguments made in rejecting claim(s) 5 are analogous.
Claim(s) 7-8 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Fan, Kim, and He, and further in view of Zhang et al. (Reduced-Reference Image Quality Assessment Based on Entropy Differences in DCT Domain) hereinafter referenced as Zhang.
Regarding claim 7, Wang in view of Fan, Kim and He discloses: The method according to claim 4, wherein the determining the degree of visual texture loss of the enhanced image based on the first information entropy difference and the second information entropy difference comprises: determining a union information entropy difference between the corresponding blocks of the enhanced image and the original image based on the first information entropy difference and the second information entropy difference between the corresponding blocks of the enhanced image and the original image (He: 0024: “Step 3: Calculate the total distortion rate (TDR) of the enhanced image. The total distortion rate (TDR) of the enhanced image is calculated as follows: [equation image media_image7.png: He’s TDR formula]”; Wherein the SSIM indices for corresponding windows, as disclosed by Wang, are performed for the adjusted and initial gray value distributions, then added as taught by He.).
Wang in view of Fan, Kim, and He does not disclose expressly: and using a sum of union information entropy differences between the respective corresponding blocks of the enhanced image and the original image as a value to measure the degree of texture loss of the enhanced image.
Zhang discloses: the calculation of an image quality sum by calculating sets of entropy features to represent the referenced and distorted image, wherein the quality score of the distorted image is calculated by calculating the weighted sum of the entropy feature differences between the reference and distorted images (Zhang: Section: II. C. Quality Prediction: “Therefore, we can obtain two sets of entropy features Er and Ed, which represent the features of the reference and distorted image respectively. The difference between the two sets of entropy features is given by the expression of [equation image media_image8.png: entropy-difference expression] …The quality score Q is defined by the weighted sum of the entropy difference as [equation image media_image9.png: weighted-sum quality score]”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to calculate a quality score by adding all the window SSIM scores disclosed by Wang in view of Fan, Kim, and He using a weighted summation as taught by Zhang. The suggestion/motivation for doing so would have been “if the low frequency band has an abrupt change, then the image is contaminated serious. In principle, the weight on low frequency is larger than that on high” (Zhang: Section: II. C. Quality Prediction; Wherein the weight for each component added may be adjusted based on its importance or impact on the image distortion.). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wang in view of Fan, Kim, and He with Zhang to obtain the invention as specified in claim 7.
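For illustration only, the weighted summation of entropy differences taught by Zhang can be sketched as follows, assuming a simple absolute difference between corresponding entropy features (the exact difference and weight definitions are those in Zhang's quoted equations); the function and variable names are hypothetical:

```python
def quality_score(e_ref, e_dist, weights):
    """Quality score as a weighted sum of entropy differences between
    reference and distorted feature sets, per the technique attributed
    to Zhang (Section II. C.). Assumes absolute differences."""
    assert len(e_ref) == len(e_dist) == len(weights)
    return sum(w * abs(r - d)
               for w, r, d in zip(weights, e_ref, e_dist))

# Larger weights may be placed on low-frequency features, whose abrupt
# changes indicate more serious contamination per the quoted motivation.
score = quality_score([1.0, 2.0], [0.5, 2.0], [2.0, 1.0])
```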
Regarding claim 8, Wang in view of Fan, Kim, He, and Zhang discloses: The method according to claim 7, wherein the determining a union information entropy difference between the corresponding blocks of the enhanced image and the original image based on the first information entropy difference and the second information entropy difference between the corresponding blocks of the enhanced image and the original image comprises: computing a square root of a sum of squares of the first information entropy difference and the second information entropy difference, and using a value of the square root as the union information entropy difference (He: 0024: “Step 3: Calculate the total distortion rate (TDR) of the enhanced image. The total distortion rate (TDR) of the enhanced image is calculated as follows: [equation image media_image7.png: He’s TDR formula]”).
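For illustration only, the claim 8 limitation (a square root of a sum of squares of the first and second information entropy differences used as the union information entropy difference) can be sketched as follows; the function name is hypothetical:

```python
import math

def union_entropy_difference(d1: float, d2: float) -> float:
    """Union information entropy difference as the square root of the
    sum of squares of the two entropy differences, per the claim 8
    limitation."""
    return math.sqrt(d1 ** 2 + d2 ** 2)
```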
As per claim(s) 19, arguments made in rejecting claim(s) 7 are analogous.
As per claim(s) 20, arguments made in rejecting claim(s) 8 are analogous.
Allowable Subject Matter
Claims 6, 13, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 6, Wang in view of Fan and Kim discloses: The method according to claim 4, wherein the target information entropy difference is the first information entropy difference and the target information entropy difference is determined based on the processed difference of information entropies between the corresponding blocks of the enhanced image and the original image (Wang: Section: III.B. The SSIM Index: “Suppose x and y are two nonnegative image signals, which have been aligned with each other (e.g., spatial patches extracted from each image). If we consider one of the signals to have perfect quality, then the similarity measure can serve as a quantitative measurement of the quality of the second signal. The system separates the task of similarity measurement into three comparisons: luminance, contrast and structure.…First, the luminance of each signal is compared. Assuming discrete signals, this is estimated as the mean intensity…The luminance comparison function is then a function of μx and μy…We use the standard deviation (the square root of variance) as an estimate of the signal contrast…The contrast comparison c(x,y) is then the comparison of σx and σy.”; Wherein the SSIM algorithm measures the similarity, or the difference, between the reference and distorted images based on the mean intensity and signal contrast extracted from the windows, histogram equalized as disclosed by Kim, in each image.).
Wang in view of Fan and Kim fails to disclose:
wherein a target information entropy difference is determined according to the following steps:
dividing differences of information entropies between corresponding blocks of the enhanced image and the original image into a first class and a second class, wherein the differences of information entropies in the first class are greater than or equal to 0, and the differences of information entropies in the second class are less than 0;
setting the differences of information entropies in the first class to 0, computing a standard deviation of the differences of information entropies in the second class, and determining, based on the standard deviation and the difference of information entropies corresponding to any block in the second class, a standardized difference of information entropies corresponding to the block;
Fan further discloses: a method for determining an image’s clarity (Fan: 0006), by dividing an image into multiple sub-regions in order to calculate a clarity evaluation index for each sub-region, and fusing the clarity evaluation indices in order to obtain an image clarity evaluation index (Fan: 0014). Wherein, prior to fusing the clarity evaluation indices, the sub-regions with an information entropy less than a threshold value are removed from the image clarity evaluation index calculation (Fan: 0110).
Fan fails to disclose:
wherein a target information entropy difference is determined according to the following steps:
dividing differences of information entropies between corresponding blocks of the enhanced image and the original image into a first class and a second class, wherein the differences of information entropies in the first class are greater than or equal to 0, and the differences of information entropies in the second class are less than 0;
setting the differences of information entropies in the first class to 0, computing a standard deviation of the differences of information entropies in the second class, and determining, based on the standard deviation and the difference of information entropies corresponding to any block in the second class, a standardized difference of information entropies corresponding to the block;
Therefore, claim 6 has been indicated as containing allowable subject matter.
As per claim(s) 13 and 18, arguments made in indicating claim 6 contains allowable subject matter are analogous.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J RODRIGUEZ whose telephone number is (703)756-5821. The examiner can normally be reached Monday-Friday 10am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTHONY J RODRIGUEZ/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672