DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 9 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Murahashi (US 2015/0187055 A1).
Regarding claim 9, Murahashi discloses:
A correction method of processing executed by a computer (Murahashi, Abstract and ¶70), the processing comprising:
Receiving a modulated video signal (Murahashi, ¶70: antenna receives high frequency signal as radio wave related to television broadcasting; ¶71: An external image signal is input to the input unit 11. For example, the input unit 11 extracts a modulation signal related to a channel designated from the high frequency signal input through the antenna 10, and converts the extracted modulation signal into a modulation signal of a base frequency band. The input unit 11 outputs the converted modulation signal to the Y/C separating unit 12.);
Separating the modulated video signal into a color difference signal and a luminance signal (Murahashi, ¶72: The Y/C separating unit 12 demodulates the modulation signal input from the input unit 11, generates an image signal, and separates a brightness signal Y, a color-difference signal Cb, and a color-difference signal Cr that are analog signals from the generated image signal).
Further regarding claim 9, the claim recites the contingent limitation, in cases in which a luminance component change value in a given frequency band in a demodulated luminance signal, demodulated from the luminance signal, is a given threshold or greater, perform correction such that the color difference components present in a demodulated color difference signal demodulated from the color difference signal and corresponding to the luminance components are emphasized, compared to cases in which the change value is less than the given threshold. The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met – see MPEP 2111.04(II). As the claim is a method that only performs the contingent limitations when the luminance change value is greater than a threshold, the limitations are not required when that threshold is not met, and the claim is rejected under the prior art based on the required limitations discussed above.
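For illustration only, and not part of the prosecution record or the cited disclosures, the contingent limitation discussed above might be sketched as follows; the function name, threshold, and gain values are all hypothetical:

```python
def correct_frame(y, cb, cr, threshold=16, gain=1.5):
    """Illustrative sketch of the claim 9 contingency (hypothetical values).

    y, cb, cr: lists of per-pixel luminance and color-difference samples.
    For each pixel, if the luminance change versus the previous pixel is
    at or above the threshold, the color-difference components at that
    pixel are emphasized (scaled away from the mid-level value of 128).
    """
    out_cb, out_cr = list(cb), list(cr)
    for i in range(1, len(y)):
        change = abs(y[i] - y[i - 1])      # luminance component change value
        if change >= threshold:            # condition precedent is met
            out_cb[i] = 128 + gain * (cb[i] - 128)  # emphasize chroma excursion
            out_cr[i] = 128 + gain * (cr[i] - 128)
        # otherwise the contingent correction step is simply not performed
    return out_cb, out_cr
```

Consistent with MPEP 2111.04(II), the `if` branch is only reached when the condition precedent is met; when the threshold is never met, only the receiving and separating steps are performed.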
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4, 5, 8, 9 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Murahashi (US 2015/0187055 A1) in view of
Miyabata et al. (US 5,418,574).
Regarding claim 1, Murahashi discloses:
A correction device (Murahashi, Fig. 1 and ¶69: display device 1) comprising:
A memory and a processor coupled to the memory, the processor configured to: (Murahashi, ¶344: components of display device 1 including units implemented by a computer, where a program for implementing this control function is recorded in a computer readable recording medium, and the program may be implemented such that the program recorded in the recording medium is read and executed by a computer system – note computer system; ¶345 further discloses a processor)
Receive a modulated video signal (Murahashi, ¶70: antenna receives high frequency signal as radio wave related to television broadcasting; ¶71: An external image signal is input to the input unit 11. For example, the input unit 11 extracts a modulation signal related to a channel designated from the high frequency signal input through the antenna 10, and converts the extracted modulation signal into a modulation signal of a base frequency band. The input unit 11 outputs the converted modulation signal to the Y/C separating unit 12.);
Separate the modulated video signal into a color difference signal and a luminance signal (Murahashi, ¶72: The Y/C separating unit 12 demodulates the modulation signal input from the input unit 11, generates an image signal, and separates a brightness signal Y, a color-difference signal Cb, and a color-difference signal Cr that are analog signals from the generated image signal, and further converting the separated signals from analog signals to digital signals);
In cases in which a luminance component change value occurs in a given frequency band in a demodulated luminance signal, demodulated from the luminance signal, perform correction such that the color difference components present in a demodulated color difference signal demodulated from the color difference signal and corresponding to the luminance components are emphasized (Murahashi, ¶71: input unit 11 extracts a modulation signal related to a channel designated from the high frequency signal input through the antenna 10, and converts the extracted modulation signal into a modulation signal of a base frequency band; ¶75: The image processing unit 20 outputs an image signal including the brightness signal Y'' and the color-difference signals Cb and Cr to the image format converting unit 14;
¶82: The contour direction estimating unit 21 estimates the contour direction for each pixel based on the signal value (the brightness value) of each pixel. The low pass filter unit 20a filters the signal value of each pixel using the signal value of each reference pixel that is arranged in the contour direction of each pixel estimated by the contour direction estimating unit 21 and in the predetermined reference region from each pixel.
¶¶83-84 disclose using filters to obtain frequency bands of the signal, i.e. “a given frequency band in a demodulated luminance signal”; ¶85: contour direction estimating unit 21 estimates a contour direction of each pixel based on a signal value of each pixel indicated by the brightness signal Y input from the scaling unit; ¶311: sharpen image extending in the tangent direction vertical to contour direction; Figs. 33A-33C and ¶¶316-317: processing such that boundary between bright region and dark region is clearer)
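For illustration only, and not the filters of the cited reference, the kind of frequency-band separation discussed in Murahashi ¶¶83-84 might be sketched with a simple 3-tap moving-average low-pass filter, the high band being the residual:

```python
def split_bands(y):
    """Split a 1-D luminance signal into low- and high-frequency bands.

    A 3-tap moving average serves as an illustrative low-pass filter
    (a stand-in only; Murahashi does not specify these coefficients).
    The high band is the residual, so low + high reconstructs the input.
    """
    low = []
    for i in range(len(y)):
        lo = max(i - 1, 0)
        hi = min(i + 1, len(y) - 1)
        window = y[lo:hi + 1]          # edge samples use a shorter window
        low.append(sum(window) / len(window))
    high = [yi - li for yi, li in zip(y, low)]  # residual = high band
    return low, high
```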
The only limitation not explicitly taught by Murahashi is the use of thresholding as claimed.
Miyabata discloses:
In cases in which a luminance component change value in a given frequency band in a luminance signal, demodulated from the luminance signal, is a given threshold or greater, perform correction such that the color difference components present in a color difference signal and corresponding to the luminance components are emphasized, compared to cases in which the change value is less than the given threshold (Miyabata, [2:31-53]: a luminance change point detection means to which the luminance signal generated from the stored luminance value is input for detecting the position of a pixel at at least one edge of a variable luminance area wherein the luminance value in a predetermined direction in the image increases or decreases a predetermined amount or more, and a color difference correction means for detecting the variable color difference area wherein the color difference value in a predetermined direction of the image increases or decreases a predetermined amount within the detected stable luminance area, and correcting at least some of the color difference values in the detected variable color difference area and stored in the storage means;
Fig. 11a and [5:10-30] discusses that image boundary between two colors has a distinct change at a boundary; [5:52-6:39]: leading edge detector 2 detects a leading edge of boundary using luminance signal, where at leading edge, the luminance value starts to increase or decrease for two or more continuous pixels and trailing edge where luminance increase or decrease ends;
[7:26-38]:
Detection of a continuous increase or decrease in the pixel luminance values by the leading edge detector 2 and trailing edge detector 3 is possible by detecting, for example, whether the difference in the luminance values of adjacent pixels is positive or negative and is the same for two or more consecutive pixels, by detecting whether the difference in the luminance values of adjacent pixels is positive or negative and is the same for two or more consecutive pixels and the total difference is greater than a predetermined value, or by detecting whether the difference between the luminance values of consecutive pixels exceeds a predetermined threshold value for two or more consecutive pixels.
Fig. 11a and [22:16-32]: Waveform W2 represents the color difference values for the pixel position of a given luminance value in the source image where k1 and k2 are the ends of the color bleeding area for the color difference value. Waveform W3 shows the results of the process executed by the first embodiment, and waveform W4 shows the results of linear interpolation effected according to the second embodiment to the color difference values between the leading edge n1 and the trailing edge n2 – Note change from source image W2 to image W4 includes a greater emphasis between color boundaries within color difference)
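For illustration only, and not Miyabata's implementation, one of the detection alternatives quoted above from [7:26-38] (same-sign luminance differences over two or more consecutive pixels, with the total difference exceeding a value) might be sketched as follows; the parameter names and values are hypothetical:

```python
def find_edges(y, run=2, total=20):
    """Flag (leading, trailing) edge positions in a 1-D luminance signal.

    A run is extended while the difference between adjacent pixels keeps
    the same nonzero sign; the run is reported as an edge only if it spans
    at least `run` steps and the total luminance change exceeds `total`.
    """
    edges = []
    i = 0
    while i < len(y) - 1:
        j = i
        # extend while adjacent differences keep the same (nonzero) sign
        while (j < len(y) - 1 and (y[j + 1] - y[j]) != 0
               and (y[j + 1] - y[j]) * (y[i + 1] - y[i]) > 0):
            j += 1
        if j - i >= run and abs(y[j] - y[i]) > total:
            edges.append((i, j))   # leading edge at i, trailing edge at j
        i = max(j, i + 1)
    return edges
```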
Both Murahashi and Miyabata are directed to image processing to correct for distortion of image data within signals, to improve the displayed image. It would have been obvious to one of ordinary skill in the art before the effective filing date and with reasonable expectation of success to modify the system and technique for enhancing image data from demodulated data as provided by Murahashi, by using the image correction for bleeding data based on luminance detection within video signals as provided by Miyabata, using known electronic interfacing and programming techniques. The modification results in an improved image processing system for video signals by accounting for bleeding image data and resulting in an improved image by removing additional data noise or distortion.
Regarding claim 5, Murahashi discloses:
A non-transitory computer readable medium storing a correction program that is executed by a computer to perform processing comprising operations (Murahashi, ¶344: components implemented by a computer, wherein program for implementing this control function is recorded in a computer readable recording medium, and the program may be implemented such that the program recorded in the recording medium is read and executed by a computer system)
Furthermore, the operations perform the same steps as the processor of claim 1 is configured to perform and as such claim 5 is further rejected based on the same rationale as claim 1 set forth above.
Regarding claim 9, the device of claim 1 performs the method of claim 9 and as such claim 9 is rejected based on the same rationale as claim 1 above.
Regarding claim 4, Murahashi further discloses:
Wherein the processor is further configured to output RGB data converted from YUV data configured by combining the demodulated color difference signal including the corrected color difference components with the demodulated luminance signal (Murahashi, ¶75: The image processing unit 20 performs processing related to noise reduction and image sharpening on the brightness signal Y among the image signals input from the scaling unit 13, and generates a brightness signal Z indicating an image obtained by the noise reduction and the sharpening. The image processing unit 20 updates the brightness signal Y input from the scaling unit 13 to the generated brightness signal Z, and synchronizes the brightness signal Z with the color-difference signals Cb and Cr; ¶76: The image format converting unit 14 converts the input image signal or the image signal having the converted format into an image signal (for example, an RGB signal: an image signal including signal values of red (R), green (G), and blue (B) colors) represented by a color system supported by the display unit 15, and outputs the converted image signal to the display unit 15)
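For illustration only: the format conversion described in Murahashi ¶76 is a conventional YCbCr-to-RGB conversion. The reference does not specify coefficients; the sketch below assumes full-range BT.601 coefficients, which are a common but hypothetical choice here:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range BT.601 YCbCr sample to an (R, G, B) tuple.

    Illustrative of the conversion the image format converting unit 14
    performs (Murahashi ¶76); the reference's actual coefficients are
    unspecified. Inputs and outputs are 8-bit values (0-255).
    """
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))  # keep results in 8-bit range
    return clamp(r), clamp(g), clamp(b)
```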
Regarding claim 8, the limitations included from claim 5 are rejected based on the same rationale as claim 5 set forth above. Further regarding the claim, the additional operations perform the same steps as the processor of claim 4 is configured to perform and as such claim 8 is further rejected based on the same rationale as claim 4 set forth above.
Regarding claim 12, the device of claim 4 performs the method of claim 12 and as such claim 12 is rejected based on the same rationale as claim 4 above.
Claim(s) 2, 6 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Murahashi (US 2015/0187055 A1) in view of
Miyabata et al. (US 5,418,574) and in further view of
Lin (US 6,628,330 B1).
Regarding claim 2, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 2, Lin discloses:
Wherein the processor is configured to abort the correction of the color difference components, in cases in which the luminance component change value is less than a given threshold (Lin, [5:13-25]: Edge enhancer 36 receives the luminance (Y) value from YUV converter 60 for the current pixel, and simply outputs the Y value when no edge is detected. When edge detector 62 detects an edge, edge enhancer 36 multiplies the Y value by a filter to increase or decrease the brightness of the pixel, thereby enhancing or sharpening the edge.)
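For illustration only, and not Lin's circuit, the pass-through behavior quoted above from [5:13-25] (output the value unchanged when no edge is detected, otherwise apply a filter gain) might be sketched as follows; names and values are hypothetical:

```python
def enhance_pixel(y_val, edge_strength, threshold=8, gain=1.25):
    """Gate sharpening on edge detection, per the behavior Lin describes.

    When the edge strength is below the threshold (no edge detected),
    the correction is aborted and the luminance value passes through
    unchanged; otherwise the value is scaled by a filter gain.
    """
    if edge_strength < threshold:
        return y_val                          # correction aborted: Y unchanged
    return min(255, round(y_val * gain))      # edge detected: sharpen
```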
It would have been obvious to one of ordinary skill in the art before the effective filing date and with reasonable expectation of success to modify the system and technique for enhancing image data from demodulated data as provided by Murahashi, by using the image correction for bleeding data based on luminance detection as provided by Miyabata, by using the technique of determining whether to perform sharpening based on edge detection as provided by Lin, using known electronic interfacing and programming techniques. The modification results in an improved image processing system by avoiding unnecessary changes to an image when no edge sharpening is required, including more efficiently using limited processing resources.
Regarding claim 6, the limitations included from claim 5 are rejected based on the same rationale as claim 5 set forth above. Further regarding the claim, the additional operations perform the same steps as the processor of claim 2 is configured to perform and as such claim 6 is further rejected based on the same rationale as claim 2 set forth above.
Regarding claim 10, the device of claim 2 performs the method of claim 10 and as such claim 10 is rejected based on the same rationale as claim 2 above.
Claim(s) 3, 7, and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Murahashi (US 2015/0187055 A1) in view of
Miyabata et al. (US 5,418,574) and in further view of
Task (US 4,256,368).
Regarding claim 3, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 3, the claim appears to merely recite the type of image data that is used, namely a low light image, which is merely an intended use as opposed to a limitation. In other words, whether the image data has a narrow band of luminance change (because the lighting does not vary much across the image, while the colors do) is merely an intended use describing the type of image data the invention is performed upon, and does not dictate any functional aspect of the claim (i.e., the system performs the same image processing based on the image data regardless of the ranges of the data used).
Murahashi discloses the image data having a demodulated luminance signal and a demodulated color difference signal. (Murahashi, ¶72: The Y/C separating unit 12 demodulates the modulation signal input from the input unit 11, generates an image signal, and separates a brightness signal Y, a color-difference signal Cb, and a color-difference signal Cr that are analog signals from the generated image signal)
The only element not taught by Murahashi as modified by Miyabata is that the image data has a particular characteristic, namely that the frequency of luminance changes in the image is smaller than the frequency of color-difference changes; this is a characteristic of the input data itself as opposed to some functional operation by the system.
Task discloses:
a frequency band range of the luminance signal is narrower than a frequency band range of the color difference signal (Task, [2:36-48]: variation of colors within a pattern over regions of the CIE color chart, and changes in the pattern frequency while maintaining a constant, uniform level of photometric luminance across the whole test pattern).
It would have been obvious to one of ordinary skill in the art before the effective filing date and with reasonable expectation of success to modify the system and technique for enhancing image data from demodulated data as provided by Murahashi, using the image correction for bleeding data based on luminance detection as provided by Miyabata, by having image data as provided by Task in place of some other image data, using known electronic interfacing and programming techniques. The modification merely substitutes one known type of image for another, namely an image that varies in luminance and color differently than another, yielding predictable results of performing the same image processing steps on whatever input image is provided. The type of image data was known at the time of effective filing to one of ordinary skill in the art, and it would have been predictable that any known image data having differing color and light properties could be used as input into the image processing system, which performs the same steps as it would otherwise.
Regarding claim 7, the limitations included from claim 5 are rejected based on the same rationale as claim 5 set forth above. Further regarding the claim, the additional operations perform the same steps as the processor of claim 3 is configured to perform and as such claim 7 is further rejected based on the same rationale as claim 3 set forth above.
Regarding claim 11, the device of claim 3 performs the method of claim 11 and as such claim 11 is rejected based on the same rationale as claim 3 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM A BEUTEL/Primary Examiner, Art Unit 2616