DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
Noise Reduction Module in claims 2, 11, and 20;
Color Correction Module in claims 8-9 and 17-18;
Tone-Mapping Module in claims 9 and 18.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
These modules are interpreted as being one or more of an integrated circuit, a firmware module, and/or processors executing memory-stored instructions (see paragraph [0078] of the specification), and each module is contained within the image signal processing system shown in Fig. 1 and described in paragraph [0028] of the specification.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 6 and 15 are rejected under 35 U.S.C. 112(b) for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 6 recites constructing an additional base image pyramid in response to a low-power mode: (a) “receiving an additional image pyramid of an additional image; in response to detecting a low-power mode, convolving a subset of levels of the additional image pyramid with the kernel; constructing, based at least in part on the convolving of the subset of levels of the additional image pyramid, an additional base image pyramid.”
However, claim 6 then recites processing that uses the base image pyramid and image from claim 1 rather than the additional base image pyramid and additional image: (b) “generating a detail layer of the image by subtracting, from the image, a selected level of the base image pyramid from the image.” Thus, the additional image and additional pyramid are constructed in response to a low-power mode but are not used for any purpose in the claims.
If claim 6 intends to use the additional base image pyramid and additional image in part (b), the Examiner recommends amending part (b) to read “generating a detail layer of the additional image by subtracting, from the additional image, a selected level of the additional base image pyramid from the additional image.”
Claim 15 is rejected for the same reasons as claim 6 stated above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-5, 7-10, 13-14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Breckon et al. (WO 2013/034878 A2), hereafter Breckon, and further in view of Kuang et al. (iCAM06: A refined image appearance model for HDR image rendering. Journal of Visual Communication and Image Representation. 18. 406–414.), hereafter Kuang.
Regarding claim 1, Breckon teaches a circuitry-implemented method comprising:
receiving an image pyramid of an image (Determining an image pyramid, such as a Gaussian pyramid, is well-known in the art. Gaussian pyramids are traditionally made by down sampling, or up sampling, an image, and Breckon teaches performing this method. [Page 2, lines 16-17] “…starting with the image data U1, successively down sampling using a Gaussian filter n-1 times to create a first Gaussian pyramid having an nth data level Un…”);
convolving each level of the image pyramid with a kernel ([Page 4, lines 6-9] “The Gaussian pyramid U comprises n levels, starting with an image U1 as the base with resolution w x h. Successively higher pyramid levels are derived via downsampling of the preceding pyramid level using a 5 x 5 Gaussian filter.” Additionally, Breckon describes the kernel on page 5, lines 10-18.);
constructing, based at least in part on the convolving of each level of the image pyramid, a base image pyramid ([Page 4, lines 10-12] “Un is used as the top level, Dn, of a second Gaussian pyramid D in order to derive its base D1. In this case, lower pyramid levels are derived via upsampling using a 5 x 5 Gaussian filter.”).
Breckon teaches utilizing the Gaussian pyramid and the constructed base pyramid for determining a saliency map rather than a detail layer; thus, Breckon fails to teach generating a detail layer of the image by subtracting, from the image, a selected level of the base image pyramid from the image.
However, Kuang teaches generating a detail layer of the image by subtracting, from the image, a selected level of the base image pyramid from the image ([Section 2.2] “The detail layer is then achieved by subtracting the base layer image from the original image.” Similar to Breckon, Kuang teaches filtering the image to blur it and determine a base layer image. Kuang then teaches subtracting the base layer image from the original image to generate a detail layer. See Section 2.2 and Fig. 1.).
Breckon and Kuang are analogous art to the claimed invention because both teach methods of decomposing an image into one or more base layers using a Gaussian filter for analyzing saliency and contrast to enhance an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by subtracting one of the generated base images from the original image to generate a detail layer. This modification would improve Breckon’s invention when further image processing, such as tone mapping and color enhancement, is performed on the image: by applying that processing only to the original image or the base image and not to the detail image, one would preserve the detail layer from losing quality. The detail image can then be added back to the processed image or base image ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
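For illustration only, the pyramid construction and detail-layer subtraction mapped above can be sketched in Python with NumPy. The binomial kernel, level count, and image size below are arbitrary choices for the sketch; this is not the claimed implementation or either reference's actual code:

```python
import numpy as np

def blur(img):
    """Blur with a separable 5-tap binomial kernel (a stand-in for a 5x5 Gaussian filter)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    p = np.pad(img, 2, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def gaussian_pyramid(img, levels):
    """Blur and downsample repeatedly (analogous to Breckon's first pyramid U)."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])
    return pyr

def upsample(img, shape):
    """Zero-insert, blur, and rescale to approximate Gaussian upsampling."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    return blur(up) * 4.0

def base_pyramid(pyr):
    """Upsample from the coarsest level back toward full resolution (the second pyramid D)."""
    base = [pyr[-1]]
    for lvl in reversed(pyr[:-1]):
        base.append(upsample(base[-1], lvl.shape))
    return base[::-1]  # base[0] is at the original resolution

rng = np.random.default_rng(0)
image = rng.random((16, 16))
U = gaussian_pyramid(image, 3)
D = base_pyramid(U)
detail = image - D[0]  # detail layer: image minus a selected base-pyramid level
```

By construction, adding the detail layer back to the selected base level recovers the original image exactly, which is the detail-preservation property the combination relies on.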
Regarding claim 4, Breckon and Kuang teach the circuitry-implemented method of claim 1. Breckon further teaches wherein the image pyramid comprises a Gaussian pyramid ([Page 2, lines 16-17] “…starting with the image data U1, successively down sampling using a Gaussian filter n-1 times to create a first Gaussian pyramid having an nth data level Un…”).
Regarding claim 5, Breckon and Kuang teach the circuitry-implemented method of claim 1. Kuang further teaches wherein the image pyramid comprises a luma pyramid of the image (Kuang teaches separating the luminance information from the image before using bilateral filtering to determine a base layer image. [Section 2.1] “The input data for iCAM06 model are CIE tristimulus values (XYZ) for the stimulus image or scene in absolute luminance units.” Also, see Fig. 1, which shows separating the RGB input into an XYZ representation prior to processing.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by performing the operation of developing pyramids on the luminance data without the RGB information. This modification would allow for one to target luminance-dependent artifacts or phenomena when determining saliency throughout the image ([Kuang Section 2.2] “The absolute luminance Y of the image data is necessary to predict various luminance-dependent phenomena, such as the Hunt effect and the Stevens effect”).
Regarding claim 7, Breckon and Kuang teach the circuitry-implemented method of claim 1. Kuang further teaches further comprising generating an enhanced image with an increased level of detail by applying the detail layer to the image (See the combining of the details layer and the tone-compressed base layer in Fig. 1. This combination results in a detail-combined image.).
As presented above in the rejection of claim 1, the Examiner showed that it would have been obvious to modify Breckon’s invention by subtracting one of the generated base images from the original image to generate a detail layer, because this modification would implement similar methodology to the well-known iCAM06 algorithm and allow for image processing to occur on only the base layer image to preserve the detail layer. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by generating an enhanced image by re-applying the previously separated detail layer to the image. This modification would apply the details to the image after the base layer receives image processing, so the details can remain preserved and unaffected by tone mapping and color correction ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
Regarding claim 8, Breckon and Kuang teach the circuitry-implemented method of claim 7. Kuang further teaches further comprising transmitting the image to a color correction module that enhances the image by applying color correction to the image (See Section 2.3 outlining the chromatic adaptation module. [Section 2.3] “The base layer image is first processed through the chromatic adaptation…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by applying color correction to the image. This modification would provide further image processing for enhancing human comprehension of the image, which is a goal of both Kuang and Breckon ([Kuang Section 4] “The goal of the new model is to predict image attributes for complex scenes, producing images that closely resemble the viewer’s perception when standing in the real environment.” [Breckon Page 1, lines 16-17] “In this document, visual saliency is defined as the perceptual quality that makes a group of pixels stand out relative to its neighbors.”).
Regarding claim 9, Breckon and Kuang teach the circuitry-implemented method of claim 8. Kuang further teaches further comprising transmitting the detail layer directly to a tone-mapping module by bypassing the color correction module (Fig. 1 shows an implementation of the iCAM06 algorithm. A details layer is determined from the image and bypasses the chromatic adaptation and tone compression steps which are applied to the base layer image.), wherein the tone-mapping module:
receives the image as enhanced by the color correction module (See the chromatic adaptation step in Fig. 1 and Section 2.3.);
applies a tone-mapping process to the image to further enhance the image (See the tone compression step in Fig. 1 and Section 2.4.); and
applies the detail layer after the tone-mapping process to further enhance the image (In Fig. 1, the tone-compressed image is combined with the details layer to create a detail-combined image.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by applying color correction and tone mapping to the image before applying the detail layer to further enhance the image. This modification would apply methodology from the well-known iCAM06 image rendering algorithm and would allow for preserving local details in the image since the detail layer is applied after other filtering methods ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
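The signal flow recited in claim 9 (the detail layer bypassing color correction, then being applied after tone mapping) can be illustrated with a toy sketch. The color_correct and tone_map functions below are invented placeholders, not the operators of Kuang or of the specification:

```python
import numpy as np

def color_correct(base):
    """Placeholder chromatic adjustment, applied only to the base layer."""
    return np.clip(base * 1.1, 0.0, 1.0)

def tone_map(base):
    """Placeholder global tone-compression curve."""
    return base / (1.0 + base)

def render(image, base_layer):
    """Color-correct and tone-map the base layer only; re-apply the untouched detail layer last."""
    detail = image - base_layer                   # detail layer bypasses both modules
    processed = tone_map(color_correct(base_layer))
    return processed + detail                     # detail applied after tone mapping

rng = np.random.default_rng(1)
image = rng.random((8, 8))
base = np.full_like(image, image.mean())          # crude stand-in for a blurred base layer
out = render(image, base)
```

Because the detail term never passes through the two modules, the difference between the output and the processed base layer equals the original detail layer exactly, mirroring Kuang's “preserving details” rationale.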
Regarding claim 10, Breckon teaches a device comprising: circuitry configured to:
receive an image pyramid of an image (Determining an image pyramid, such as a Gaussian pyramid, is well-known in the art. Gaussian pyramids are traditionally made by down sampling, or up sampling, an image, and Breckon teaches performing this method. [Page 2, lines 16-17] “…starting with the image data U1, successively down sampling using a Gaussian filter n-1 times to create a first Gaussian pyramid having an nth data level Un…”);
convolve each level of the image pyramid with a kernel ([Page 4, lines 6-9] “The Gaussian pyramid U comprises n levels, starting with an image U1 as the base with resolution w x h. Successively higher pyramid levels are derived via downsampling of the preceding pyramid level using a 5 x 5 Gaussian filter.” Additionally, Breckon describes the kernel on page 5, lines 10-18.);
construct, based at least in part on the convolving of each level of the image pyramid, a base image pyramid ([Page 4, lines 10-12] “Un is used as the top level, Dn, of a second Gaussian pyramid D in order to derive its base D1. In this case, lower pyramid levels are derived via upsampling using a 5 x 5 Gaussian filter.”).
Breckon teaches utilizing the Gaussian pyramid and the constructed base pyramid for determining a saliency map rather than a detail layer; thus, Breckon fails to teach circuitry configured to generate a detail layer of the image by subtracting, from the image, a selected level of the base image pyramid from the image.
However, Kuang teaches circuitry configured to generate a detail layer of the image by subtracting, from the image, a selected level of the base image pyramid from the image ([Section 2.2] “The detail layer is then achieved by subtracting the base layer image from the original image.” Similar to Breckon, Kuang teaches filtering the image to blur it and determine a base layer image. Kuang then teaches subtracting the base layer image from the original image to generate a detail layer. See Section 2.2 and Fig. 1.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by subtracting one of the generated base images from the original image to generate a detail layer. This modification would improve Breckon’s invention when further image processing, such as tone mapping and color enhancement, is performed on the image: by applying that processing only to the original image or the base image and not to the detail image, one would preserve the detail layer from losing quality. The detail image can then be added back to the processed image or base image ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
Regarding claim 13, Breckon and Kuang teach the device of claim 10. Breckon further teaches wherein the image pyramid comprises a Gaussian pyramid ([Page 2, lines 16-17] “…starting with the image data U1, successively down sampling using a Gaussian filter n-1 times to create a first Gaussian pyramid having an nth data level Un…”).
Regarding claim 14, Breckon and Kuang teach the device of claim 10. Kuang further teaches wherein the image pyramid comprises a luma pyramid of the image (Kuang teaches separating the luminance information from the image before using bilateral filtering to determine a base layer image. [Section 2.1] “The input data for iCAM06 model are CIE tristimulus values (XYZ) for the stimulus image or scene in absolute luminance units.” Also, see Fig. 1, which shows separating the RGB input into an XYZ representation prior to processing.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by performing the operation of developing pyramids on the luminance data without the RGB information. This modification would allow for one to target luminance-dependent artifacts or phenomena when determining saliency throughout the image ([Kuang Section 2.2] “The absolute luminance Y of the image data is necessary to predict various luminance-dependent phenomena, such as the Hunt effect and the Stevens effect”).
Regarding claim 16, Breckon and Kuang teach the device of claim 10. Kuang further teaches the circuitry being further configured to generate an enhanced image with an increased level of detail by applying the detail layer to the image (See the combining of the details-layer and the tone-compressed base layer in Fig. 1. This combination results in a detail-combined image.).
As presented above in the rejection of claim 10, the Examiner showed that it would have been obvious to modify Breckon’s invention by subtracting one of the generated base images from the original image to generate a detail layer, because this modification would implement similar methodology to the well-known iCAM06 algorithm and allow for image processing to occur on only the base layer image to preserve the detail layer. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by generating an enhanced image by re-applying the previously separated detail layer to the image. This modification would apply the details to the image after the base layer receives image processing, so the details can remain preserved and unaffected by tone mapping and color correction ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
Regarding claim 17, Breckon and Kuang teach the device of claim 16. Kuang further teaches the circuitry being further configured to transmit the image to a color correction module that enhances the image by applying color correction to the image (See Section 2.3 outlining the chromatic adaptation module. [Section 2.3] “The base layer image is first processed through the chromatic adaptation…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by applying color correction to the image. This modification would provide further image processing for enhancing human comprehension of the image, which is a goal of both Kuang and Breckon ([Kuang Section 4] “The goal of the new model is to predict image attributes for complex scenes, producing images that closely resemble the viewer’s perception when standing in the real environment.” [Breckon Page 1, lines 16-17] “In this document, visual saliency is defined as the perceptual quality that makes a group of pixels stand out relative to its neighbors.”).
Regarding claim 18, Breckon and Kuang teach the device of claim 17. Kuang further teaches the circuitry being further configured to transmit the detail layer directly to a tone-mapping module by bypassing the color correction module (Fig. 1 shows an implementation of the iCAM06 algorithm. A details layer is determined from the image and bypasses the chromatic adaptation and tone compression steps which are applied to the base layer image.), wherein the tone-mapping module:
receives the image as enhanced by the color correction module (See the chromatic adaptation step in Fig. 1 and Section 2.3.);
applies a tone-mapping process to the image to further enhance the image (See the tone compression step in Fig. 1 and Section 2.4.); and
applies the detail layer after the tone-mapping process to further enhance the image (In Fig. 1, the tone-compressed image is combined with the details layer to create a detail-combined image.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by applying color correction and tone mapping to the image before applying the detail layer to further enhance the image. This modification would apply methodology from the well-known iCAM06 image rendering algorithm and would allow for preserving local details in the image since the detail layer is applied after other filtering methods ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Breckon (WO 2013/034878 A2) in view of Kuang et al. (iCAM06: A refined image appearance model for HDR image rendering. Journal of Visual Communication and Image Representation. 18. 406–414.), and further in view of Chang (US 2010/0142790 A1).
Regarding claim 2, Breckon and Kuang teach the circuitry-implemented method of claim 1 but fail to teach wherein: a noise reduction module previously used the image pyramid to perform a noise reduction operation; and receiving the image pyramid comprises receiving the image pyramid from the noise reduction module without regenerating the image pyramid.
However, Chang teaches wherein: a noise reduction module previously used the image pyramid to perform a noise reduction operation; and receiving the image pyramid comprises receiving the image pyramid from the noise reduction module without regenerating the image pyramid (Fig. 2 shows the steps of first determining image pyramids from the image in step 11. Then, the pyramid layers are used in subsequent noise reduction processing in step 12. Fig. 5 shows the processing steps applied to each pyramid layer.).
Breckon and Chang are analogous art because both teach methods of generating an image pyramid from an image and using the layers for performing image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by using the image pyramid during noise reduction and subsequent processing. This modification would allow for noise reduction to be processed on multiple pyramid levels, which would enhance the image at multiple resolutions to further enhance edges and dynamic range ([Chang 0045-0046] “improvement of the visual quality has to be achieved by noise reducing processing of the acquired images... The approach of the present invention aims at performing noise reduction even in presence of image structures by applying locally an adaptive anisotropic filter kernel, i.e. by averaging along edges or lines… The gradients controlling the filter process are derived from the next coarser layers of the Gaussian or Laplacian Pyramid images. In this way, the required smoothing of the gradients is easily achieved.”). Additionally, using the same pyramid without reconstruction across different modules would improve efficiency by avoiding redundancy and wasted computing power; for example, utilizing intra-frame processing with Laplacian pyramids would typically require creating the pyramid(s) and performing noise reduction on each layer before reconstructing into an output image ([Chang 0045] “In this particular case as in all single image acquisition modalities, noise reduction is restricted to intra-frame processing.”).
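The reuse rationale can be illustrated with a toy sketch in which a hypothetical noise-reduction module caches the pyramid it built so a downstream consumer receives it without regeneration; all class and method names below are invented for illustration and are not drawn from Chang or the claims:

```python
import numpy as np

class NoiseReductionModule:
    """Hypothetical module: builds the image pyramid once and keeps it for reuse."""

    def __init__(self, levels=3):
        self.levels = levels
        self.pyramid = None

    def process(self, image):
        # Build the pyramid (2x mean-pooling stands in for Gaussian filtering).
        pyr = [image]
        for _ in range(self.levels - 1):
            a = pyr[-1]
            pyr.append(0.25 * (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2]))
        self.pyramid = pyr  # cache so downstream modules need not regenerate it
        # (per-level noise filtering would happen here)
        return pyr[0]

class DetailModule:
    """Hypothetical consumer that receives the cached pyramid instead of rebuilding it."""

    def receive_pyramid(self, nr_module):
        return nr_module.pyramid

image = np.ones((16, 16))
nr = NoiseReductionModule()
nr.process(image)
shared = DetailModule().receive_pyramid(nr)  # same object; no recomputation
```

Handing over the cached object rather than recomputing it is the efficiency point made above: the pyramid is built once and shared across modules.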
Regarding claim 11, Breckon and Kuang teach the device of claim 10 but fail to teach wherein: a noise reduction module previously used the image pyramid to perform a noise reduction operation; and receiving the image pyramid comprises receiving the image pyramid from the noise reduction module without regenerating the image pyramid.
However, Chang teaches wherein: a noise reduction module previously used the image pyramid to perform a noise reduction operation; and receiving the image pyramid comprises receiving the image pyramid from the noise reduction module without regenerating the image pyramid (Fig. 2 shows the steps of first determining image pyramids from the image in step 11. Then, the pyramid layers are used in subsequent noise reduction processing in step 12. Fig. 5 shows the processing steps applied to each pyramid layer.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by using the image pyramid during noise reduction and subsequent processing. This modification would allow for noise reduction to be processed on multiple pyramid levels, which would enhance the image at multiple resolutions to further enhance edges and dynamic range ([Chang 0045-0046] “improvement of the visual quality has to be achieved by noise reducing processing of the acquired images... The approach of the present invention aims at performing noise reduction even in presence of image structures by applying locally an adaptive anisotropic filter kernel, i.e. by averaging along edges or lines… The gradients controlling the filter process are derived from the next coarser layers of the Gaussian or Laplacian Pyramid images. In this way, the required smoothing of the gradients is easily achieved.”). Additionally, using the same pyramid without reconstruction across different modules would improve efficiency by avoiding redundancy and wasted computing power; for example, utilizing intra-frame processing with Laplacian pyramids would typically require creating the pyramid(s) and performing noise reduction on each layer before reconstructing into an output image ([Chang 0045] “In this particular case as in all single image acquisition modalities, noise reduction is restricted to intra-frame processing.”).
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Breckon (WO 2013/034878 A2) in view of Kuang et al. (iCAM06: A refined image appearance model for HDR image rendering. Journal of Visual Communication and Image Representation. 18. 406–414.), and further in view of Greenspan et al. (Combining Image-Processing and Image Compression Schemes. NASA TDA Progress Report 42-120. Available Online: https://tmo.jpl.nasa.gov/1990-1999/progress_report/42-120/120G.pdf.), hereafter Greenspan.
Regarding claim 3, Breckon and Kuang teach the circuitry-implemented method of claim 1. Breckon further teaches wherein constructing the base image pyramid comprises:
using a convolved initial level of the image pyramid as an initial level of the base image pyramid ([Page 2, lines 19-20] “starting with data level Un, successively upsampling using a Gaussian filter n-1 times to create a second Gaussian pyramid having a base data level D1;”).
Although Breckon teaches constructing a base image pyramid by using an initial level of the image pyramid and constructing additional levels using upscaled versions of the previous level, Breckon does not mention merging upscaled versions of previous levels of the base pyramid with the corresponding level in the image pyramid. Thus, Breckon and Kuang fail to teach for each successive additional level of the base image pyramid, merging a corresponding convolved level of the image pyramid with an upscaled version of a previous level of the base image pyramid.
However, Greenspan teaches for each successive additional level of the base image pyramid, merging a corresponding convolved level of the image pyramid with an upscaled version of a previous level of the base image pyramid ([Page 58, par. 1] “The reconstruction process entails adding to a given LPF version of the image, GN, the bandpass images, Ln (n = N-1, …, 0), thus reconstructing the Gaussian pyramid, level by level, up to the original input image, G0. This is a recursive process, as in Eq. (4): Gn = Ln + G(n+1)i (n = N-1, …, 0),” where G(n+1)i is the interpolated version of Gn+1).
Breckon and Greenspan are analogous art to the claimed invention, because both teach methods of utilizing image pyramids for determining detailed information from a base image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve Breckon’s invention by utilizing Equation 4 taught by Greenspan for constructing the base pyramid, because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Breckon’s method of constructing a base pyramid and Equation 4 of Greenspan perform the same general and predictable function, namely obtaining a pyramid of lower-level information from a pyramid of higher-level information. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
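For illustration only, the level-by-level reconstruction of Greenspan’s Eq. (4) can be sketched in a few lines of Python; the 2x2 box downsampling and nearest-neighbor interpolation below are simplified stand-ins (assumptions), not the 5 x 5 Gaussian filters of the cited references:

```python
import numpy as np

def downsample(img):
    # 2x2 box averaging, a stand-in for Gaussian-filtered downsampling
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbor interpolation, a stand-in for Gaussian upsampling
    return img.repeat(2, axis=0).repeat(2, axis=1)

def build_pyramids(g0, n_levels):
    # Gaussian pyramid G0..GN, then bandpass levels Ln = Gn - interpolate(G(n+1))
    g = [g0]
    for _ in range(n_levels):
        g.append(downsample(g[-1]))
    lap = [g[i] - upsample(g[i + 1]) for i in range(n_levels)]
    return g, lap

def reconstruct(g_top, lap):
    # Eq. (4): Gn = Ln + G(n+1)i, applied level by level up to G0
    g = g_top
    for ln in reversed(lap):
        g = ln + upsample(g)
    return g

rng = np.random.default_rng(0)
image = rng.random((16, 16))
g, lap = build_pyramids(image, 3)
restored = reconstruct(g[-1], lap)
assert np.allclose(restored, image)  # reconstruction is exact
```

With these definitions the recursion recovers the original input exactly, which is the property the claim mapping relies on.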
Regarding claim 12, Breckon and Greenspan teach the device of claim 10. Breckon further teaches wherein constructing the base image pyramid comprises: using a convolved initial level of the image pyramid as an initial level of the base image pyramid ([Page 2, lines 19-20] “starting with data level Un, successively upsampling using a Gaussian filter n-1 times to create a second Gaussian pyramid having a base data level D1;”).
Although Breckon teaches constructing a base image pyramid by using an initial level of the image pyramid and constructing additional levels using upscaled versions of the previous level, Breckon does not mention merging upscaled versions of previous levels of the base pyramid with the corresponding level in the image pyramid. Thus, Breckon and Kuang fail to teach for each successive additional level of the base image pyramid, merging a corresponding convolved level of the image pyramid with an upscaled version of a previous level of the base image pyramid.
However, Greenspan teaches for each successive additional level of the base image pyramid, merging a corresponding convolved level of the image pyramid with an upscaled version of a previous level of the base image pyramid ([Page 58, par. 1] “The reconstruction process entails adding to a given LPF version of the image, GN, the bandpass images, Ln (n = N-1, …, 0), thus reconstructing the Gaussian pyramid, level by level, up to the original input image, G0. This is a recursive process, as in Eq. (4): Gn = Ln + G(n+1)i (n = N-1, …, 0),” where G(n+1)i is the interpolated version of Gn+1).
Breckon and Greenspan are analogous art to the claimed invention, because both teach methods of utilizing image pyramids for determining detailed information from a base image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve Breckon’s invention by utilizing Equation 4 taught by Greenspan for constructing the base pyramid, because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Breckon’s method of constructing a base pyramid and Equation 4 of Greenspan perform the same general and predictable function, namely obtaining a pyramid of lower-level information from a pyramid of higher-level information. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Claims 6, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Breckon (WO 2013/034878 A2) in view of Kuang et al. (iCAM06: A refined image appearance model for HDR image rendering. Journal of Visual Communication and Image Representation. 18. 406–414.), and further in view of Kalkgruber et al. (US 2022/0377238 A1), hereafter Kalkgruber.
Regarding claim 6, Breckon and Kuang teach the method of claim 1. Breckon further teaches receiving an additional image pyramid of an additional image (Determining an image pyramid, such as a Gaussian pyramid, is well-known in the art. Gaussian pyramids are traditionally made by downsampling or upsampling an image, and Breckon teaches performing this method. [Page 2, lines 16-17] “…starting with the image data U1, successively down sampling using a Gaussian filter n-1 times to create a first Gaussian pyramid having an nth data level Un…”);
convolving levels of the additional image pyramid with the kernel ([Page 4, lines 6-9] “The Gaussian pyramid U comprises n levels, starting with an image U1 as the base with resolution w x h. Successively higher pyramid levels are derived via downsampling of the preceding pyramid level using a 5 x 5 Gaussian filter.” Additionally, Breckon describes the kernel on page 5, lines 10-18.);
constructing, based at least in part on the convolving of the subset of levels of the additional image pyramid, an additional base image pyramid ([Page 4, lines 10-12] “Un is used as the top level, Dn, of a second Gaussian pyramid D in order to derive its base D1. In this case, lower pyramid levels are derived via upsampling using a 5 x 5 Gaussian filter.”).
Breckon teaches utilizing the Gaussian pyramid and the constructed base pyramid for determining a saliency map rather than a detail layer; thus, Breckon fails to teach generating a detail layer of the image by subtracting a selected level of the base image pyramid from the image.
However, Kuang teaches generating a detail layer of the image by subtracting a selected level of the base image pyramid from the image ([Section 2.2] The detail layer is then obtained by subtracting the base layer image from the original image. Similar to Breckon, Kuang teaches using Gaussian filtering to blur the image and determine a base layer image. Kuang then teaches subtracting a base layer image from the original image to generate a detail layer. See Section 2.2 and Fig. 1.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by subtracting one of the generated base images from the original image to generate a detail layer. This modification would improve Breckon’s invention when further image processing, such as tone mapping and color enhancement, is performed on the image: by applying such processing only to the original image or the base image, and not to the detail image, the detail layer is preserved from quality loss. The detail image can then be added back to the processed image or base image ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
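The base/detail decomposition discussed above can be sketched as follows; a simple box blur stands in for Kuang’s bilateral filtering, and the tone-compression exponent is an arbitrary assumption chosen only for illustration:

```python
import numpy as np

def box_blur(img, k=5):
    # k x k box blur, a stand-in for Kuang's bilateral filter (assumption)
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(1)
image = rng.random((32, 32))

base = box_blur(image)        # low-frequency base layer
detail = image - base         # detail layer, per Kuang Section 2.2
compressed = base ** 0.7      # tone compression applied to the base only
result = compressed + detail  # details added back unchanged

assert np.allclose(base + detail, image)  # decomposition is lossless
```

Because the detail layer is held out of the tone-compression step and added back afterward, fine structure survives the processing, which is the rationale relied upon above.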
Additionally, although Breckon teaches the method of convolving levels of an image pyramid with the kernel, Breckon fails to teach in response to detecting a low-power mode, convolving a subset of levels of the additional image pyramid with the kernel.
However, Kalkgruber teaches in response to detecting a low-power mode (see the head-wearable apparatus of Kalkgruber’s claim 15, which includes circuitry for low-power operation with limited processing),
convolving a subset of levels of the additional image pyramid with the kernel (Kalkgruber specifically teaches a feature tracking method which tracks features at all levels of a constructed Gaussian pyramid, but the algorithm uses only a subset of the image pyramid when needing to conserve power and/or computing resources. [0037] “As a result, one or more of the methodologies described herein facilitate solving the technical problem of power consumption saving by identifying an optimal scale level for an image pyramid process to a current image… Examples of such computing resources include processor cycles, network traffic, memory usage, data storage capacity, power consumption, network bandwidth, and cooling capacity.”).
Breckon and Kalkgruber are analogous art to the claimed invention, because both teach methods of downscaling an image to create an image pyramid and using the image pyramid for identifying image features at multiple scales. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon and Kuang’s invention by displaying the enhanced image(s) on a head-mounted display. This modification would allow Breckon and Kuang’s methods to be applied to VR and AR devices, which often require similar saliency-identifying and image enhancement algorithms to provide clear images despite motion blur from head movement ([Kalkgruber 0032-0033] “However, for head-worn devices with built-in cameras, the cameras might be moved rapidly as the user shakes his/her head, causing severe motion blur in the images captured with the built-in cameras. Such rapid motion results in blurred high contrast areas. As a result, the feature detection and matching stage of the visual tracking system is negatively affected, and the overall tracking accuracy of the system suffers. A common strategy to mitigate motion blur is to perform the feature detection and matching on downsampled versions of the source and target image…”). Also, it would have been obvious to one of ordinary skill in the art to limit the number of pyramid layers that are processed during the low-power mode of a headset, since the algorithm can be accomplished using only one scale of the image ([0037] “As a result, one or more of the methodologies described herein facilitate solving the technical problem of power consumption saving by identifying an optimal scale level for an image pyramid process to a current image.”), similar to the claimed invention, which utilizes only one selected layer of the pyramid(s) for generating a detail layer.
Kuang also requires selecting only one base image resolution via bilateral Gaussian filtering and subtracting that base image to retrieve a single detail image.
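The idea of convolving only a subset of pyramid levels in a low-power mode can be sketched as follows; the 3x3 kernel, the choice to keep the two coarsest levels, and all function names are hypothetical stand-ins, not drawn from Kalkgruber:

```python
import numpy as np

def downsample(img):
    # 2x2 box averaging, a stand-in for Breckon's 5 x 5 Gaussian filter
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def convolve3x3(img, kernel):
    # direct 3x3 convolution with edge padding
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def convolve_pyramid(levels, kernel, low_power=False):
    # in low-power mode, convolve only the two coarsest (cheapest) levels
    selected = levels[-2:] if low_power else levels
    return [convolve3x3(lvl, kernel) for lvl in selected]

kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
rng = np.random.default_rng(2)
levels = [rng.random((16, 16))]
for _ in range(3):
    levels.append(downsample(levels[-1]))

assert len(convolve_pyramid(levels, kernel)) == 4
assert len(convolve_pyramid(levels, kernel, low_power=True)) == 2
```

Skipping the finest levels avoids the dominant share of the per-pixel work, which is the power-saving effect the rationale attributes to processing a subset of scales.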
Regarding claim 15, Breckon and Kuang teach the device of claim 10. Breckon further teaches the circuitry being further configured to: receive an additional image pyramid of an additional image (Determining an image pyramid, such as a Gaussian pyramid, is well-known in the art. Gaussian pyramids are traditionally made by downsampling or upsampling an image, and Breckon teaches performing this method. [Page 2, lines 16-17] “…starting with the image data U1, successively down sampling using a Gaussian filter n-1 times to create a first Gaussian pyramid having an nth data level Un…”);
convolve levels of the additional image pyramid with the kernel ([Page 4, lines 6-9] “The Gaussian pyramid U comprises n levels, starting with an image U1 as the base with resolution w x h. Successively higher pyramid levels are derived via downsampling of the preceding pyramid level using a 5 x 5 Gaussian filter.” Additionally, Breckon describes the kernel on page 5, lines 10-18.);
construct, based at least in part on the convolving of the subset of levels of the additional image pyramid, an additional base image pyramid ([Page 4, lines 10-12] “Un is used as the top level, Dn, of a second Gaussian pyramid D in order to derive its base D1. In this case, lower pyramid levels are derived via upsampling using a 5 x 5 Gaussian filter.”).
Breckon teaches utilizing the Gaussian pyramid and the constructed base pyramid for determining a saliency map rather than a detail layer; thus, Breckon fails to teach circuitry configured to generate a detail layer of the image by subtracting a selected level of the base image pyramid from the image.
However, Kuang teaches circuitry configured to generate a detail layer of the image by subtracting a selected level of the base image pyramid from the image ([Section 2.2] The detail layer is then obtained by subtracting the base layer image from the original image. Similar to Breckon, Kuang teaches using Gaussian filtering to blur the image and determine a base layer image. Kuang then teaches subtracting a base layer image from the original image to generate a detail layer. See Section 2.2 and Fig. 1.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by subtracting one of the generated base images from the original image to generate a detail layer. This modification would improve Breckon’s invention when further image processing, such as tone mapping and color enhancement, is performed on the image: by applying such processing only to the original image or the base image, and not to the detail image, the detail layer is preserved from quality loss. The detail image can then be added back to the processed image or base image ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
Additionally, although Breckon teaches the method of convolving levels of an image pyramid with the kernel, Breckon fails to teach in response to detecting a low-power mode, convolving a subset of levels of the additional image pyramid with the kernel.
However, Kalkgruber teaches in response to detecting a low-power mode (see the head-wearable apparatus of Kalkgruber’s claim 15, which includes circuitry for low-power operation with limited processing),
convolve a subset of levels of the additional image pyramid with the kernel (Kalkgruber specifically teaches a feature tracking method which tracks features at all levels of a constructed Gaussian pyramid, but the algorithm uses only a subset of the image pyramid when needing to conserve power and/or computing resources. [0037] “As a result, one or more of the methodologies described herein facilitate solving the technical problem of power consumption saving by identifying an optimal scale level for an image pyramid process to a current image… Examples of such computing resources include processor cycles, network traffic, memory usage, data storage capacity, power consumption, network bandwidth, and cooling capacity.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon and Kuang’s invention by displaying the enhanced image(s) on a head-mounted display. This modification would allow Breckon and Kuang’s methods to be applied to VR and AR devices, which often require similar saliency-identifying and image enhancement algorithms to provide clear images despite motion blur from head movement ([Kalkgruber 0032-0033] “However, for head-worn devices with built-in cameras, the cameras might be moved rapidly as the user shakes his/her head, causing severe motion blur in the images captured with the built-in cameras. Such rapid motion results in blurred high contrast areas. As a result, the feature detection and matching stage of the visual tracking system is negatively affected, and the overall tracking accuracy of the system suffers. A common strategy to mitigate motion blur is to perform the feature detection and matching on downsampled versions of the source and target image…”). Also, it would have been obvious to one of ordinary skill in the art to limit the number of pyramid layers that are processed during the low-power mode of a headset, since the algorithm can be accomplished using only one scale of the image ([0037] “As a result, one or more of the methodologies described herein facilitate solving the technical problem of power consumption saving by identifying an optimal scale level for an image pyramid process to a current image.”), similar to the claimed invention, which utilizes only one selected layer of the pyramid(s) for generating a detail layer. Kuang also requires selecting only one base image resolution via bilateral Gaussian filtering and subtracting that base image to retrieve a single detail image.
Regarding claim 19, Breckon teaches a system comprising circuitry configured to: receive an image pyramid of an image (Determining an image pyramid, such as a Gaussian pyramid, is well-known in the art. Gaussian pyramids are traditionally made by downsampling or upsampling an image, and Breckon teaches performing this method. [Page 2, lines 16-17] “…starting with the image data U1, successively down sampling using a Gaussian filter n-1 times to create a first Gaussian pyramid having an nth data level Un…”);
convolve each level of the image pyramid with a kernel ([Page 4, lines 6-9] “The Gaussian pyramid U comprises n levels, starting with an image U1 as the base with resolution w x h. Successively higher pyramid levels are derived via downsampling of the preceding pyramid level using a 5 x 5 Gaussian filter.” Additionally, Breckon describes the kernel on page 5, lines 10-18.);
construct, based at least in part on the convolving of each level of the image pyramid, a base image pyramid ([Page 4, lines 10-12] “Un is used as the top level, Dn, of a second Gaussian pyramid D in order to derive its base D1. In this case, lower pyramid levels are derived via upsampling using a 5 x 5 Gaussian filter.”);
Breckon teaches utilizing the Gaussian pyramid and the constructed base pyramid for determining a saliency map rather than a detail layer; thus, Breckon fails to teach generating a detail layer of the image by subtracting a selected level of the base image pyramid from the image, wherein the detail layer is used to generate a display version of the image for the head-mounted display.
However, Kuang teaches circuitry configured to generate a detail layer of the image by subtracting a selected level of the base image pyramid from the image ([Section 2.2] The detail layer is then obtained by subtracting the base layer image from the original image. Similar to Breckon, Kuang teaches using Gaussian filtering to blur the image and determine a base layer image. Kuang then teaches subtracting a base layer image from the original image to generate a detail layer. See Section 2.2 and Fig. 1.),
wherein the detail layer is used to generate a display version of the image ([Section 3.2] “…results were displayed on a colorimetric characterized 23-inch Apple Cinema HD LCD Display…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by subtracting one of the generated base images from the original image to generate a detail layer. This modification would improve Breckon’s invention when further image processing, such as tone mapping and color enhancement, is performed on the image: by applying such processing only to the original image or the base image, and not to the detail image, the detail layer is preserved from quality loss. The detail image can then be added back to the processed image or base image ([Kuang Section 2.2] “The modules of chromatic adaptation and tone-compression processing are only applied to the base layer, thus preserving details in the image.” See Section 2.2 for a further discussion.).
Additionally, although Kuang teaches using the detail layer to generate a display version of the image, Kuang does not teach that the display is a head-mounted display. However, Kalkgruber teaches a head-mounted display ([0020] “FIG. 15 illustrates a network environment in which a head-wearable device can be implemented according to one example embodiment.”).
Breckon and Kalkgruber are analogous art to the claimed invention, because both teach methods of downscaling an image to create an image pyramid and using the image pyramid for identifying image features at multiple scales. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon and Kuang’s invention by displaying the enhanced image(s) on a head-mounted display. This modification would allow Breckon and Kuang’s methods to be applied to VR and AR devices, which often require similar saliency-identifying and image enhancement algorithms to provide clear images despite motion blur from head movement ([Kalkgruber 0032-0033] “However, for head-worn devices with built-in cameras, the cameras might be moved rapidly as the user shakes his/her head, causing severe motion blur in the images captured with the built-in cameras. Such rapid motion results in blurred high contrast areas. As a result, the feature detection and matching stage of the visual tracking system is negatively affected, and the overall tracking accuracy of the system suffers. A common strategy to mitigate motion blur is to perform the feature detection and matching on downsampled versions of the source and target image, if matching on the original image resolution fails due to motion blur. While visual information is lost in the downsampled image version, the motion blur is reduced. Thus, feature matching becomes more reliable. Often, images are downsampled multiple times to obtain different resolutions for different severities of motion blur, and the set of all different versions is referred to as an image pyramid.”).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Breckon (WO 2013/034878 A2) in view of Kuang et al. (iCAM06: A refined image appearance model for HDR image rendering. Journal of Visual Communication and Image Representation. 18. 406–414.) and Kalkgruber et al. (US 2022/0377238 A1), and further in view of Chang (US 2010/0142790 A1).
Regarding claim 20, Breckon, Kuang, and Kalkgruber teach the system of claim 19. However, all fail to teach wherein: a noise reduction module previously used the image pyramid to perform a noise reduction operation; and receiving the image pyramid comprises receiving the image pyramid from the noise reduction module without regenerating the image pyramid.
However, Chang teaches wherein: a noise reduction module previously used the image pyramid to perform a noise reduction operation; and receiving the image pyramid comprises receiving the image pyramid from the noise reduction module without regenerating the image pyramid (Fig. 2 shows the steps of first determining image pyramids from the image in step 11. Then, the pyramid layers are used in subsequent noise reduction processing in step 12. Fig. 5 shows the processing steps applied to each pyramid layer.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Breckon’s invention by using the image pyramid during noise reduction and subsequent processing. This modification would allow noise reduction to be performed on multiple pyramid levels, which would enhance the image at multiple resolutions to further enhance edges and dynamic range ([Chang 0045-0046] “improvement of the visual quality has to be achieved by noise reducing processing of the acquired images... The approach of the present invention aims at performing noise reduction even in presence of image structures by applying locally an adaptive anisotropic filter kernel, i.e. by averaging along edges or lines… The gradients controlling the filter process are derived from the next coarser layers of the Gaussian or Laplacian Pyramid images. In this way, the required smoothing of the gradients is easily achieved.”). Additionally, using the same pyramid without reconstruction across different modules would improve efficiency by avoiding redundancy and wasted computing power; for example, utilizing intra-frame processing with Laplacian pyramids would typically require creating the pyramid(s) and performing noise reduction on each layer before reconstructing into an output image ([Chang 0045] “In this particular case as in all single image acquisition modalities, noise reduction is restricted to intra-frame processing.”).
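The pyramid-reuse rationale can be sketched as follows; the module boundaries, function names, and the box-blur "denoising" are hypothetical stand-ins for Chang’s adaptive anisotropic filtering, chosen only to illustrate passing one pyramid between modules without rebuilding it:

```python
import numpy as np

def build_pyramid(img, n_levels):
    # Gaussian-style pyramid via 2x2 box averaging (a stand-in filter)
    levels = [img]
    for _ in range(n_levels):
        img = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))
        levels.append(img)
    return levels

def box_blur(lvl):
    # trivial 3x3 mean filter standing in for per-level noise reduction
    padded = np.pad(lvl, 1, mode='edge')
    return sum(padded[dy:dy + lvl.shape[0], dx:dx + lvl.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def noise_reduction_module(image, n_levels=3):
    # builds the pyramid once, denoises each level, and hands the pyramid on
    return [box_blur(lvl) for lvl in build_pyramid(image, n_levels)]

def detail_module(pyramid):
    # consumes the pyramid produced upstream without regenerating it
    upsampled = pyramid[1].repeat(2, axis=0).repeat(2, axis=1)
    return pyramid[0] - upsampled

rng = np.random.default_rng(3)
image = rng.random((16, 16))
pyramid = noise_reduction_module(image)   # pyramid built exactly once
detail = detail_module(pyramid)           # reused, not rebuilt
assert detail.shape == (16, 16)
```

Because the downstream module receives the already-built pyramid, the decomposition and reconstruction work is not duplicated, which is the efficiency gain the rationale above attributes to the combination.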
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zhai et al. (US 8,639,056 B2) teaches a method for contrast enhancement which involves computing a Laplacian pyramid for an input image, computing a contrast boost image pyramid and applying it to the Laplacian pyramid to produce a contrast-enhanced Laplacian pyramid, and the contrast-enhanced Laplacian pyramid is used to construct a contrast-enhanced image.
Paris et al. (US 8,831,340 B2) teaches a method of tone mapping HDR images. The method involves separating an image into a detail and base layer, performing tone mapping on the base layer, then applying the detail layer to the base layer to obtain an enhanced image without losing details.
Hoppe et al. (US 8,340,415 B2) teaches a method for generating an image pyramid. The method involves obtaining a coarse and a fine image of the same subject, creating an image pyramid from the fine image, and enhancing an image using fine details from the image and color detail from the coarse image.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC JAMES SHOEMAKER whose telephone number is (571)272-6605. The examiner can normally be reached Monday through Friday from 8am to 5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JENNIFER MEHMOOD, can be reached at (571)272-2976. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Eric Shoemaker/
Patent Examiner
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664