DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
By preliminary amendment of 11/20/23, claims 21-23 were canceled.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/20/23 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “pixel corrector configured to perform”, “local saturation monitor configured to determine”, and “color distortion restorer configured to output a corrected image patch” in claim 17.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 7-8, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US PG Pub 2011/0254982 to Seo.
Regarding claim 1. Seo discloses an image sensing device (“a method for detecting/correcting a bad pixel in an image sensor”, paragraph 1) comprising:
an image sensor configured to output a raw image (“detecting/correcting a bad pixel in an image sensor”, paragraph 15, a pixel in an image sensor is by definition raw image data) by capturing an image of a subject (“image frames are captured and stored”, paragraph 16); and
an image signal processor (“image sensor chips”, paragraph 4) configured to
perform a bad pixel correction process on the raw image on an image patch-by-image patch unit (“a method for correcting a bad pixel in an image sensor includes: a first step of creating and storing a plurality of image frames”, paragraph 8, note frame=patch; “a second step of scanning pixels in the image frames; a third step of storing, when a bad pixel is detected in the image frames, the location of the bad pixel as a bad block location in a memory; and a fourth step of correcting the luminance value of the bad block by calling the bad block location stored in the memory in an image capture operation”, paragraph 8),
determine a state of local saturation of an image patch of the raw image based on a number of saturated pixels whose pixel values exceed a local saturation threshold value (“a bad block location stored in a memory is called and a bad block located in an image frame is divided into n*m sub-blocks (S41). A luminance value of the first sub-block among the n*m sub-blocks is detected”, paragraph 37-38, “by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value”, paragraph 20, high luminance above a threshold=saturation), and
output a corrected image patch, which is obtained by correcting a pixel of the image patch with a corrected pixel value, with a replacement pixel value, or with a raw pixel value depending on the state of local saturation of the image patch (“a correction is made by substituting the luminance value of the current sub-block by the luminance value of the previous sub-block (S46)”, paragraph 43).
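The flow the examiner maps onto claim 1 — count the pixels in a patch that exceed a saturation threshold, classify the patch from that count, and then choose between corrected, replacement, and raw pixel values — can be sketched as follows. This is an illustrative sketch only; the threshold constants, function names, and NumPy handling are assumptions of this sketch, not taken from Seo or from the claims as filed.

```python
import numpy as np

# Hypothetical constants for illustration only (not from Seo or the claims).
LOCAL_SAT_THRESHOLD = 240   # pixel value above which a pixel counts as saturated
NOT_SATURATED_MAX = 2       # at most this many saturated pixels -> not saturated
BURNT_MIN = 20              # more than this many saturated pixels -> burnt

def classify_patch(patch: np.ndarray) -> str:
    """Classify an image patch by its number of saturated pixels."""
    n_saturated = int(np.count_nonzero(patch > LOCAL_SAT_THRESHOLD))
    if n_saturated <= NOT_SATURATED_MAX:
        return "not_saturated"
    if n_saturated <= BURNT_MIN:
        return "locally_saturated"
    return "burnt"

def correct_patch(patch: np.ndarray, corrected: np.ndarray,
                  replacement_value: int) -> np.ndarray:
    """Choose the output pixel source based on the patch's saturation state."""
    state = classify_patch(patch)
    out = corrected.copy()                    # start from bad-pixel-corrected values
    saturated = patch > LOCAL_SAT_THRESHOLD
    if state == "locally_saturated":
        out[saturated] = replacement_value    # saturated pixels take a replacement value
    elif state == "burnt":
        out[saturated] = patch[saturated]     # saturated pixels revert to raw values
    return out
```

With the hypothetical thresholds above, a 5x5 patch containing five saturated pixels would be treated as locally saturated and those pixels replaced, while a fully saturated patch would be classified as burnt and its saturated pixels passed through as raw values.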
Regarding claim 7. Seo discloses to perform the bad pixel correction process on the image patch based on a difference between a current pixel value of a center pixel of the image patch and an ideal center pixel value exceeding a correction threshold value (“it is determined whether the difference ratio of the current sub-block is greater than a predetermined threshold ratio (S44), and it is determined whether the difference between the difference ratio of the previous sub-block and the difference ratio of the current sub-block is greater than a threshold ratio (S45)”, paragraph 42).
Regarding claim 8. Seo discloses an operating method of an image signal processor (“a method for detecting/correcting a bad pixel in an image sensor”, paragraph 1), comprising:
receiving a raw image and image information regarding the raw image from an image sensor (“detecting/correcting a bad pixel in an image sensor”, paragraph 15, a pixel in an image sensor is by definition raw image data, “image frames are captured and stored”, paragraph 16);
generating corrected pixel values by performing bad pixel correction on the raw image on an image patch-by-image patch basis (“a method for correcting a bad pixel in an image sensor includes: a first step of creating and storing a plurality of image frames”, paragraph 8, note frame=patch; “a second step of scanning pixels in the image frames; a third step of storing, when a bad pixel is detected in the image frames, the location of the bad pixel as a bad block location in a memory; and a fourth step of correcting the luminance value of the bad block by calling the bad block location stored in the memory in an image capture operation”, paragraph 8);
determining a state of local saturation of an image patch of the raw image; correcting the image patch with the corrected pixel values, with a replacement pixel value, or with raw pixel values depending on the state of local saturation of the image patch (“a bad block location stored in a memory is called and a bad block located in an image frame is divided into n*m sub-blocks (S41). A luminance value of the first sub-block among the n*m sub-blocks is detected”, paragraph 37-38, “by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value”, paragraph 20, high luminance above a threshold=saturation); and
outputting the corrected image patch (“a correction is made by substituting the luminance value of the current sub-block by the luminance value of the previous sub-block (S46)”, paragraph 43).
Regarding claim 17. Seo discloses an image signal processor (“a method for detecting/correcting a bad pixel in an image sensor”, paragraph 1) comprising:
a pixel corrector configured to perform a bad pixel correction process on a raw image on an image patch-by-image patch basis (“a method for correcting a bad pixel in an image sensor includes: a first step of creating and storing a plurality of image frames”, paragraph 8, note frame=patch; “a second step of scanning pixels in the image frames; a third step of storing, when a bad pixel is detected in the image frames, the location of the bad pixel as a bad block location in a memory; and a fourth step of correcting the luminance value of the bad block by calling the bad block location stored in the memory in an image capture operation”, paragraph 8);
a local saturation monitor configured to determine a state of local saturation of an image patch of the raw image based on a number of saturated pixels and store location information of the saturated pixels (“a bad block location stored in a memory is called and a bad block located in an image frame is divided into n*m sub-blocks (S41). A luminance value of the first sub-block among the n*m sub-blocks is detected”, paragraph 37-38, “by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value”, paragraph 20, high luminance above a threshold=saturation); and
a color distortion restorer configured to output a corrected image patch by correcting pixel values of the saturated pixels with a replacement pixel value or with raw pixel values (“a correction is made by substituting the luminance value of the current sub-block by the luminance value of the previous sub-block (S46)”, paragraph 43).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-4, 6, 9-14, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Seo as applied to claim 1 above, and further in view of US PG Pub 2015/0110372 to Solanki et al.
Regarding claim 2. Seo does not disclose to determine the image patch is not locally saturated based on the number of saturated pixels being equal to or less than a first threshold value, determine the image patch is locally saturated based on the number of saturated pixels being greater than the first threshold value and being equal to or less than a second threshold value, and determine the image patch is burnt based on the number of saturated pixels being greater than the second threshold value.
However, Solanki, in the same field of digital camera imaging, discloses to determine the image patch is not locally saturated based on the number of saturated pixels being equal to or less than a first threshold value, determine the image patch is locally saturated based on the number of saturated pixels being greater than the first threshold value and being equal to or less than a second threshold value, and determine the image patch is burnt based on the number of saturated pixels being greater than the second threshold value (“In one embodiment, the local saturation measure captures the pixels that have been correctly exposed in a neighborhood, by ignoring pixels that have been under-exposed or over-exposed. The correctly exposed pixels are determined by generating a binary mask M using two empirically estimated thresholds, S_lo for determining under-exposed pixels and S_hi for determining over-exposed pixels. At a pixel location (x, y) the binary mask is determined as: M(x, y) = 1 if S_lo < I(x, y) < S_hi, and 0 otherwise. The local saturation measure at location (x, y) is then determined as: I_Sat(x, y) = Σ M(x - i, y - j), summed over (i, j) in a neighborhood N of pixels about the location (x, y). In one embodiment, N is a circular patch of radius r pixels. In one embodiment, the following values can be used for an 8-bit image: S_lo = 40, S_hi = 240, r = 16. A normalized histogram is then computed over I_Sat to generate the saturation measure descriptors.”, paragraph 247).
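Solanki's quoted measure is concrete enough to render directly: a binary mask M flags correctly exposed pixels between S_lo and S_hi, and I_Sat sums that mask over a circular neighborhood of radius r. The sketch below uses the values Solanki gives for an 8-bit image (S_lo = 40, S_hi = 240, r = 16); the zero-padding at the borders and the shifted-sum loop are implementation assumptions of this sketch, not details from Solanki.

```python
import numpy as np

S_LO, S_HI, R = 40, 240, 16  # Solanki's stated values for an 8-bit image

def local_saturation_measure(image: np.ndarray) -> np.ndarray:
    """I_Sat(x, y): count of correctly exposed pixels in a radius-R disk."""
    # Binary mask M: 1 where S_lo < I(x, y) < S_hi, else 0
    mask = ((image > S_LO) & (image < S_HI)).astype(np.int64)
    h, w = image.shape
    # Circular neighborhood (disk) of radius R
    yy, xx = np.mgrid[-R:R + 1, -R:R + 1]
    disk = (xx * xx + yy * yy) <= R * R
    # Sum the mask over the disk via shifted adds (zero-padded borders)
    padded = np.pad(mask, R)
    i_sat = np.zeros((h, w), dtype=np.int64)
    for dy, dx in zip(*np.nonzero(disk)):
        i_sat += padded[dy:dy + h, dx:dx + w]
    return i_sat
```

For a uniformly well-exposed image, an interior pixel's I_Sat equals the number of pixels in the disk; for a fully over-exposed image the measure is zero everywhere, matching the examiner's reading that high luminance above a threshold corresponds to saturation.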
Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Seo’s method for correcting a bad pixel in an image sensor to determine that the image patch is not locally saturated based on the number of saturated pixels being equal to or less than a first threshold value, determine that the image patch is locally saturated based on the number of saturated pixels being greater than the first threshold value and equal to or less than a second threshold value, and determine that the image patch is burnt based on the number of saturated pixels being greater than the second threshold value.
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Seo’s method for correcting a bad pixel in an image sensor with the teaching of Solanki for the following reasons: (a) to flag blurry, saturated, or under-exposed images as being of poor quality and unsuitable (paragraph 402, Solanki); and (b) to provide the accurate detection and complete correction of a bad pixel, as taught by Seo at paragraph 6.
Regarding claim 3. Seo discloses wherein the image signal processor is configured to store location information of the saturated pixels based on the image patch being determined as locally saturated or burnt (“When the number of image frames is 3, it can be seen that a total of three image frames are created and stored by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value, as illustrated in FIG. 2(a)”, paragraph 20).
Regarding claim 4. Seo discloses wherein the image signal processor is configured to correct a pixel value with the replacement pixel based on the image patch being determined as locally saturated (“When the number of image frames is 3, it can be seen that a total of three image frames are created and stored by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value”, paragraph 20) and a center pixel of the image patch being included in the location information of the saturated pixels (Fig. 4).
Regarding claim 6. Seo discloses wherein the image signal processor is configured to correct pixel values of the saturated pixels with corresponding raw pixel values from the image patch (“Then, when an image capture is performed, the location information of bad blocks in the image frame is retrieved from the memory, and the sub-blocks in the corresponding block are corrected. As described above, the correction in the step S26 of FIG. 1 may be performed through the correction algorithm of FIG. 3. However, the block stored as a bad block may be directly substituted by the value of the previous sub-block without calculating the difference ratio of each sub-block. That is, the block determined as a bad block may be corrected by substituting the value of the previous sub-block by the value of the bad block, regardless of the threshold ratio”, paragraph 62-63).
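The fallback correction Seo describes here — directly substituting a bad sub-block's value with the previous sub-block's value, without computing difference ratios — reduces to a single scan over the sub-blocks. The flat (1-D) scan order and the precomputed bad-block flags below are assumptions of this sketch, not details from Seo.

```python
def substitute_previous_subblock(luminances, is_bad):
    """Replace every bad sub-block's luminance with its predecessor's value.

    luminances: per-sub-block luminance values in scan order
    is_bad:     parallel list of flags marking sub-blocks stored as bad
    """
    corrected = list(luminances)
    for i in range(len(corrected)):
        if is_bad[i] and i > 0:
            # Use the (possibly already corrected) previous sub-block value,
            # regardless of any threshold ratio.
            corrected[i] = corrected[i - 1]
    return corrected
```

For example, luminances [10, 99, 12] with only the middle sub-block flagged bad come back as [10, 10, 12]; a run of consecutive bad sub-blocks all inherit the last good value before the run.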
Regarding claim 9. Seo discloses wherein the determining the state of local saturation of the image patch comprises comparing current pixel values of pixels included in the image patch with a local saturation threshold value and determining pixels whose current pixel values exceed the local saturation threshold value as saturated pixels (“a bad block location stored in a memory is called and a bad block located in an image frame is divided into n*m sub-blocks (S41). A luminance value of the first sub-block among the n*m sub-blocks is detected”, paragraph 37-38, “by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value”, paragraph 20, high luminance above a threshold=saturation).
Regarding claim 10. Solanki discloses wherein the determining the state of local saturation of the image patch further comprises determining that the image patch is locally saturated, based on a number of saturated pixels being greater than a first threshold value, and determining that the image patch is burnt, based on the number of saturated pixels being greater than a second threshold value, which is greater than the first threshold value (paragraph 247, quoted in full in the rejection of claim 2 above).
Regarding claim 11. Solanki discloses wherein the determining the state of local saturation of the image patch further comprises determining that the image patch is not locally saturated based on the number of saturated pixels being equal to or less than the first threshold value, and outputting the corrected image patch by correcting the image patch only with the corrected pixel values (paragraph 247, quoted in full in the rejection of claim 2 above).
Regarding claim 12. Seo discloses wherein the determining the state of the local saturation of the image patch further comprises, storing location information of the saturated pixels based on the image patch being determined as being locally saturated or as being burnt (“storing, when a bad pixel is detected in the image frames, the location of the bad pixel as a bad block location in a memory; and a fourth step of correcting the luminance value of the bad block by calling the bad block location stored in the memory”, Abstract).
Regarding claim 13. Solanki discloses wherein the image information includes the local saturation threshold value, the first threshold value, and the second threshold value (paragraph 247, quoted in full in the rejection of claim 2 above).
Regarding claim 14. Seo discloses wherein the correcting the image patch includes correcting a pixel value of a center pixel of the image patch with the replacement pixel value based on the center pixel being included in the stored location information of the saturated pixels (“When the number of image frames is 3, it can be seen that a total of three image frames are created and stored by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value”, paragraph 20; Fig. 4).
Regarding claim 16. Seo discloses wherein the image patch includes restoring pixel values of the saturated pixels to corresponding raw pixel values from the image patch (“Then, when an image capture is performed, the location information of bad blocks in the image frame is retrieved from the memory, and the sub-blocks in the corresponding block are corrected. As described above, the correction in the step S26 of FIG. 1 may be performed through the correction algorithm of FIG. 3. However, the block stored as a bad block may be directly substituted by the value of the previous sub-block without calculating the difference ratio of each sub-block. That is, the block determined as a bad block may be corrected by substituting the value of the previous sub-block by the value of the bad block, regardless of the threshold ratio”, paragraph 62-63) based on the image patch being determined as burnt (“When the number of image frames is 3, it can be seen that a total of three image frames are created and stored by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value, as illustrated in FIG. 2(a)”, paragraph 20).
Regarding claim 18. Seo discloses wherein the local saturation monitor is configured to count a number of saturated pixels in the image patch whose raw pixel values exceed a local saturation threshold value and determine that the image patch is locally saturated, based on the number of saturated pixels being equal to or greater than a first threshold value (“a bad block location stored in a memory is called and a bad block located in an image frame is divided into n*m sub-blocks (S41). A luminance value of the first sub-block among the n*m sub-blocks is detected”, paragraph 37-38, “by capturing bright object images of white, yellow and blue having a luminance value greater than or equal to a threshold value”, paragraph 20, high luminance above a threshold=saturation).
Regarding claim 19. Seo discloses wherein the color distortion restorer is configured to correct a pixel value of a center pixel of the image patch with the replacement pixel value, based on the center pixel of the image patch being a saturated pixel and the image patch being a saturated image patch (“it is determined whether the difference ratio of the current sub-block is greater than a predetermined threshold ratio (S44), and it is determined whether the difference between the difference ratio of the previous sub-block and the difference ratio of the current sub-block is greater than a threshold ratio (S45)”, paragraph 42).
Allowable Subject Matter
Claims 5, 15 & 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Pat. 11,308,641 to Porta et al. discloses an apparatus including an interface and a processor. The interface may be configured to receive first video frames of a first field of view captured by a first capture device and second video frames of a second field of view captured by a second capture device. The first capture device and the second capture device may have a symmetrical orientation with respect to a vehicle. The fields of view may have an overlapping region. The processor may be configured to select the capture devices to operate as a stereo pair of cameras based on the symmetrical orientation, receive the video frames from the interface, detect an object located in the overlapping region, perform a comparison operation on the object based on the symmetrical orientation with respect to the video frames and determine a distance of the object from the vehicle in response to the comparison operation.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER D. WAIT, Esq., whose telephone number is (571) 270-5976. The examiner can normally be reached Monday-Friday, 9:30-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan, can be reached at (571) 270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CHRISTOPHER D. WAIT, Esq.
Primary Examiner
Art Unit 2683
/CHRISTOPHER WAIT/Primary Examiner, Art Unit 2683