Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Alakuijala et al. (US 2018/0316927 A1; further referred to as Alakuijala) in view of Mlinar (US 2015/0381869 A1).
4. Regarding claim 1, an apparatus (…Alakuijala, in [0016], teaches electronic environment 100; Fig. 1…), comprising:
at least one memory (…wherein [0020] teaches memory 126; Fig. 1…); and
at least one processor coupled to the at least one memory (…[0020-21] teaches one or more of the components of a color image processing computer 120 including processors (e.g., processing units 124 ), wherein processing units 124 and memory 126 together form a control circuitry; further, [0022] teaches a color image manager 130 which is part of element 120; Fig. 1…),
the at least one processor configured to:
obtain image information from an image sensor including an array of pixels (…wherein [0018] teaches element 112 being configured to receive color image data which may be represented as pixels in an array…).
Alakuijala does not teach:
the array of pixels including focus pixels;
obtain phase detection (PD) pixel information from the focus pixels of the image sensor.
However, Mlinar teaches image processing methods for image sensors with phase detection pixels to include:
the array of pixels including focus pixels (…wherein Mlinar, in [0045], with regards to Figs. 6-13, teaches pixel arrays including phase detection pixel arrangements, wherein as taught in [0088] phase detection pixel signals can be used in autofocus operations…);
obtain phase detection (PD) pixel information from the focus pixels of the image sensor (…wherein [0069] teaches processing circuitry may determine whether imaged objects in a scene are in focus using information from phase detection pixel outputs; further, [0070] teaches the processing circuitry may perform a first cross-correlation algorithm using pixel signals from red phase detection pixels in the row and a second cross-correlation algorithm using pixel signals from green phase detection pixels in the row, whereby the red data cross-correlation results may be merged with the green data cross-correlation results to determine a phase difference.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that phase detection pixels included in an image sensor (pixel array), as taught by Mlinar, could have been implemented in the array of pixels as taught by Alakuijala so as to obtain phase difference data between different colors…);
Alakuijala in view of Mlinar further teaches the at least one processor configured to:
generate a reference focus map based on PD pixel information for a first color channel (…Alakuijala, in [0041], with regards to Fig. 3A, teaches an example of a color image 300, wherein a nominal image (reference image, 310(G)) is represented in the green channel…);
generate a first focus map based on PD pixel information for a second color channel (…Alakuijala, in [0041], with regards to Fig. 3A, teaches an example of a color image 300, wherein image 310(R) is represented in the red channel (outlined by a dashed edge)…);
align the first focus map to the reference focus map to detect a misalignment between the reference focus map and the first focus map (…wherein Fig. 3A can further be viewed as a focus map showing the displacement between a nominal reference image of one color channel and an image of another color channel…);
determine a magnitude of the misalignment (…wherein [0044] teaches that a displacement value can be represented by a vector of horizontal and vertical components; thus a magnitude and direction of displacement can be determined…); and
apply chromatic aberration correction to the image information based on the detected misalignment and the magnitude of the misalignment (…wherein [0039] teaches element 120 performing chromatic aberration correction with reduced separation between the edge of the first and second image…).
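By way of illustration only (not a teaching of either reference), the claimed sequence of generating per-channel focus maps, detecting a misalignment, determining its magnitude, and applying a correction may be sketched as follows. The 1-D focus signals, function names, and cross-correlation approach below are hypothetical and illustrative:

```python
import numpy as np

def misalignment(reference_map, channel_map):
    """Estimate the shift (in pixels) that best aligns channel_map
    to reference_map via 1-D cross-correlation (illustrative only)."""
    ref = reference_map - reference_map.mean()
    ch = channel_map - channel_map.mean()
    corr = np.correlate(ch, ref, mode="full")
    # Offset of the correlation peak from the zero-lag position.
    return int(np.argmax(corr)) - (len(ref) - 1)

def correct_channel(channel, shift):
    """Apply a simple correction by shifting the channel back by the
    detected misalignment (nearest-pixel, illustrative only)."""
    return np.roll(channel, -shift)

# Hypothetical focus maps: the red channel edge is displaced by 2 pixels
# relative to the green (reference) channel.
green = np.array([0, 0, 1, 5, 9, 9, 9, 9, 0, 0], dtype=float)
red = np.roll(green, 2)

shift = misalignment(green, red)          # detected misalignment
magnitude = abs(shift)                    # magnitude of the misalignment
corrected = correct_channel(red, shift)   # red edge realigned to green
```

In this sketch the green channel serves as the nominal reference, consistent with Alakuijala's [0041], and the correlation-peak offset plays the role of the displacement value of [0044].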
5. Regarding claim 2, Alakuijala in view of Mlinar teaches the apparatus of claim 1 (see claim 1 above), wherein,
to apply the chromatic aberration correction, the at least one processor is further configured to sharpen the second color channel based on the magnitude of the misalignment (…wherein [0003] teaches that a chromatic aberration corrected image has reduced separation between the edges of a first and second image; [0015] teaches that the chromatic aberration of an imaging system may be represented as a vector displacement map. A vector possesses both a magnitude and a direction; therefore, a displacement (misalignment) is identified in part by its magnitude. [0039] further teaches chromatic aberration represented as a vector displacement between an edge of a green image and an edge of a red image, wherein [0041] further states that the green image (channel) is considered to be unaffected by chromatic aberration. Thus, the reduced separation taught in [0003] is viewed as a sharpening or focusing result of an image edge (e.g., red) being corrected in accordance with a nominal (reference) image (e.g., green)…).
6. Regarding claim 3, Alakuijala in view of Mlinar teaches the apparatus of claim 1 (see claim 1 above), wherein,
the at least one processor is further configured to generate a shift map based on an alignment between pixels of the first focus map and the reference focus map (…wherein [0039] teaches that chromatic aberration may be represented as a vector displacement between the edge of a green image and an edge of a red image; [0015] further teaches the chromatic aberration of an imaging system may be represented as a vector displacement map between a red channel and a green channel of a color image, a blue channel and a green channel of the color image, or both…).
7. Regarding claim 4, Alakuijala in view of Mlinar teaches the apparatus of claim 3 (see claim 3 above), wherein,
to apply the chromatic aberration correction, the at least one processor is further configured to warp the second color channel based on the shift map (…wherein [0039] teaches a reduced separation between the edges of the first and second image, [0041] teaches the green channel as the nominal image (no chromatic aberration). Thus, it is apparent that the second image is adjusted in accordance with the nominal image…).
8. Regarding claim 5, Alakuijala in view of Mlinar teaches the apparatus of claim 4 (see claim 4 above), wherein,
the at least one processor is configured to warp the second color channel on a per pixel basis (…wherein Alakuijala, in [0043], teaches that each displacement datum is a single value within a region defined by a pixel; thus, the reduced separation between the edges of the first and second images, as taught in [0039], would be applied on a per-pixel basis in accordance with [0043]…).
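For clarity of the record, a per-pixel warp driven by a vector displacement (shift) map, where each pixel carries its own displacement datum, may be sketched as follows. The array shapes, sampling scheme, and names below are hypothetical and illustrative, not drawn from either reference:

```python
import numpy as np

def warp_channel(channel, shift_map):
    """Warp a color channel on a per-pixel basis: each output pixel is
    sampled from the input at a location offset by that pixel's own
    displacement vector (nearest-neighbor sampling, illustrative only)."""
    h, w = channel.shape
    out = np.empty_like(channel)
    for y in range(h):
        for x in range(w):
            dy, dx = shift_map[y, x]          # per-pixel displacement vector
            sy = min(max(y + dy, 0), h - 1)   # clamp to image bounds
            sx = min(max(x + dx, 0), w - 1)
            out[y, x] = channel[sy, sx]
    return out

# Hypothetical 4x4 red channel displaced 1 pixel to the right everywhere.
red = np.arange(16).reshape(4, 4)
shift_map = np.zeros((4, 4, 2), dtype=int)
shift_map[..., 1] = 1                         # dx = +1 for every pixel
warped = warp_channel(red, shift_map)
```

Because every pixel reads its own (dy, dx) pair, the warp is per-pixel in the sense of [0043], even though this example uses a uniform displacement for simplicity.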
9. Regarding claim 6, Alakuijala in view of Mlinar teaches the apparatus of claim 1 (see claim 1 above), wherein
the at least one processor is further configured to generate focus maps based on a difference in light intensity between two or more photodiodes of one or more focus pixels (…wherein Mlinar, in [0084], teaches that processing circuitry 16 may determine a brightness gradient associated with an edge at phase detection pixels, wherein an edge is held to be present when the brightness gradient differences along the horizontal and vertical directions at the phase detection pixels satisfy the formulas in [0084]. [0085] further teaches determining the edge brightness gradient at a pair of phase detection pixels using nearby pixels, as part of the gathered data used to generate focus information; Fig. 15.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that phase detection pixels included in an image sensor (pixel array), as taught by Mlinar, could have been implemented in the array of pixels as taught by Alakuijala so as to obtain phase difference data between different colors…).
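By way of illustration only, a focus map derived from the intensity difference between the paired photodiodes of each focus pixel may be sketched as follows. The readout values and function name are hypothetical, not taken from Mlinar:

```python
import numpy as np

def focus_map_from_pd_pairs(left, right):
    """Generate a simple focus map as the signed light intensity
    difference between the left and right photodiodes of each focus
    pixel; near-zero values indicate in-focus regions (illustrative)."""
    return left.astype(float) - right.astype(float)

# Hypothetical PD pixel readouts: in-focus pixels read nearly equal
# left/right intensities; defocused pixels show a nonzero difference.
left = np.array([10.0, 12.0, 30.0, 31.0])
right = np.array([10.0, 12.0, 22.0, 25.0])
fmap = focus_map_from_pd_pairs(left, right)
```

In such a sketch, the first two focus pixels would map to an in-focus region and the last two to a defocused region.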
10. Regarding claim 7, Alakuijala in view of Mlinar teaches the apparatus of claim 1 (see claim 1 above), wherein
the at least one processor is further configured to align the first focus map to the reference focus map based on a focused area (…[0003] teaches that a chromatic aberration corrected image has reduced separation between the edges of a first and second image. As such, [0051] teaches a processing unit adds the displacement maps to a respective image (“e.g., the blurred, weighted green-red displacement map to the green image”) so as to produce a corrected color image (viewed as an alignment); wherein the L-shaped object as depicted in Fig. 3A (Alakuijala) may be viewed as a focused area within image 300…).
11. Regarding claim 8, Alakuijala in view of Mlinar teaches the apparatus of claim 1 (see claim 1 above), wherein
the first color channel comprises a color channel used for focusing (…Alakuijala, in [0041], with regards to Fig. 3A, teaches an example of a color image 300, wherein a nominal image (reference image, 310(G)) is represented in the green channel…).
12. Regarding claim 9, Alakuijala in view of Mlinar teaches the apparatus of claim 8 (see claim 8 above), wherein
the first color channel comprises a green color channel (…Alakuijala, in [0041], with regards to Fig. 3A, teaches an example of a color image 300, wherein a nominal image (reference image, 310(G)) is represented in the green channel…).
13. Regarding claim 10, Alakuijala in view of Mlinar teaches the apparatus of claim 1 (see claim 1 above), wherein
the second color channel comprises one of a red or blue color channel (…Alakuijala, in [0041], with regards to Fig. 3A, teaches an example of a color image 300, wherein image 310(R) is represented in the red channel (outlined by a dashed edge)…).
14. Regarding claim 11, Alakuijala in view of Mlinar teaches the apparatus of claim 1 (see claim 1 above), further comprising the image sensor (…wherein Alakuijala, in [0022-0023], teaches element 130 being configured to receive color image data which may be represented as pixels in an array. [0018] further teaches imaging element 112…).
15. Regarding claim 12, claim 12 is rejected for reasons related to claim 1.
16. Regarding claim 13, claim 13 is rejected for reasons related to claim 2.
17. Regarding claim 14, claim 14 is rejected for reasons related to claim 3.
18. Regarding claim 15, claim 15 is rejected for reasons related to claim 4.
19. Regarding claim 16, claim 16 is rejected for reasons related to claim 5.
20. Regarding claim 17, claim 17 is rejected for reasons related to claim 6.
21. Regarding claim 18, claim 18 is rejected for reasons related to claim 7.
22. Regarding claim 19, claim 19 is rejected for reasons related to claim 8.
23. Regarding claim 20, Alakuijala in view of Mlinar teaches the method of claim 19 (see claim 19 above), wherein
the first color channel comprises a green color channel (…Alakuijala, in [0041], with regards to Fig. 3A, teaches an example of a color image 300, wherein a nominal image (reference image, 310(G)) is represented in the green channel…), and wherein
the second color channel comprises one of a red or blue color channel (…Alakuijala, in [0041], with regards to Fig. 3A, teaches an example of a color image 300, wherein image 310(R) is represented in the red channel (outlined by a dashed edge)…).
Conclusion
24. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SURAFEL YILMAKASSAYE whose telephone number is (703)756-1910. The examiner can normally be reached Monday-Friday 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TWYLER HASKINS can be reached at (571)272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SURAFEL YILMAKASSAYE/Examiner, Art Unit 2639
/TWYLER L HASKINS/Supervisory Patent Examiner, Art Unit 2639