Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 10-2020-0073247, filed on June 16, 2020.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/27/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 5/13/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 7/15/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 10/03/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 5/6/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 21-23 and 28-40 are rejected under 35 U.S.C. 103 as being unpatentable over Nishimura et al. (US 20190043209 A1), hereinafter Nishimura, in view of Sheikh et al. (US 20170310901 A1), hereinafter Sheikh.
Regarding claim 21, Nishimura teaches An image processing system comprising: (“automatic tuning of image signal processors” Nishimura, abstract) an image sensor; (“camera(s) 243” Nishimura, para. [0045])
a tuning module; (“reference-based tuning mechanism 110” Nishimura, para. [0031])
a memory configured to store a plurality of reference images; (“wherein the history database to store the tuning parameters and historical information including previous tuning settings, metadata associated with the tuning parameters, tuning ranges associated with the tuning parameters, and importance of the tuning parameters, wherein the raw database to maintain general information about the ISP, the tuning parameters, and the images.” Nishimura, para. [0138])
and processing circuitry, wherein the processing circuitry is configured to, receive a plurality of captured images from the image sensor, (“as facilitated by detection and observation logic 201, one or more of camera(s) 242 may be used to capture images or videos of a geographic location (whether that be indoors or outdoors) and its associated contents (e.g., furniture, electronic devices, humans, animals, trees, mountains, etc.) and form a set of images or a video stream.” Nishimura, para. [0045])
and perform the first image processing operation based on the set first parameter, and wherein the tuning module is configured to, generate the correlation information by comparing the first corrected image and the first reference image, and send the correlation information to the processing circuitry. (“In generating reference images, as facilitated by reference detection and generation logic 213, a reference image is different for each functionality block associated with an ISP/ISP pipeline. For example, nonlinear optimization, as facilitated by optimization logic 203, may be used to find an optimal set of tuning parameters that are then used to achieve output images with the best possible fitness measure against a corresponding reference image.” Nishimura, para. [0053])
However, Nishimura does not teach generate a first corrected image by performing a first image processing operation among a plurality of image processing operations on a first captured image based on a first default parameter, set a first parameter of the first image processing operation based on a correlation information between the first corrected image and a first reference image,
Sheikh teaches generate a first corrected image by performing a first image processing operation among a plurality of image processing operations on a first captured image based on a first default parameter, set a first parameter of the first image processing operation based on a correlation information between the first corrected image and a first reference image, (“The camera application reconfigures the camera firmware 202 to generate “tuning” parameters 224 that are more suitable for image texture retention. The custom “tuning” parameters 224 include parameters for defect pixel correction, noise filtering, color filter array interpolation (demosaicing), sharpness enhancement, and so forth. These custom tuning parameters 224 retain image information but may produce undesirable image effects in a single image 228.” Sheikh, para. [0038]), (“The video stabilization module 206 can use the first image as a reference frame to detect the points in the second image of the scene that are slightly displaced compared to the position of the point in the first image reference frame.” Sheikh, para. [0043]), and (“The PM align block 505 applies a brightness and color correction algorithm to the blend control parameters 446 to generate adjusted blend control parameters 515. As described above, the video stream 300 includes changes in exposure, focus, and white balance because the 3A lock has an off status the during video capture. In applying the brightness and color correction algorithm, the PM align block 505 analyzes brightness differences and color differences between frames in an overlap region among the N frames of the interpolated sliding window 458, adjusts the brightness or color of N−1 frames in the sliding window to match as closely as possible to the reference frame, and rejects from blending each frame having a brightness/color difference that is too large, as indicated by a determination that the frame has a brightness difference that exceeds a predetermined threshold or has a color difference that exceeds a predetermined threshold by the scene analyzer 444.” Sheikh, para. [0080])
Nishimura and Sheikh are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Nishimura in light of Sheikh’s teaching of generating a first corrected image. One would have been motivated to do so because it would help improve sharpness in the overall output composite frame. (Sheikh, para. [0083])
Regarding claim 22, Nishimura teaches wherein the image sensor is configured to obtain the first captured image by capturing the first reference image among the plurality of reference images. (“receive images associated with one or more scenes captured by one or more cameras” Nishimura, para. [0136])
Regarding claim 23, Nishimura teaches wherein the first image processing operation is associated with the first captured image among the plurality of image processing operations. (“As illustrated in FIG. 3C, reference images 347 are generated for any spatial noise reduction (e.g., Luma, Chroma, Bayer, etc.) by capturing static scenes from input images 341, such as using a tripod, to ensure there is no global motion.” Nishimura, para. [0097])
Regarding claim 28, Nishimura teaches wherein the plurality of image processing operations comprise at least one of a sensor correction operation, a lens distortion correction operation, a color correction operation, or an image quality improvement operation. (“camera module characterization may include performing camera module-specific calibration of ISP, such as black level, lens shading table, linearization, color correction calibration, etc., as facilitated by characterization and calculation logic 207.” Nishimura, para. [0067])
Regarding claim 29, Nishimura teaches wherein the sensor correction operation includes a defect pixel correction operation and an offset correction operation. (“It is contemplated that other logic blocks may be added to reference-based tuning mechanism 110 of FIG. 2, such as a defect pixel correction (DPC) block, which removes hot and cold pixels to produce defect-free images, including raw images in some embodiments, a temporal noise reduction block for temporally fuse scenes, and one or more of linearization, disparity correction, and color correction blocks for handling linearization, disparity, and color correction, respectively, by calibration of camera module, and further, a high dynamic range (HDR) tone mapping block for handling tone and contrast dealing with ground truth and/or reference algorithms that can use compute resources for producing best-quality images.” Nishimura, para. [0095])
Regarding claim 30, Nishimura teaches wherein the processing circuitry is configured to: sequentially generate a plurality of corrected images by performing an associated image processing operation on each of the plurality of captured images in an order of the plurality of image processing operations, and sequentially set parameters of the plurality of image processing operations based on a plurality of pieces of correlation information between the plurality of corrected images and the plurality of reference images in the order. (“an ISP may be optimized by providing ideal reference images for each functionality of the ISP for facilitating automatic ISP tuning by intermediate reference, as shown in FIG. 4A. Further, to enable this novel technique, in one embodiment, a pre-pipe and a post-pipe of an ISP may be constructed as shown with reference to FIG. 4B. It is contemplated and to be noted that terms like “ISP” and “ISP pipeline” are used interchangeably throughout this document.” Nishimura, para. [0077]), figs. 4A and 4B.
Regarding claim 31, Nishimura teaches image processing operations such as color and distortion correction.
Although Nishimura does not explicitly teach “wherein, in the order, the sensor correction operation precedes at least one of the lens distortion correction operation, the color correction operation and the image quality improvement operation,” it would have been understood by a person of ordinary skill in the art to perform lens distortion correction ahead of the other post-capture image processing operations, because correcting distortion introduced by the lens early prevents that distortion from propagating into the subsequent processing of the captured image.
Regarding claim 32, Nishimura teaches wherein the plurality of reference images are at least one of a white image, a black image, a grey image, a rectangular image, a color image, a resolution image, or a combination of at least two thereof. (“it is contemplated that each project can have several ISP pipelines to tune for different use cases and resolution configurations.” Nishimura, para. [0047])
Nishimura teaches image processing in digital cameras (Nishimura, para. [0019]). All digital images are resolution images, and since Nishimura teaches spatial noise reduction (Nishimura, para. [0009]) and Bayer noise reduction (Nishimura, para. [0055]), such denoise operations are necessarily performed on resolution images.
Regarding claim 33, Nishimura teaches wherein the white image is associated with the defect pixel correction operation, the black image is associated with at least one of the defect pixel correction operation or the offset correction operation, the grey image is associated with a gamma correction operation, the rectangular image is associated with the lens distortion correction operation, the color image is associated with at least one of a demosaic operation, a color gain correction operation, or a color correction matrix operation, and the resolution image is associated with a denoise operation.
Examiner’s note: as explained with respect to claim 32, all digital images are resolution images, and since Nishimura teaches spatial noise reduction (Nishimura, para. [0009]) and Bayer noise reduction (Nishimura, para. [0055]), such denoise operations are necessarily performed on resolution images.
Regarding claim 34, Nishimura teaches An image processing system comprising: (“automatic tuning of image signal processors” Nishimura, abstract)
a tuning module; (“reference-based tuning mechanism 110” Nishimura, para. [0031])
a memory configured to store a plurality of reference images; (“wherein the history database to store the tuning parameters and historical information including previous tuning settings, metadata associated with the tuning parameters, tuning ranges associated with the tuning parameters, and importance of the tuning parameters, wherein the raw database to maintain general information about the ISP, the tuning parameters, and the images.” Nishimura, para. [0138])
and processing circuitry, wherein the processing circuitry is configured to, perform a plurality of image processing operations, and perform a tuning operation of setting a parameter of a target image processing operation among the plurality of image processing operations, wherein the processing circuitry, (“as facilitated by detection and observation logic 201, one or more of camera(s) 242 may be used to capture images or videos of a geographic location (whether that be indoors or outdoors) and its associated contents (e.g., furniture, electronic devices, humans, animals, trees, mountains, etc.) and form a set of images or a video stream.” Nishimura, para. [0045]) and (“reference images 327 may be used to automatically tune ISP 311 or its tuning parameters so that its output image 325 is as close to one of reference images 327.” Nishimura, para. [0090])
for performing the tuning operation, is configured to, obtain a captured image by capturing a target reference image, from among the plurality of reference images, associated with the target image processing operation, (“receive images associated with one or more scenes captured by one or more cameras; access tuning parameters associated with functionalities within an image signal processor (ISP) pipeline; generate reference images based on the tuning parameters, wherein a reference image is associated with an image for each functionality within the ISP pipeline;” Nishimura, para. [0136])
update the parameter of the target image processing operation based on parameter information, and perform the target image processing operation based on the updated parameter, and wherein the tuning module is configured to, generating a corrected image by performing the target image processing operation on the captured image based on a parameter of the target image processing operation; Nishimura’s “reference images” are the claimed target reference images (“method to generate reference images based on the tuning parameters” Nishimura, abstract), and Nishimura teaches automatically tuning the ISP pipeline (“automatically tune the ISP pipeline based on selection of one or more of the reference images for one or more of the images for one or more of the functionalities.” Nishimura, abstract) as well as updating tuning parameters in Fig. 4D.
calculate correlations between each of the plurality of corrected images and the target reference image, determine a specific corrected image closest to the target reference image based on the correlations, and send parameter information including a default parameter of the specific corrected image. (“In generating reference images, as facilitated by reference detection and generation logic 213, a reference image is different for each functionality block associated with an ISP/ISP pipeline. For example, nonlinear optimization, as facilitated by optimization logic 203, may be used to find an optimal set of tuning parameters that are then used to achieve output images with the best possible fitness measure against a corresponding reference image.” Nishimura, para. [0053])
However, Nishimura does not teach generate a plurality of corrected images by performing the target image processing operation on the captured image based on each of a plurality of default parameters of the target image processing operation,
Sheikh teaches generate a plurality of corrected images by performing the target image processing operation on the captured image based on each of a plurality of default parameters of the target image processing operation, (“The camera application reconfigures the camera firmware 202 to generate “tuning” parameters 224 that are more suitable for image texture retention. The custom “tuning” parameters 224 include parameters for defect pixel correction, noise filtering, color filter array interpolation (demosaicing), sharpness enhancement, and so forth. These custom tuning parameters 224 retain image information but may produce undesirable image effects in a single image 228.” Sheikh, para. [0038]), (“The video stabilization module 206 can use the first image as a reference frame to detect the points in the second image of the scene that are slightly displaced compared to the position of the point in the first image reference frame.” Sheikh, para. [0043]), and (“The PM align block 505 applies a brightness and color correction algorithm to the blend control parameters 446 to generate adjusted blend control parameters 515. As described above, the video stream 300 includes changes in exposure, focus, and white balance because the 3A lock has an off status the during video capture. In applying the brightness and color correction algorithm, the PM align block 505 analyzes brightness differences and color differences between frames in an overlap region among the N frames of the interpolated sliding window 458, adjusts the brightness or color of N−1 frames in the sliding window to match as closely as possible to the reference frame, and rejects from blending each frame having a brightness/color difference that is too large, as indicated by a determination that the frame has a brightness difference that exceeds a predetermined threshold or has a color difference that exceeds a predetermined threshold by the scene analyzer 444.” Sheikh, para. [0080])
Nishimura and Sheikh are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Nishimura in light of Sheikh’s teaching of generating a plurality of corrected images. One would have been motivated to do so because it would help improve sharpness in the overall output composite frame. (Sheikh, para. [0083])
Regarding claim 35, refer to the explanation of claim 29.
Regarding claim 36, refer to the explanation of claim 30.
Regarding claim 37, Nishimura teaches wherein the processing circuitry is configured to generate the plurality of corrected images by performing the target image processing operation after performing an image processing operation on the captured image prior to the target image processing operation in the order. (“an ISP may be optimized by providing ideal reference images for each functionality of the ISP for facilitating automatic ISP tuning by intermediate reference, as shown in FIG. 4A. Further, to enable this novel technique, in one embodiment, a pre-pipe and a post-pipe of an ISP may be constructed as shown with reference to FIG. 4B. It is contemplated and to be noted that terms like “ISP” and “ISP pipeline” are used interchangeably throughout this document.” Nishimura, para. [0077]), figs. 4A and 4B.
Regarding claim 38, Sheikh teaches wherein the processing circuitry is configured to control an image sensor to output the captured image by capturing the target reference image of the plurality of reference images. (“The ISP 210 receives each raw image 222 captured by the sensor 208 and receives parameters 224 to control the quality of the images output from the ISP 210. Based on the parameters 224, the ISP 210 leaves the images in raw format or improves the quality of each raw image 222” Sheikh, para. [0041])
Regarding claim 39, refer to the explanation of claims 21 and 34.
Regarding claim 40, refer to the explanation of claims 32 and 29.
Claim(s) 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Nishimura and Sheikh as applied above, and further in view of Finnila et al. (US 9219847 B2), hereinafter Finnila.
Regarding claim 24, Nishimura does not teach wherein the processing circuitry is configured to generate a plurality of first corrected images by performing the first image processing operation a plurality of times on the first captured image based on first default parameters having different values.
However, Finnila teaches wherein the processing circuitry is configured to generate a plurality of first corrected images by performing the first image processing operation a plurality of times on the first captured image based on first default parameters having different values. (“The common dynamic camera configuration (CDCC) file 320 structure may be divided into different parameter sets with a unique identification (ID) and version number. The identification (ID) links the content of the parameter set into the logical image signal processors (ISP) section. The parameter sets may comprise for example noise filter, black level correction and lens shading correction. New parameters may be added to the parameter sets but may not be taken away. In this way, backward compatibility may be provided and newer common dynamic camera configuration (CDCC) files 325 may be used in older platforms 330, 340.” Finnila, col. 6, lines 22-32) and (“An image signal processor (ISP) 111,116 may use different algorithms in different phases to process raw image data. Each algorithm utilizes certain parts of the camera characterization data as parameters in order to achieve, partly subjectively and partly objectively, the best possible image quality (IQ).” Finnila, col. 3, lines 29-34)
Nishimura, Sheikh, and Finnila are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Nishimura and Sheikh in light of Finnila’s generating corrected images based on default parameters. One would have been motivated to do so because it may decrease the development time. (Finnila, col. 6, line 51)
Regarding claim 25, Nishimura teaches wherein the tuning module is configured to: generate correlation information between the plurality of first corrected images and the first reference image; determine a specific first corrected image closest to the first reference image based on the correlation information; (“In generating reference images, as facilitated by reference detection and generation logic 213, a reference image is different for each functionality block associated with an ISP/ISP pipeline. For example, nonlinear optimization, as facilitated by optimization logic 203, may be used to find an optimal set of tuning parameters that are then used to achieve output images with the best possible fitness measure against a corresponding reference image.” Nishimura, para. [0053])
However, Nishimura does not teach and set a first default parameter of the specific first corrected image as the first parameter.
Finnila teaches and set a first default parameter of the specific first corrected image as the first parameter. (“The image signal processor (ISP) 240 may tune the current image 260 using the image signal processor (ISP) 240 specific tuning values created by the auto-tuning block 220 to generate a tuned image 270.” Finnila, col. 5, lines 46-49)
Regarding claim 26, Sheikh teaches wherein the processing circuitry is configured to: generate a second corrected image by performing a second image processing operation among the plurality of image processing operations on a second captured image when the first parameter is set; (“the feature detection and tracking module 426 applies tracking to the first frame 302 (IMG0) with reference forward to the third frame 306 (IMG2), thereby causing the frame feature buffer 428 to store a calculation of the tracking relationship T02. The feature detection and tracking module 426 would apply tracking to the second frame 304 (IMG1) with reference forward to the third frame 306 (IMG2)” Sheikh, para. [0064])
However, the combination of Nishimura and Sheikh does not teach and set a second parameter of the second image processing operation based on a correlation information, received from the tuning module, between the second corrected image and a second reference image, and the tuning module is configured to generate the correlation information by comparing the second corrected image and the second reference image.
Finnila teaches and set a second parameter of the second image processing operation based on a correlation information, received from the tuning module, between the second corrected image and a second reference image, and the tuning module is configured to generate the correlation information by comparing the second corrected image and the second reference image. (“In known camera systems 110, the tuning parameters 112 to 115 are image signal processor (ISP) specific and created with image signal processor (ISP) specific tools.” Finnila, col. 4, lines 5-7) and (“Tuned images 360, 370 generated by the image signal processors (ISP) 330, 340 supporting the common dynamic camera configuration (CDCC) may be compared to an image 380 of a new platform 350, where the implementation is under development. The new platform 350 may use a new version of common dynamic camera configuration (CDCC) file 325. The new version of common dynamic camera configuration (CDCC) file 325 may be generated using the earlier common dynamic camera configuration (CDCC) file 320 as a starting point.” Finnila, col. 6, lines 7-16)
Regarding claim 27, Finnila teaches wherein the processing circuitry is configured to generate the second corrected image by being configured to perform the second image processing operation after being configured to perform the first image processing operation on the second captured image based on the set first parameter. (“select a set of second digital camera module characterization parameters from a second common dynamic camera configuration file using the second image statistics; convert the selected set of second digital camera module characterization parameters to second image signal processing parameters; and tune the second image data using the second image signal processing parameters for providing a second tuned image.” Finnila, col. 2, lines 1-10; the “tuned images” are the claimed corrected images)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARDIS SOHRABY whose telephone number is (571) 270-0809. The examiner can normally be reached Monday through Friday, 9 am to 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PARDIS SOHRABY/ Examiner, Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664