DETAILED ACTION
Claims 1-17 are pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 10, 11, 16, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kobayashi (US 2020/0221028 A1).
As per claim 1, Kobayashi discloses an image processing device (fig. 1, digital camera 1) comprising a flicker correction unit that performs a flicker correction process (fig. 1, digital camera 1, signal processing unit 12, flicker processing unit 13), wherein
the flicker correction unit (fig. 1, signal processing unit 12, flicker processing unit 13) includes:
a flicker correction target object detection unit that detects, from an image, a flicker correction target object that is a subject that is likely to cause a flicker (figs. 1 and 7, signal processing unit 12, flicker processing unit 13, flicker noise detection unit 130, para 0024 and 0025); and
an image correction unit that performs a flicker correction process on an image region of the flicker correction target object detected by the flicker correction target object detection unit (figs. 1 and 7, signal processing unit 12, flicker processing unit 13, flicker noise correction unit 133, para 0024), and
the flicker correction target object detection unit performs a flicker correction target object detection process, using a learning model (figs. 1 and 7, signal processing unit 12, flicker processing unit 13, flicker noise detection unit 130, object recognition may be considered a “learning model”, para 0024).
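For orientation only, the functionality mapped above can be sketched in code. The sketch below is not Kobayashi's disclosed implementation; the class name, function name, and the stubbed bounding box are hypothetical stand-ins for any trained detection model that localizes a flicker-prone subject in an image.

```python
import numpy as np

class TrainedDetector:
    """Stand-in for a learning model (e.g., an object-recognition network)
    trained to localize subjects likely to cause flicker, such as LED signs."""
    def predict(self, frame: np.ndarray):
        # A real model would run inference here; this stub simply returns
        # one bounding box (x, y, width, height) for illustration.
        return (16, 16, 32, 32)

def detect_flicker_target(frame: np.ndarray, model: TrainedDetector):
    """Flicker correction target object detection using a learning model."""
    return model.predict(frame)

frame = np.zeros((64, 64), dtype=np.uint8)
box = detect_flicker_target(frame, TrainedDetector())  # -> (x, y, w, h)
```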
As per claim 3, Kobayashi further discloses the image processing device according to claim 1, wherein
the flicker correction target object detection unit receives an input of one image frame forming a moving image, and detects the flicker correction target object from the input image frame (para 0043 and para 0046), and
the image correction unit receives an input of consecutive image frames including the image frame from which the flicker correction target object detection unit has detected the flicker correction target object, and performs a flicker correction process (figs. 1 and 7, signal processing unit 12, flicker processing unit 13, flicker noise correction unit 133, para 0049).
As per claim 10, Kobayashi further discloses the image processing device according to claim 1, wherein
the image correction unit performs a flicker correction process using only an image region of the flicker correction target object detected by the flicker correction target object detection unit, as a correction target region (figs. 1 and 7, signal processing unit 12, flicker processing unit 13, para 0024 and 0025).
As per claim 11, Kobayashi further discloses the image processing device according to claim 1, wherein
the flicker correction target object detection unit (figs. 1 and 7, signal processing unit 12, flicker processing unit 13)
outputs coordinate information indicating an image region of the detected flicker correction target object to the image correction unit, and
the image correction unit identifies the image region of the flicker correction target object on a basis of the coordinate information input from the flicker correction target object detection unit, and performs a flicker correction process in which only the image region of the flicker correction target object is set as a correction target region (figs. 1 and 7, signal processing unit 12, flicker processing unit 13; Kobayashi discloses the ability to track a region (i.e., coordinate information) and correct flicker, para 0024, 0025, and 0058).
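As an illustrative sketch only (not the reference's or the applicant's implementation), a correction step that receives the detected region's coordinate information and restricts the flicker correction to that region across consecutive frames could look like the following; the temporal averaging used here is an assumed, simplified correction.

```python
import numpy as np

def correct_flicker_region(frames, box):
    """Apply flicker correction only inside the region given by 'box'
    (x, y, width, height), using the supplied consecutive frames.
    Averaging the region over time is one simple way to suppress
    luminance fluctuation; other corrections could be substituted."""
    x, y, w, h = box
    stack = np.stack([f[y:y + h, x:x + w].astype(np.float32) for f in frames])
    averaged = stack.mean(axis=0)
    corrected = frames[-1].copy()
    corrected[y:y + h, x:x + w] = averaged.astype(frames[-1].dtype)
    return corrected

frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(4)]
output = correct_flicker_region(frames, box=(16, 16, 32, 32))
```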
As per claim 16, Kobayashi discloses an image processing method implemented in an image processing device, wherein
the image processing device includes a flicker correction unit that performs a flicker correction process,
the flicker correction unit performs
a flicker correction target object detection process to detect, from an image, a flicker correction target object that is a subject that is likely to cause a flicker, and
an image correction process to perform a flicker correction process on an image region of the flicker correction target object detected in the flicker correction target object detection process, and
a flicker correction target object detection process using a learning model is performed in the flicker correction target object detection process (claim limitations have been discussed and rejected, see claim 1 above).
As per claim 17, Kobayashi discloses a program for causing an image processing device to perform image processing, wherein
the image processing device includes a flicker correction unit that performs a flicker correction process,
the program causes the flicker correction unit to perform
a flicker correction target object detection process to detect, from an image, a flicker correction target object that is a subject that is likely to cause a flicker, and
an image correction process to perform a flicker correction process on an image region of the flicker correction target object detected in the flicker correction target object detection process, and
a flicker correction target object detection process using a learning model is performed in the flicker correction target object detection process (claim limitations have been discussed and rejected, see claim 1 above, also see para 0010 and 0077 regarding program(s) for causing a device to execute instructions).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi (US 2020/0221028 A1) in view of Aoki et al. (WO 2020/045686 A1). *For purposes of translation, US 2021/0321054 A1 is relied upon as the English-language equivalent of Aoki.*
As per claim 2, the claim recites the image processing device according to claim 1, wherein
the learning model to be used by the flicker correction target object detection unit includes
a learning model generated by a learning process that uses a large number of images including the flicker correction target object, and a learning data set including identification data of the flicker correction target object included in each of the images.
Kobayashi fails to teach the limitations as recited above in claim 2. However, Aoki discloses an imaging apparatus having a recognition processing unit 12 that uses a deep neural network (DNN), wherein multiple captured images are used to create a learning model that is used for flicker correction purposes (Aoki, figs. 1, 13A and 13B, imaging apparatus 1, recognition processing unit 12, para 0155, 0209-0211).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kobayashi and Aoki, as a whole, by incorporating the learning model(s) taught by Aoki into the digital camera taught by Kobayashi, because doing so would provide a more efficient way of learning from captured image data using neural networks, thus improving image capture.
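For illustration only, a learning process of the kind recited in claim 2 (a model generated from images containing the target object together with identification data for that object) can be sketched generically; the network, labels, and training loop below are hypothetical and are not Aoki's DNN or training procedure.

```python
import torch
from torch import nn

# Hypothetical learning data set: images that include the flicker correction
# target object, paired with identification data (here a binary presence label).
images = torch.rand(32, 1, 32, 32)            # stand-in image tensors
labels = torch.randint(0, 2, (32,)).float()   # stand-in identification data

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(5):                            # tiny illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()
```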
Allowable Subject Matter
Claims 4-9 and 12-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 4, none of the prior art cited, alone or in combination, teaches or provides the motivation to arrive at the following claimed limitations, with emphasis that it is each claim, taken as a whole, including the interrelationships and interconnections between the various claimed elements, that makes it allowable over the prior art of record: the image processing device according to claim 1, wherein the learning model to be used by the flicker correction target object detection unit includes a learning model generated by a learning process that uses consecutive image frames of the moving image including the flicker correction target object, and a learning data set including identification data of the flicker correction target object included in each of the image frames.
Regarding claims 5-9, these claims depend from claim 4 and are allowable for the same reasons stated above.
Regarding claim 12, none of the prior art cited, alone or in combination, teaches or provides the motivation to arrive at the following claimed limitations, with emphasis that it is each claim, taken as a whole, including the interrelationships and interconnections between the various claimed elements, that makes it allowable over the prior art of record: the image processing device according to claim 1, wherein the image correction unit performs an image correction process that is one of the following (a) to (c):
(a) an image correction process in which a moving average of luminance among latest several frames is calculated so that the luminance of an image region of the flicker correction target object becomes constant, and the moving average is set as the luminance of the image region of the flicker correction target object in each image frame,
(b) an image correction process in which an image frame with a predetermined luminance among the latest several frames is selected so that the luminance of the image region of the flicker correction target object becomes constant, and the predetermined luminance is set as the luminance of the image region of the flicker correction target object in each image frame, and
(c) an image correction process in which the image region of the flicker correction target object is replaced with an image prepared in advance.
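Purely to illustrate option (a) of the limitation quoted above (the window size, box coordinates, and rescaling approach below are assumptions of the sketch, not limitations of the claim), a moving-average luminance correction could be sketched as:

```python
import numpy as np

def stabilize_region_luminance(frames, box, window=4):
    """Sketch of option (a): compute a moving average of the target region's
    luminance over the latest 'window' frames and rescale the current frame's
    region so its mean luminance matches that average."""
    x, y, w, h = box
    means = [f[y:y + h, x:x + w].mean() for f in frames[-window:]]
    target_mean = float(np.mean(means))                    # moving-average luminance
    current = frames[-1].astype(np.float32)
    region = current[y:y + h, x:x + w]
    gain = target_mean / max(float(region.mean()), 1e-6)   # avoid division by zero
    current[y:y + h, x:x + w] = np.clip(region * gain, 0, 255)
    return current.astype(frames[-1].dtype)
```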
Regarding claim 13, none of the prior art cited, alone or in combination, teaches or provides the motivation to arrive at the following claimed limitations, with emphasis that it is each claim, taken as a whole, including the interrelationships and interconnections between the various claimed elements, that makes it allowable over the prior art of record: the image processing device according to claim 1, further comprising a determination unit that receives an input of a plurality of consecutive image frames constituting a moving image, and determines whether or not a luminance level of the flicker correction target object included in the plurality of consecutive image frames is not higher than a prescribed threshold.
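For illustration only (the threshold value, box coordinates, and region handling are assumptions of the sketch), the recited determination could be expressed as:

```python
import numpy as np

def luminance_not_higher_than_threshold(frames, box, threshold=64.0):
    """Sketch of the determination unit: check whether the luminance level of
    the target region, over a plurality of consecutive frames, is not higher
    than a prescribed threshold."""
    x, y, w, h = box
    levels = [float(f[y:y + h, x:x + w].mean()) for f in frames]
    return all(level <= threshold for level in levels)

frames = [np.full((64, 64), 40, dtype=np.uint8) for _ in range(4)]
print(luminance_not_higher_than_threshold(frames, box=(16, 16, 32, 32)))  # True
```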
Regarding claims 14 and 15, these claims depend from claim 13 and are allowable for the same reasons stated above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN H MOREHEAD III whose telephone number is (571)270-3845. The examiner can normally be reached M - F 0930-1800 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Twyler Haskins can be reached at (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN H MOREHEAD III/Examiner, Art Unit 2639
/TWYLER L HASKINS/Supervisory Patent Examiner, Art Unit 2639