DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed (i.e., a descriptive title that distinguishes the invention and is not a generic or general description). The new title should take into account any amendments to the claims to best indicate the claimed invention.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 6/20/2023 and 1/30/2024 comply with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 9 recites the limitation "the reference input feature map". There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, and 11-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Choi et al. (EP 4040378).
Regarding claim 1, Choi teaches a processor-implemented method (abstract, ¶ 0040, ¶¶ 0078-0087), the method comprising:
obtaining a plurality of image frames acquired for a scene within a predetermined time (receive burst image set 101; ¶¶ 0040-0045, Fig. 1);
determining loss values respectively corresponding to the plurality of image frames (determine quality of each individual image; ¶ 0052, Fig. 2);
determining a reference frame among the plurality of image frames based on the loss values (select anchor image among plurality of images based on image quality; ¶ 0052); and
generating a final image of the scene based on the reference frame (generating a restored image based on anchor information; ¶ 0056, Fig. 5).
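For context, the method mapped in the claim 1 rejection (per-frame loss values, minimum-loss reference frame selection, final image generated from that reference) can be sketched as follows. This is an illustrative sketch only: the sharpness-based loss function and all names below are hypothetical and are not drawn from Choi or from the claims.

```python
# Illustrative sketch of the claim 1 flow: compute a loss value per burst
# frame, pick the minimum-loss frame as the reference, and base the final
# image on it. Frames are toy 2-D lists of pixel intensities.

def frame_loss(frame):
    """Hypothetical loss: negated gradient-based sharpness, so sharper
    frames receive lower (better) loss values."""
    sharpness = sum(
        abs(row[i + 1] - row[i]) for row in frame for i in range(len(row) - 1)
    )
    return -sharpness

def select_reference(frames):
    """Determine the reference frame as the frame with the minimum loss."""
    losses = [frame_loss(f) for f in frames]
    return frames[losses.index(min(losses))]

# Toy burst: a low-contrast (blurry) frame and a higher-contrast (sharp) one.
blurry = [[10, 11, 12], [10, 11, 12]]
sharp = [[0, 20, 0], [0, 20, 0]]
ref = select_reference([blurry, sharp])  # the sharp frame is selected
```

In this toy example the sharp frame has the larger gradient sum, hence the smaller loss, and is selected as the reference.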
Regarding claim 2, Choi teaches the method of claim 1, wherein the determining of the reference frame comprises determining an image frame having a minimum of the loss values among the plurality of image frames to be the reference frame (select anchor image among plurality of images based on image quality; ¶ 0052).
Regarding claim 11, Choi teaches the method of claim 1, wherein the determining of the reference frame comprises either one or both of: determining, as a plurality of reference frames, a predetermined number of the plurality of image frames having minimum loss values among the loss values; and determining, as the plurality of reference frames, image frames of the plurality of image frames having loss values less than or equal to a threshold among the loss values (select anchor image among plurality of images based on image quality; ¶ 0052).
Regarding claim 12, Choi teaches a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform the method of claim 1 (¶¶ 0078-0087).
Claims 13 and 14 recite limitations similar to those of claims 1 and 2; thus, the analysis presented above for claims 1 and 2 is equally applicable to claims 13 and 14.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-5, 9, 10, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Choi in view of Ren et al., "Best frame selection in a short video," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3212-3221, 2020.
Regarding claim 3, Choi teaches the method of claim 1, but does not explicitly teach wherein the determining of the loss values comprises determining an intermediate feature map and an output feature map corresponding to an image frame among the plurality of image frames by inputting the image frame and an output feature map corresponding to a previous image frame among the plurality of image frames into a first neural network.
However, Ren teaches wherein the determining of the loss values comprises determining an intermediate feature map and an output feature map corresponding to an image frame among the plurality of image frames by inputting the image frame and an output feature map corresponding to a previous image frame among the plurality of image frames into a first neural network (incorporate face information into Siamese CNN by using face heatmap; pages 3216-3217).
Choi and Ren are in the same field of endeavor, namely methods and systems for selecting a best frame from a set of frames. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Choi to include a feature map as taught by Ren. The combination improves the system by providing an automated means for selecting the best frame.
Regarding claim 4, Choi in view of Ren teach the method of claim 3, but Choi does not explicitly teach wherein the determining of the loss values comprises determining a loss value corresponding to the image frame by inputting the intermediate feature map into a second neural network.
However, Ren teaches wherein the determining of the loss values comprises determining a loss value corresponding to the image frame by inputting the intermediate feature map into a second neural network (face heatmap CNN; pages 3216-3217).
The motivation applied in claim 3 is incorporated herein.
Regarding claim 5, Choi in view of Ren teach the method of claim 4, but Choi does not explicitly teach wherein the determining of the reference frame comprises comparing the loss value corresponding to the image frame with a reference loss value determined prior to the determining of the loss value corresponding to the image frame.
However, Ren teaches wherein the determining of the reference frame comprises comparing the loss value corresponding to the image frame with a reference loss value determined prior to the determining of the loss value corresponding to the image frame (PR loss; page 3216).
The motivation applied in claim 3 is incorporated herein.
Regarding claim 9, Choi in view of Ren teach the method of claim 3, wherein the generating of the final image comprises generating the final image by inputting the reference frame and the reference input feature map into the first neural network (generate restored image; ¶ 0077, Fig. 12, Choi).
Regarding claim 10, Choi teaches the method of claim 1, but does not explicitly teach wherein the determining of the loss value comprises: generating an output feature map by inputting an image frame among the plurality of image frames and an output feature map corresponding to a previous image frame among the plurality of image frames into a first neural network; and determining a loss value corresponding to the image frame by inputting the image frame into a second neural network.
However, Ren teaches wherein the determining of the loss value comprises: generating an output feature map by inputting an image frame among the plurality of image frames and an output feature map corresponding to a previous image frame among the plurality of image frames into a first neural network (incorporate face heatmap into network; pages 3216-3217); and determining a loss value corresponding to the image frame by inputting the image frame into a second neural network (Siamese CNN; pages 3215-3216).
The motivation applied in claim 3 is incorporated herein.
Regarding claim 15, Choi teaches a processor-implemented method (abstract), the method comprising:
inputting an image frame into an image restoration neural network comprising a plurality of layers and comprising a recursive structure (image restoration; ¶¶ 0040-0051, ¶¶ 0073-0076), but does not explicitly teach inputting an intermediate feature map output from an intermediate layer of the image restoration neural network into a loss prediction neural network; and determining, from the loss prediction neural network, a loss value indicating a difference between an image restored by the image restoration neural network and a ground truth image.
However, Ren teaches inputting an intermediate feature map output from an intermediate layer of the image restoration neural network into a loss prediction neural network (input face heatmaps into CNN; page 3217); and determining, from the loss prediction neural network, a loss value indicating a difference between an image restored by the image restoration neural network and a ground truth image (PR loss; page 3216).
The motivation applied in claim 3 is incorporated herein.
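For context, the arrangement mapped in the claim 15 rejection (a restoration network whose intermediate feature map feeds a separate loss-prediction network) can be sketched as follows. This is a hypothetical illustration only: the placeholder layer functions and names below are not the networks of Choi, Ren, or the claims.

```python
# Hypothetical sketch of the claim 15 arrangement: the restoration network
# exposes an intermediate feature map, and a separate loss-prediction
# network maps that feature map to a scalar predicted loss.

def restoration_network(frame):
    """Runs placeholder 'layers'; returns (restored_frame, intermediate_map)."""
    intermediate = [x * 0.5 for x in frame]    # intermediate-layer output
    restored = [x + 1.0 for x in intermediate]  # final-layer output
    return restored, intermediate

def loss_prediction_network(feature_map):
    """Predicts a restoration loss from the intermediate feature map alone
    (mean absolute activation, as a stand-in metric)."""
    return sum(abs(x) for x in feature_map) / len(feature_map)

frame = [2.0, -4.0, 6.0]
restored, features = restoration_network(frame)
predicted_loss = loss_prediction_network(features)
```

The key point illustrated is that the predicted loss is computed from the intermediate feature map, not from the restored output, which is what allows loss values to be obtained for frame selection without a ground truth image at inference time.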
Regarding claim 16, Choi in view of Ren teach the method of claim 15, further comprising: obtaining a plurality of image frames, wherein loss values respectively corresponding to the image frames are determined using the loss prediction neural network (determine quality of each individual image; ¶ 0052, Choi).
Regarding claim 17, Choi in view of Ren teach the method of claim 16, wherein an image frame among the plurality of image frames having a minimum value among the loss values is determined to be a reference frame (select anchor image among plurality of images based on image quality; ¶ 0052, Choi), and
an image output from the image restoration neural network, by inputting the reference frame into the image restoration neural network again, is determined to be a restored final image (generating a restored image based on anchor information; ¶ 0056, Fig. 5, Choi).
Regarding claim 18, Choi in view of Ren teach the method of claim 17, wherein the image restoration neural network is configured to output the restored final image using an output feature map corresponding to an image frame preceding the reference frame (¶ 0047 and ¶¶ 0074-0077, Fig. 12, Choi).
Regarding claim 19, Choi in view of Ren teach the method of claim 18, wherein the output feature map is a feature map output from an intermediate layer of the image restoration neural network receiving the image frame preceding the reference frame (¶ 0068, Fig. 11, Choi).
Regarding claim 20, Choi in view of Ren teach the method of claim 19, wherein the intermediate layer of the image restoration neural network that outputs the output feature map is a layer subsequent to the intermediate layer that outputs the intermediate feature map input into the loss prediction neural network (¶¶ 0045-0051, Choi).
Allowable Subject Matter
Claims 6-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zivkovic (US 2019/0045142) teaches selecting a key frame for burst image processing.
Wang et al. (US 2018/0005077) teaches a machine-learned predictive best-of-burst model that is automatically generated from features extracted from training burst sets of images.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENT YIP whose telephone number is (571) 270-5244. The examiner can normally be reached 9:00 AM-5:00 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi M. Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENT YIP/Primary Examiner, Art Unit 2681