DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/15/2026 was filed after the mailing date of the Non-final rejection on 11/05/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
The objection to the claims has been withdrawn in light of the Applicant’s amendments.
The rejection under 35 USC 112(b) has been withdrawn in light of the Applicant’s amendments.
Applicant's arguments, in the Remarks filed on 03/20/2026, with regard to claims 1 and 11 have been fully considered but are not persuasive.
In response to the Applicant's arguments (pages 9-10), the Examiner respectfully disagrees.
Caballero discloses a system and method for enhancing a section of lower-quality visual data using a hierarchical algorithm, among super-resolution algorithms, based on learned (pre-trained) neural network models (Col 1 lines 19-22, Col 8 line 47 through Col 9 line 21 and Col 15 lines 54-60). The models can be developed for each scene (Col 15 lines 60-64), and the hierarchical model is used interchangeably with the hierarchical algorithm (Col 13 lines 39-43). Caballero clearly discloses that a target video is split into a sequence of images (frames) grouped into scenes having common features (Figures 16-17 and 25-26; Col 14 line 32 through Col 15 line 27, Col 20 line 64 through Col 21 line 10 and Col 33 lines 34-48); and that each scene (a sequence of images) is mapped to a model/algorithm chosen from a library of pre-trained models in order to enhance the lower-quality visual data to the high quality of the original visual data (Figures 16-17 and 25-26, Col 17 lines 32-41, Col 18 lines 60-67 and Col 33 line 62 through Col 34 line 15).
Therefore, Caballero’s teaching still meets the limitations of “dividing a target video into… a same scene; determining, for each group of images, a matched video enhancement algorithm… a pre-trained model” in the independent claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 recites the limitations of "the predicting the quality score" in line 2, and “the global features” in line 3. There is insufficient antecedent basis for these limitations in the claim.
Claim 5 recites the limitations of "the selecting the algorithm" in line 2, and “the quality score" in line 3. There is insufficient antecedent basis for these limitations in the claim.
The remaining dependent claims are rejected on the same basis by virtue of their dependency on a rejected claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2 and 8-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Caballero et al. (US 10,701,394).
Regarding claim 1, Caballero discloses a video enhancement method, comprising:
dividing a target video into a plurality of groups of images, the images in a same group belonging to a same scene (Figures 16-17, 19-20 and 25-26; Col 14 line 32 through Col 15 line 27, Col 20 line 64 through Col 21 line 10 and Col 33 lines 34-48 for splitting a target video into a sequence of images which are grouped into scenes having common features);
determining, for each group of images, a matched video enhancement algorithm in a specified set of video enhancement algorithms using a pre-trained model (Col 1 lines 19-22, Col 8 line 47 through Col 9 line 21 and Col 15 lines 54-60 for enhancing a section of lower-quality visual data using a hierarchical algorithm, among super-resolution algorithms, based on learned (pre-trained) neural network models; Col 15 lines 60-64 for models that can be developed for each scene; Col 13 lines 39-43 for the hierarchical model being used interchangeably with the hierarchical algorithm; and Figures 16-17 and 25-26, Col 17 lines 32-41, Col 18 lines 60-67 and Col 33 line 62 through Col 34 line 15 for each scene (a sequence of images) being mapped to a model/algorithm chosen from a library of pre-trained models in order to enhance the lower-quality visual data to the high quality of the original visual data);
performing video enhancement processing on the each group of images using the video enhancement algorithm (Figures 16-17, 19-20 and 25-26; Col 37 lines 8-24 for performing image enhancement on the frames of a scene using the selected image enhancement model); and
sequentially splicing video enhancement processing results of all groups of images to obtain video enhancement data of the target video (Figures 16-17, 19-20 and 25-26; Col 37 lines 48-53 and Col 38 lines 7-15 for reconstructing all enhanced frames of the scenes to obtain video enhancement data of the target video).
Regarding claim 2, Caballero discloses the method as discussed in the rejection of claim 1. Caballero further discloses wherein the determining, for each group of images, the matched video enhancement algorithm comprises:
extracting, by the model, image features from a currently input group of images using a deep residual network; generating inter-frame difference information based on the image features output by the deep residual network; performing channel fusion processing on the inter-frame difference information and the image features; extracting global features based on a result of the channel fusion processing; and determining the matched video enhancement algorithm based on a quality score corresponding to the global features (Figures 7-9 and 28-30 and their corresponding description sections).
Regarding claim 8, Caballero discloses the method as discussed in the rejection of claim 1. Caballero further discloses wherein the dividing a target video into a plurality of groups of images comprises: identifying scenes in the target video using a scene boundary detection algorithm; and extracting, for each of the scenes, video frames from a frame sequence corresponding to the each of the scenes using a sliding window, and taking the video frames extracted each time as a group of images, wherein k frames are extracted each time, k is a specified number of frames of a group of images, and based on a number of frames remaining to be extracted in a scene being less than k, a group of images is obtained after supplementing to k frames (Figures 7-8 and 29-31).
Regarding claim 9, Caballero discloses the method as discussed in the rejection of claim 1. Caballero further discloses pre-training the model using specified sample data, wherein a method for constructing the sample data comprises:
performing, for each group of sample images, video enhancement processing on the each group of sample images using each algorithm in a specified set of video enhancement algorithms respectively; and assessing a quality score of a video enhancement processing result of each of the video enhancement algorithms using a specified image quality assessment algorithm or a manual scoring mode, and setting an average value of the quality scores of the video enhancement algorithms as a quality score label of the each group of sample images in corresponding algorithms (Col 33 line 23 through Col 34 line 15, Col 37 lines 7-60, Col 38 lines 21-65).
Regarding claim 10, Caballero discloses the method as discussed in the rejection of claim 9. Caballero further discloses wherein a number of the image quality assessment algorithms is greater than 2, and a number of people participating in the manual scoring is greater than 2 (Col 1 lines 19-22, Col 8 lines 47-49 and Col 44 lines 51-60).
Regarding claims 11-12, all limitations of claims 11-12 are analyzed and rejected on the same grounds as claims 1-2, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Caballero et al. (US 10,701,394) in view of Heo et al. (US 2019/0057270).
Regarding claim 3, Caballero discloses the method as discussed in the rejection of claim 2. Caballero further discloses wherein the determining, for each group of images, the matched video enhancement algorithm comprises: selecting an algorithm from the specified set of video enhancement algorithms as a video enhancement algorithm matched with the currently input group of images according to a strategy of preferentially selecting a high-score algorithm based on the quality score (Col 31 lines 21-28, Col 32 lines 58-65, Col 38 lines 29-40 and Col 39 line 1 through Col 10 line 18).
Caballero is silent about predicting the quality score of each algorithm for performing video enhancement processing on the currently input group of images, based on the global features.
Heo discloses predicting the quality score of each algorithm in a specified set of video enhancement algorithms for performing video enhancement processing on the currently input group of images, based on the global features, and selecting an algorithm from the specified set of video enhancement algorithms as a video enhancement algorithm matched with the currently input group of images according to a strategy of preferentially selecting a high-score algorithm based on the quality score (¶ [0068]-[0102]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Caballero system with the teaching of Heo, so as to provide an alternative way of processing data and selecting the most appropriate algorithm for the video enhancement process as a matter of engineering choice.
Regarding claim 4, Caballero in view of Heo discloses the method as discussed in the rejection of claim 3. The combined system further discloses wherein the predicting the quality score of each algorithm comprises: predicting, by a multilayer perceptron (MLP) based on the global features, the quality score of each algorithm (Heo’s Figures 5 and 8).
Regarding claims 13-14, all limitations of claims 13-14 are analyzed and rejected on the same grounds as claims 3-4, respectively.
Allowable Subject Matter
Claims 5-7 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Claim 15 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIGI L DUBASKY whose telephone number is (571)270-5686. The examiner can normally be reached M-F 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Flynn can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GIGI L DUBASKY/Primary Examiner, Art Unit 2421