Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This communication is in response to the application filed 05/13/2024.
Information Disclosure Statement
2. The information disclosure statement (IDS) submitted is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-10 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over BOE (CN109255390A).
Regarding claims 1 and 12, BOE teaches an image quality evaluation method and device, comprising (see abstract):
acquiring an input image (see abstract); and
inputting the input image into an image quality evaluation network to obtain an image quality evaluation result (see abstract),
wherein the image quality evaluation network is configured to extract a first image feature from the input image (reads on obtaining the first feature image of the training image, see abstract), perform a shift operation on the first image feature to acquire one or more second image features (reads on multiple image translations carried out on the first feature image, see abstract); and
wherein a size of each of the second image features is the same as a size of the first image feature (reads on the row and column numbers of pixels in the first feature image being correspondingly identical, see abstract), and regions with identical values of features exist in different positions between the first image feature and the second image features (reads on shifting pixels while maintaining corresponding identical pixel values and the correlation between the first feature image and each displacement image, see abstract).
For the claimed feature of “determine the image quality evaluation result by combining the first image feature and the acquired second image features” as recited in claim 1, BOE does not specifically teach “image quality evaluation”; however, BOE teaches supplying the first image and multiple displacement images to a network discriminator for image-related evaluation and distinction (see abstract).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply BOE’s disclosed feature-image translation technique within an image quality evaluation network to determine an evaluation result.
The claimed “processor” and “memory” as recited in claim 12 read on BOE’s processor and memory as disclosed in BOE’s abstract.
Claim 2 recites “wherein determining the image quality evaluation result by combining the first image feature and the acquired second image features comprises:
determining one or more image attribute evaluation results by combining the first image feature and the acquired second image features; and
determining the image quality evaluation result according to the determined image attribute evaluation results”. Although BOE does not use the phrase “image attribute evaluation result”, it explicitly discloses extracting and using multiple feature representations derived from a first feature image, performing evaluation-related processing based on those representations, and producing an output result from the network based on the combined feature information (see abstract). Therefore, BOE teaches intermediate evaluation outputs derived from combined feature information.
Claim 3 recites “wherein the image quality evaluation network comprises N cascaded predetermined modules, wherein N > 2; for an i-th predetermined module, 1 ≤ i ≤ N, the predetermined modules are configured to perform a shift operation on an input image feature to acquire one or more third image features, and further extract image features by combining the input image feature and the acquired third image features and output the image features, wherein a size of each of the third image features is the same as a size of the input image feature, and regions with identical values of features exist in different positions between the input image feature and the third image features;
performing the shift operation on the first image feature to acquire the one or more second image features, and determining the image quality evaluation result by combining the first image feature and the acquired second image features comprises:
inputting the first image feature into a first predetermined module in the N cascaded predetermined modules to obtain an image feature output from an N-th predetermined module; and
determining the image quality evaluation result according to the image feature output from the N-th predetermined module”. BOE’s features are addressed as in the rejection of claim 1 above. BOE further teaches obtaining a first feature image from an input image and performing multiple image translations on the first feature image to obtain multiple displacement images, wherein pixel row and column numbers correspond identically between the first feature image and each displacement image. Such translations constitute a shift operation on an input image feature to acquire additional image features having the same size as the input image feature, with regions having identical values existing at different positions due to the translation (see abstract). BOE further teaches supplying multiple feature images derived from the first feature image to a network for further feature extraction and evaluation, which reasonably corresponds to cascaded processing of feature images, where outputs of earlier processing stages are used as inputs to subsequent stages. Thus, it would have been obvious to implement BOE’s disclosed feature-image translation and repeated feature processing using N cascaded predetermined modules, where N > 2, in order to progressively extract image features and determine an image quality evaluation result from the output of a later module, as a predictable application of BOE’s disclosed processing framework to improve evaluation accuracy.
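As an illustrative sketch only (not part of the record, and with function names and the feature-combination operator chosen here as assumptions rather than drawn from the claims or BOE), the cascaded-module structure recited in claim 3 can be pictured as each module shifting its input feature and combining the input with the shifted copy before passing the result onward:

```python
import numpy as np

def predetermined_module(feature):
    """One of N cascaded modules per claim 3: shift the input feature
    to obtain a same-size third image feature, then combine the input
    and shifted features into a new output feature. An element-wise
    mean is used as the combination; the claim does not fix the operator."""
    shifted = np.roll(feature, 1, axis=0)  # shift rows down by one
    shifted[0, :] = 0.0                    # zero-fill the vacated row
    return (feature + shifted) / 2.0

def cascade(feature, n=3):
    """Feed the output of each module into the next, n >= 2 times."""
    for _ in range(n):
        feature = predetermined_module(feature)
    return feature

f = np.ones((3, 3))
out = cascade(f, 3)  # same-size feature after three cascaded modules
```

The sketch only shows the data flow (earlier outputs become later inputs); a real network would interpose learned feature extraction between modules.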
Claim 4 recites “wherein performing the shift operation on the first image feature to acquire the one or more second image features comprises:
performing a shift operation on features in a predetermined region in the first image feature to obtain one or more shift results; and
filling a predetermined value for a default part in each of the one or more shift results to acquire the one or more second image features each having the same size as the first image feature”. BOE discloses image translation on a first feature image and shifting pixels based on row and column correspondence to obtain displacement images having the same size as the first feature image (see abstract). It would have been obvious that performing such translation on features in a predetermined region and filling a predetermined value for parts of the translated result to maintain image size is a predictable implementation of BOE’s disclosed translation operation.
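Purely as an illustrative sketch (not part of the record), the shift-with-fill operation recited in claim 4 can be expressed on a 2-D feature array, assuming zero as the predetermined fill value for the vacated ("default") region:

```python
import numpy as np

def shift_feature(feature, dy, dx, fill_value=0.0):
    """Shift a 2-D feature map by (dy, dx) pixels, filling the vacated
    region with a predetermined value so the result keeps the same size
    as the input feature map, with identical values relocated."""
    shifted = np.full_like(feature, fill_value)
    h, w = feature.shape
    # Source and destination slices for the overlapping region.
    src_y = slice(max(0, -dy), min(h, h - dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    shifted[dst_y, dst_x] = feature[src_y, src_x]
    return shifted

f = np.arange(9, dtype=float).reshape(3, 3)
g = shift_feature(f, 1, 0)  # shift down by one row
# The overlapping region of g holds values identical to f, relocated
# one row lower; the vacated top row is filled with the fill value.
```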
Claim 5 recites “wherein determining the image quality evaluation result by combining the first image feature and the acquired second image features comprises:
further extracting space feature information of the input image by combining the first image feature and the acquired second image features, and
determining the image quality evaluation result according to the extracted space feature information”. BOE teaches combining a first feature image and multiple displacement images and using the combined feature information to generate an evaluation-related output (see abstract). Further extracting information based on the positional relationships of the combined feature images and determining an image quality evaluation result according to that information would have been obvious within BOE’s disclosed processing.
Regarding claim 6, BOE teaches wherein the first image feature comprises at least one of: an image detail feature (see abstract), an image noise feature, or an image global feature.
Claim 7 recites “wherein extracting the first image feature from the input image comprises:
acquiring predetermined information from the input image, and further extracting one or more image features from the predetermined information; and
adding the extracted image features to the first image feature;
wherein the predetermined information includes at least one of: an original image, image detail information, image noise information, image luminance information, image saturation information, or image hue information”. BOE teaches acquiring an input image and extracting a first feature image therefrom, which comprises acquiring predetermined information from the input image and extracting one or more image features from the predetermined information and adding the extracted image features to the first image feature (see abstract).
Claim 8 recites “wherein acquiring the predetermined information from the input image comprises:
performing edge filtering on the input image to extract the image detail information; and/or
performing guided filtering on the input image to obtain a denoised blurred image, and extracting the image noise information by combining the input image and the blurred image”. BOE teaches preprocessing an input image to extract image feature information for generating a feature image supplied to a network, which encompasses performing filtering operations on the input image to extract image detail information (see abstract).
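As an illustrative sketch only (not part of the record), the preprocessing recited in claim 8 can be pictured as an edge filter producing detail information and a denoising filter whose residual against the input yields noise information; a 3x3 box blur is assumed here as a stand-in for guided filtering, and the function name is hypothetical:

```python
import numpy as np

def extract_detail_and_noise(img):
    """Sketch of claim 8's preprocessing: an edge filter yields image
    detail information; noise information is the residual between the
    input and a denoised (blurred) image, i.e. the input and blurred
    image combined by subtraction."""
    # Horizontal Sobel kernel for edge filtering (detail information).
    kernel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    pad = np.pad(img, 1, mode="edge")
    detail = np.zeros_like(img, dtype=float)
    blurred = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 3, x:x + 3]
            detail[y, x] = (window * kernel).sum()
            blurred[y, x] = window.mean()  # box blur as the denoiser
    noise = img - blurred  # combine input and blurred image
    return detail, noise

flat = np.full((4, 4), 5.0)
detail_map, noise_map = extract_detail_and_noise(flat)
# A constant image has no edges and no noise residual.
```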
Claim 9 recites “wherein further extracting one or more image feature from the predetermined information comprises:
extracting one or more image features from different predetermined information by using different feature extraction networks”. BOE teaches extracting feature information from an input image and supplying such information for network processing (see abstract). Extracting image features from different predetermined information using different feature extraction networks would have been an obvious implementation choice to process different image information, representing a predictable variation of BOE’s disclosed feature extraction.
Claim 10 recites “wherein a training set of the image quality evaluation network is generated by:
acquiring an unlabeled image sample;
for the unlabeled image sample, calculating one or more corresponding image attribute values; and
determining a corresponding image quality evaluation label according to the calculation result”. BOE teaches processing image samples to extract feature information and generate evaluation related outputs (see abstract). Acquiring an unlabeled image sample, calculating attribute values therefrom, and determining a corresponding image quality evaluation label according to the calculation result would have been an obvious manner of generating a training set using BOE’s disclosed image processing and evaluation technique.
Regarding claim 13, BOE teaches a non-transitory computer readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the method according to claim 1 (see the computer-readable storage medium as disclosed in the abstract).
Conclusion
4. BOE TECHNOLOGY (CN109255390A) discloses translation of input images, not shifting a feature map extracted from an image, nor generating a second image feature having identical values in different spatial positions (see [0057]-[0111]).
Prior art BEIJING KINGSOFT CLOUD NETWORK TECHNOLOGY (CN112950581) teaches extracting object features, feeding the features into an IQA model, and performing image quality evaluation, but does not disclose performing a shift operation on the first image feature or generating a second image feature with identical feature values relocated to different positions (see [0028]-[0095]).
Prior art GUO et al. (Pub. No.: 2023/0306719 A1) teaches rearranging spatial blocks, which modifies the structural relationship; Guo generates a second feature map by segmenting the first feature map into blocks and recombining the blocks based on relative spatial location (see [0008]-[0009]), not a shift operation producing identical values in different positions.
Prior art CHO et al. (Pub. No.: 2022/0366588 A1) teaches generating a second feature map through convolution, which necessarily alters feature values (see abstract, [0159]-[0160] and [0167]-[0174]), and therefore does not constitute a shift operation preserving identical values.
5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rasha S. AL-Aubaidi whose telephone number is (571) 272-7481. The examiner can normally be reached on Monday-Friday from 8:30 am to 5:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached on (571) 272-7488.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RASHA S AL AUBAIDI/ Primary Examiner, Art Unit 2693