DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 23, 2026 has been entered.
Response to Amendment
Applicant’s amendment filed February 23, 2026 has been entered and made of record. Claims 1, 6, 10, and 11 are amended. New claim 20 is added. Claims 1-20 are pending.
Applicant’s remarks in view of the newly presented amendments have been considered but are not found to be persuasive for at least the following reasons:
Applicant has amended the independent claims to include the newly added limitation:
identifying the low-quality person image from a plurality of images by using a discrimination artificial intelligence model trained to distinguish between a first high-quality person image and a second high-quality person image generated by an image quality enhancement artificial intelligence model from a low-quality person image;
Applicant argues that the reference to Xiang does not disclose a discrimination model. Examiner disagrees. Xiang discloses a discrimination machine learning model (Fig. 5, 572) and teaches:
Paragraph [0031]: “…In some examples, the first machine learning model may be trained based on classifying, using a discrimination machine learning model, an enhanced training image to produce a first classification. For example, the discrimination machine learning model may be a machine learning model that is trained to predict whether an input image (e.g., enhanced training image or ground truth image) is generated (e.g., enhanced) or original (e.g., not enhanced…In some examples, the discrimination machine learning model may compare an enhanced training image and a ground truth image and predict whether the enhanced training image and/or the ground truth image are enhanced.”
Xiang discloses that the discrimination model distinguishes between an enhanced image and an original ground truth image. The rejection is accordingly maintained.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 10-13, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2023/0377095 to Xiang et al. (hereinafter "Xiang").
With regard to claim 1, Xiang discloses a method of generating, by an electronic device, a high-quality person image from a low-quality person image by an artificial intelligence model, the high-quality person image having a higher image quality than the low-quality person image (paragraphs [0009]-[0013], The object of the invention is to identify a person or face and to increase the quality of the image region of the person), the method comprising:
identifying the low-quality person image from a plurality of images by using a discrimination artificial intelligence model trained to distinguish between a first high-quality person image and a second high-quality person image generated by an image quality enhancement artificial intelligence model from a low-quality person image; (paragraph [0017], a region of interest, such as a face, is identified through object detection/pattern recognition. See Fig. 5, Discrimination Machine Learning Model 572, and paragraph [0031]:
“…In some examples, the first machine learning model may be trained based on classifying, using a discrimination machine learning model, an enhanced training image to produce a first classification. For example, the discrimination machine learning model may be a machine learning model that is trained to predict whether an input image (e.g., enhanced training image or ground truth image) is generated (e.g., enhanced) or original (e.g., not enhanced…In some examples, the discrimination machine learning model may compare an enhanced training image and a ground truth image and predict whether the enhanced training image and/or the ground truth image are enhanced.”
Xiang discloses that the discrimination model distinguishes between an enhanced image and an original ground truth image.);
applying the low-quality person image as input to the artificial intelligence model (paragraph [0019], and Fig. 1, step 104, The identified face is input into a machine learning model that enhances that face image); and
obtaining, as output from the artificial intelligence model, the high-quality person image (paragraph [0019], and Fig. 1 step 104 and step 108, The facial image region is input into a machine learning model that enhances the face and outputs an enhanced version of the image with the enhanced face),
wherein the artificial intelligence model is configured to:
recognize a first face by performing face identification and face recognition on the low-quality person image, using a face recognition artificial intelligence model (paragraphs [0017] and [0020]-[0025], the facial recognition is performed by a pattern recognition using a machine learning model),
obtain a face feature of the first face based on a result of the face identification and the face recognition (paragraphs [0019]-[0025] and [0028], The machine learning model is trained based on identifying facial features and feature vectors),
input (i) the face feature and (ii) the low-quality person image into an image quality enhancement artificial intelligence model to obtain the high-quality person image by performing image processing for modifying an area corresponding to the face feature (paragraphs [0019]-[0025], [0028] and [0049]-[0050], The identified face is input into the enhancement machine learning model along with the identified facial features and feature vectors and the enhancement model uses the facial feature information in the enhancement processing of increasing the quality of the image. See Fig. 2, top branch), and
output the high-quality person image with the modified area corresponding to the face feature (See Fig. 2, Enhanced image 220. The enhanced facial image area is output in the enhanced output image. Specific areas are determined to be enhanced using the facial feature trained enhancement model. See also paragraph [0087] where specific facial details are enhanced by the enhancement model).
With regard to claim 2, Xiang discloses the method of claim 1, wherein the artificial intelligence model is further configured to:
update the image quality enhancement artificial intelligence model by learning, as training data, a plurality of high-quality person images and a plurality of low-quality person images respectively converted from the plurality of high-quality person images (paragraphs [0019]-[0025], The enhancement models are trained on training images, by identifying the facial features in the enhanced image and then comparing the located feature with the original low-quality/resolution known location of the feature based on the ground truth image), and
wherein the high-quality person image is obtained using the updated image quality enhancement artificial intelligence model (paragraphs [0019]-[0025], the enhancement model is updated as it is trained to minimize loss and the updated/trained model is used to enhance the facial image).
With regard to claim 3, Xiang discloses the method of claim 2, wherein the plurality of low-quality person images are respectively converted from the plurality of high-quality person images by applying image degradation to each of the plurality of high-quality person images, the training data being applied during learning as a plurality of pairs of low-quality person images and respective high-quality person images (paragraph [0062], image pairs are generated from an original high resolution image with some degradation, downsampling, compression, etc.).
With regard to claim 10, the discussion of claim 1 applies. Xiang discloses that the method of claim 1 is performed by a computer program (Figs. 3 and 4, and paragraphs [0015] and [0053]).
With regard to claim 11, the discussions of claims 1 and 10 apply. Xiang discloses a computer for performing the method of claim 1 (Figs. 3 and 4, and paragraphs [0015] and [0053]).
With regard to claims 12-13, the discussions of claims 2-3 apply.
With regard to claim 20, Xiang discloses the method of claim 1, wherein the artificial intelligence model is further configured to train the image quality enhancement artificial intelligence model to modify the face feature (paragraphs [0020]-[0024] and [0087], Facial features are modified in the form of landmark adjustment as the system is trained according to determined losses), and
wherein the face feature is at least one of: an outline of the face, a shape of the face, a size of the face, or a position of a landmark of the face (paragraphs [0020]-[0024] and [0087], Facial features are modified in the form of landmark adjustment as the system is trained according to determined losses. Facial details or feature examples, such as a beard, are also enhanced, which relates to both the shape and outline of the face in addition to landmark position).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-9 and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0377095 to Xiang et al. in view of the Applicant-cited translation (IDS filed 9/5/24) of Chinese Patent CN 110 532 871 to Huawei Tech, hereinafter referred to as Huawei.
With regard to claim 4, Xiang discloses the method of claim 1, and discloses that the machine learning model is trained based on a facial identity (paragraphs [0025]-[0027], Identity features of a person are used and the model is trained according to the recognized facial characteristic features of a person). However, Xiang does not teach the specifics of claim 4 regarding a plurality of person images classified by person.
Huawei discloses a similar system in which a machine learning model identifies faces in low-resolution images and enhances them to a higher resolution, similar to the system of Xiang. Huawei further discloses wherein the artificial intelligence model is further configured to:
update the face recognition artificial intelligence model and the image quality enhancement artificial intelligence model by performing personalized learning, based on a plurality of person images classified by person (paragraph [0011], The neural network is used to train the model and improve the model; see also paragraph [0188], The neural network is trained for specific people),
wherein the recognizing the first face further comprises identifying a first person corresponding to the first face using the updated face recognition artificial intelligence model (paragraphs [0232] - [0236]: "Face feature extraction can be performed by a face feature extraction algorithm. Face feature extraction algorithms include recognition algorithms based on facial feature points, recognition algorithms based on the entire face image, and recognition algorithms based on templates.”; "In step S904, feature matching is performed. Load the stored facial feature vector group from the local storage, and match the facial feature vector during the call with the facial feature vector group. If the facial feature vector group includes a vector, the similarity between the vector and the facial feature vector during the call is within a preset range, for example, the distance is less than 1, the matching is considered successful, and step S905 is performed.” In other words, a super-resolution model is trained using images of a specific person and the trained model is linked to the feature vector of the face of the specific person. When a face is detected in an image during a call the features of the detected face are compared to the feature vectors of the trained models and a matching feature vector is determined. The latter is in fact a face recognition process, as the feature vector of a model is the feature vector of the corresponding face), and
wherein the high-quality person image is obtained using the image quality enhancement artificial intelligence model updated with respect to the first person (paragraph [0237]: “According to the one-to-one correspondence between the vectors in the facial feature vector group and the super-resolution model, the super- resolution model corresponding to the vector is determined. Use this super-resolution model to process the IF to obtain a high-resolution face image.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the identity-based facial image learning model of Xiang with the identity-based facial learning model of Huawei in order to recognize the identity of a person based on multiple facial images of the identified person.
With regard to claim 5, Huawei discloses wherein the artificial intelligence model is further configured to:
lighten the face recognition artificial intelligence model and the image quality enhancement artificial intelligence model updated by the personalized learning (paragraphs [0152]-[0153], the Image Signal Processor or ISP is used to optimize the image by adjusting exposure, color temperature and brightness which are all considered ways to lighten the image),
wherein the identifying the first person is performed using the lightened face recognition artificial intelligence model (paragraphs [0232]-[0236]: "Face feature extraction can be performed by a face feature extraction algorithm. Face feature extraction algorithms include recognition algorithms based on facial feature points, recognition algorithms based on the entire face image, and recognition algorithms based on templates.”; "In step S904, feature matching is performed. Load the stored facial feature vector group from the local storage, and match the facial feature vector during the call with the facial feature vector group. If the facial feature vector group includes a vector, the similarity between the vector and the facial feature vector during the call is within a preset range, for example, the distance is less than 1, the matching is considered successful, and step S905 is performed.” In other words, a super-resolution model is trained using images of a specific person and the trained model is linked to the feature vector of the face of the specific person. When a face is detected in an image during a call the features of the detected face are compared to the feature vectors of the trained models and a matching feature vector is determined. The latter is in fact a face recognition process, as the feature vector of a model is the feature vector of the corresponding face), and
wherein the high-quality person image is obtained using the lightened image quality enhancement artificial intelligence model (paragraph [0237]: “According to the one-to-one correspondence between the vectors in the facial feature vector group and the super-resolution model, the super- resolution model corresponding to the vector is determined. Use this super-resolution model to process the IF to obtain a high-resolution face image.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Huawei to lighten the facial images of Xiang in order to better perform facial recognition.
With regard to claim 6, Huawei discloses wherein the artificial intelligence model is further configured to:
obtain the face feature of the first person by learning a plurality of first person images about the first person as training data (paragraph [0008], the face is identified using facial features), and
wherein the identifying the first person is based on the face feature of the first person (paragraphs [0232] - [0236]: "Face feature extraction can be performed by a face feature extraction algorithm. Face feature extraction algorithms include recognition algorithms based on facial feature points, recognition algorithms based on the entire face image, and recognition algorithms based on templates.”; "In step S904, feature matching is performed. Load the stored facial feature vector group from the local storage, and match the facial feature vector during the call with the facial feature vector group. If the facial feature vector group includes a vector, the similarity between the vector and the facial feature vector during the call is within a preset range, for example, the distance is less than 1, the matching is considered successful, and step S905 is performed.” In other words, a super-resolution model is trained using images of a specific person and the trained model is linked to the feature vector of the face of the specific person. When a face is detected in an image during a call the features of the detected face are compared to the feature vectors of the trained models and a matching feature vector is determined. The latter is in fact a face recognition process, as the feature vector of a model is the feature vector of the corresponding face. See also paragraph [0237]: “According to the one-to-one correspondence between the vectors in the facial feature vector group and the super-resolution model, the super- resolution model corresponding to the vector is determined. Use this super-resolution model to process the IF to obtain a high-resolution face image.”).
Both Xiang and Huawei teach identifying the face based on identified facial features; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the facial features to train the face recognition models in enhancing the low-quality images into high-quality images.
With regard to claim 7, Huawei discloses further comprising:
receiving, from a user, an input for selecting an image quality enhancement area of the first person (paragraphs [0072], [0102] and [0107], Huawei discloses that user manual input is enabled for selecting an image and that the facial region is the most important region of the image for processing); and
applying information about the image quality enhancement area selected by the user as input to the artificial intelligence model (paragraphs [0072]-[0073]), wherein the artificial intelligence model is further configured to:
update the image quality enhancement artificial intelligence model by additionally learning training data about the image quality enhancement area of the first person (paragraphs [0232] - [0236]: "Face feature extraction can be performed by a face feature extraction algorithm. Face feature extraction algorithms include recognition algorithms based on facial feature points, recognition algorithms based on the entire face image, and recognition algorithms based on templates.”; "In step S904, feature matching is performed. Load the stored facial feature vector group from the local storage, and match the facial feature vector during the call with the facial feature vector group. If the facial feature vector group includes a vector, the similarity between the vector and the facial feature vector during the call is within a preset range, for example, the distance is less than 1, the matching is considered successful, and step S905 is performed.” In other words, a super-resolution model is trained using images of a specific person and the trained model is linked to the feature vector of the face of the specific person. When a face is detected in an image during a call the features of the detected face are compared to the feature vectors of the trained models and a matching feature vector is determined. The latter is in fact a face recognition process, as the feature vector of a model is the feature vector of the corresponding face), and
wherein the high-quality person image is obtained using the updated image quality enhancement artificial intelligence model and comprises an enhanced image quality of an area corresponding to the image quality enhancement area selected by the user (paragraph [0237]: “According to the one-to-one correspondence between the vectors in the facial feature vector group and the super-resolution model, the super- resolution model corresponding to the vector is determined. Use this super-resolution model to process the IF to obtain a high-resolution face image.”).
Both Xiang and Huawei teach identifying the face based on identified facial features; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the facial features to train the face recognition models in enhancing the low-quality images into high-quality images.
With regard to claim 8, Huawei discloses further comprising:
receiving, from a user, an input for selecting an image quality enhancement direction for the first person (paragraphs [0072], [0102] and [0107], Huawei discloses that user manual input is enabled for selecting an image and that the facial region is the most important region of the image for processing); and
applying information about the image quality enhancement direction selected by the user to the artificial intelligence model (paragraph [0011], The neural network is used to train the model and improve the model; see also paragraph [0188], The neural network is trained for specific people), wherein the artificial intelligence model is further configured to:
update the image quality enhancement artificial intelligence model by additionally learning training data about the image quality enhancement direction (paragraphs [0232] - [0236]: "Face feature extraction can be performed by a face feature extraction algorithm. Face feature extraction algorithms include recognition algorithms based on facial feature points, recognition algorithms based on the entire face image, and recognition algorithms based on templates.”; "In step S904, feature matching is performed. Load the stored facial feature vector group from the local storage, and match the facial feature vector during the call with the facial feature vector group. If the facial feature vector group includes a vector, the similarity between the vector and the facial feature vector during the call is within a preset range, for example, the distance is less than 1, the matching is considered successful, and step S905 is performed.” In other words, a super-resolution model is trained using images of a specific person and the trained model is linked to the feature vector of the face of the specific person. When a face is detected in an image during a call the features of the detected face are compared to the feature vectors of the trained models and a matching feature vector is determined. The latter is in fact a face recognition process, as the feature vector of a model is the feature vector of the corresponding face), and
wherein the modifying the area corresponding to the face feature is performed according to the image quality enhancement direction (paragraph [0237]: “According to the one-to-one correspondence between the vectors in the facial feature vector group and the super-resolution model, the super- resolution model corresponding to the vector is determined. Use this super-resolution model to process the IF to obtain a high-resolution face image.”).
Both Xiang and Huawei teach identifying the face based on identified facial features; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the facial features to train the face recognition models in enhancing the low-quality images into high-quality images.
With regard to claim 9, Huawei discloses the method of claim 8, further comprising:
receiving, from the user, an input for designating a second person (paragraphs [0008], [0011] and [0187], Each face is identified using facial features and for each specific person a super-resolution model can be established);
obtaining data about the second person (paragraphs [0008], [0011] and [0187], Each face is identified using facial features and for each specific person a super-resolution model can be established); and
applying the data about the second person as training data to the artificial intelligence model, wherein the artificial intelligence model is further configured to:
obtain a face feature of the second person, corresponding to the face feature of the first person, from the data about the second person, and
wherein the modifying the area corresponding to the face feature of the first person is further based on the face feature of the second person (paragraph [0237]: “According to the one-to-one correspondence between the vectors in the facial feature vector group and the super-resolution model, the super- resolution model corresponding to the vector is determined. Use this super-resolution model to process the IF to obtain a high-resolution face image.”).
Both Xiang and Huawei teach identifying the face based on identified facial features. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the facial features to train the face recognition models in enhancing the low-quality images into high-quality images.
With regard to claims 14-19, the discussions of claims 4-9 apply respectively.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER whose telephone number is (571)272-7427. The examiner can normally be reached 9AM-5PM Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN VILLECCO can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESLEY J TUCKER/Primary Examiner, Art Unit 2661