Prosecution Insights
Last updated: April 19, 2026
Application No. 18/535,420

EMOTION PREDICTION METHOD BASED ON VIRTUAL FACIAL EXPRESSION IMAGE AUGMENTATION

Non-Final OA: §102, §103, §112

Filed: Dec 11, 2023
Examiner: ALAVI, AMIR
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Korea Electronics Technology Institute
OA Round: 1 (Non-Final)

Grant Probability: 94% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 94%, above average (1083 granted / 1156 resolved; +31.7% vs TC avg)
Interview Lift: +3.6% across resolved cases with interview (a minimal lift of roughly +4%)
Typical Timeline: 2y 5m average prosecution; 23 applications currently pending
Career History: 1179 total applications across all art units

Statute-Specific Performance

§101: 23.0% (-17.0% vs TC avg)
§103: 20.2% (-19.8% vs TC avg)
§102: 19.5% (-20.5% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 1156 resolved cases.

Office Action

Rejections under §102, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: (recognition unit; prediction unit) in claim 10.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4 and 9-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ranjeet et al. (IN 202111012711 A), hereinafter "Ranjeet".

Regarding claim 1, Ranjeet recites: a step of acquiring a user facial image (Please note, Abstract of the invention. As indicated, a face detection process has been performed where, from each input image, a face region is detected based on its landmark points); a step of extracting a facial expression feature from the acquired user facial image (Please note, Abstract of the invention. As indicated, each detected face undergoes analysis of facial textures using deep convolutional neural network approaches such that each face can be represented in a feature vector form in the second part of the implementation); and a step of predicting a user emotion from the extracted facial expression feature, wherein the step of extracting comprises extracting the facial expression feature by using a facial expression recognition network (Please note, Abstract of the invention. As indicated, the computed feature vectors undergo classification tasks such that the system can predict the type of expression on the human facial image), the facial expression recognition network being an artificial intelligence (AI) model that is trained to receive a user facial image and to extract a facial expression feature (Please note, Abstract of the invention. As indicated, each detected face undergoes analysis of facial textures using deep convolutional neural network approaches such that each face can be represented in a feature vector form in the second part of the implementation), wherein the facial expression recognition network is retrained with virtual facial images which are augmented from a facial image that causes a failure in emotion recognition (Please note, Abstract of the invention. As indicated, the performance of the proposed system has been enhanced by employing several trade-off factors such as data augmentation, progressive image-resizing, fine-tuning, and transfer learning.).
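[Editorial aside: the flow the examiner maps here (acquire a facial image, extract an expression feature with a recognition network, predict an emotion) is the standard recognition pipeline. The minimal PyTorch sketch below is a hypothetical illustration of that technique only; the class name, layer sizes, and emotion labels are assumptions, not code from the application or from Ranjeet.]

```python
# Hypothetical sketch of the claimed pipeline: extract a facial-expression
# feature with a CNN ("facial expression recognition network"), then predict
# an emotion label from the feature vector. All names/sizes are illustrative.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

class ExpressionRecognitionNet(nn.Module):
    """Toy stand-in for the recognition network (feature extractor + classifier)."""
    def __init__(self, num_emotions: int = len(EMOTIONS)):
        super().__init__()
        # Feature extractor: maps a face image to an expression feature vector.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classifier: maps the feature vector to emotion logits.
        self.classifier = nn.Linear(32, num_emotions)

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(face))

def predict_emotion(net: ExpressionRecognitionNet, face: torch.Tensor) -> str:
    """Predict a user emotion from the extracted expression feature."""
    with torch.no_grad():
        logits = net(face.unsqueeze(0))
    return EMOTIONS[int(logits.argmax(dim=1))]

net = ExpressionRecognitionNet()
print(predict_emotion(net, torch.rand(3, 64, 64)))  # untrained, so output is arbitrary
```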
Regarding claim 4, Ranjeet recites: generating augmented facial images by using a generation network, the generation network receiving a facial expression feature of a facial image that causes a failure of facial expression recognition, and generating and outputting a virtual facial image (Please note, page 4, second paragraph, "Image Augmentation." As indicated, in machine learning the image augmentation technique has been employed to increase the amount of data by applying some transformation methods on the existing data. The benefits of image augmentation are (i) handling the overtraining situation of the convolutional neural networks, (ii) reducing the over-fitting problems, and (iii) increasing the performance of CNN by fine-tuning the hyper-parameters. The image augmentation techniques generate several samples for each image without changing the visual quality and fidelity of the images.).
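[Editorial aside: the augmentation the examiner cites is the conventional kind, deriving several transformed variants of each training image. A minimal sketch with torchvision follows; the specific transforms are assumptions for illustration, not the ones used by Ranjeet or the application (which generates virtual images with a generation network, as the GAN sketch further below illustrates).]

```python
# Hypothetical augmentation sketch: produce several "virtual" variants of a
# face image that caused a recognition failure, for retraining the network.
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),        # geometric transform
    transforms.RandomRotation(degrees=10),         # small pose perturbation
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # photometric transform
])

def augment_failure_case(face: Image.Image, n: int = 8) -> list[Image.Image]:
    """Return n augmented variants of a hard example for retraining."""
    return [augment(face) for _ in range(n)]
```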
Regarding claim 9, Ranjeet recites: wherein a failure in recognition of a facial expression is grasped through feedback or a response of a user to a service that is provided based on a result of emotion prediction (Please note, claim 1. As indicated, the computed feature vectors undergo classification tasks such that the system can predict the type of expression on the human facial image; the performance of the proposed system has been enhanced by employing several trade-off factors such as data augmentation, progressive image-resizing, fine-tuning and transfer learning.).

Regarding claims 10-11, similar analysis as that presented for claim 1 is applicable.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Ranjeet et al. (IN 202111012711 A), hereinafter "Ranjeet", in view of Huang et al. (USPN 10,504,268), hereinafter "Huang".

Regarding claim 2, Ranjeet recites: a step of acquiring a user facial image (Please note, Abstract of the invention. As indicated, a face detection process has been performed where, from each input image, a face region is detected based on its landmark points); a step of extracting a facial expression feature from the acquired user facial image (Please note, Abstract of the invention. As indicated, each detected face undergoes analysis of facial textures using deep convolutional neural network approaches such that each face can be represented in a feature vector form in the second part of the implementation); and a step of predicting a user emotion from the extracted facial expression feature, wherein the step of extracting comprises extracting the facial expression feature by using a facial expression recognition network (Please note, Abstract of the invention. As indicated, the computed feature vectors undergo classification tasks such that the system can predict the type of expression on the human facial image), the facial expression recognition network being an artificial intelligence (AI) model that is trained to receive a user facial image and to extract a facial expression feature (Please note, Abstract of the invention. As indicated, each detected face undergoes analysis of facial textures using deep convolutional neural network approaches such that each face can be represented in a feature vector form in the second part of the implementation), wherein the facial expression recognition network is retrained with virtual facial images which are augmented from a facial image that causes a failure in emotion recognition (Please note, Abstract of the invention. As indicated, the performance of the proposed system has been enhanced by employing several trade-off factors such as data augmentation, progressive image-resizing, fine-tuning, and transfer learning.).

Ranjeet does not expressly teach: extracting a face style feature from the facial image; extracting a facial expression feature from the facial image; and fusing the extracted face style feature and the extracted facial expression feature.

Huang teaches: extracting a face style feature from the facial image (Please note, column 1, line 55. As indicated, to generate an expressive facial sketch image); extracting a facial expression feature from the facial image (Please note, column 1, lines 52-53. As indicated, an expression feature extractor configured to process the image data of the user to generate a plurality of facial expression descriptor vectors); and fusing the extracted face style feature and the extracted facial expression feature (Please note, column 1, lines 57-58. As indicated, an image generation module configured to use a second conditional DC-GAN model to generate a facial expression image from the expressive facial sketch image).
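[Editorial aside: claim 2's distinguishing step is extracting a face style feature and a facial expression feature separately and fusing them. Concatenation followed by a learned projection is one common fusion choice; the sketch below assumes fixed-length feature vectors and is illustrative only, not the application's actual fusion method.]

```python
# Hypothetical fusion sketch: combine a face style feature and a facial
# expression feature into a single fused representation.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, style_dim: int, expr_dim: int, fused_dim: int):
        super().__init__()
        # Concatenate, then project: one common way to fuse two feature vectors.
        self.proj = nn.Linear(style_dim + expr_dim, fused_dim)

    def forward(self, style: torch.Tensor, expr: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(torch.cat([style, expr], dim=-1)))

fuse = FeatureFusion(style_dim=64, expr_dim=32, fused_dim=128)
fused = fuse(torch.rand(1, 64), torch.rand(1, 32))  # shape: (1, 128)
```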
Ranjeet and Huang are combinable because they are from the same field of endeavor. At the time before the effective filing date, it would have been obvious to a person of ordinary skill in the art to utilize this facial extraction of Huang in Ranjeet's invention. The suggestion/motivation for doing so would have been to produce a more realistic final image. Therefore, it would have been obvious to combine Huang with Ranjeet to obtain the invention as specified in claim 2.

Regarding claim 3, Ranjeet recites: generating face mesh data from the facial image (Please note, page 6, first paragraph. As indicated, establishing a human face shape model and aligning calibrated human face images); and extracting the face style feature from the generated mesh data (Please note, page 6, first paragraph. As indicated, extracting expression characteristics of a human face.).

Regarding claim 5, Huang teaches: a feature into which a facial expression feature and an emotion label are fused, and to generate a virtual facial image (Please note, column 3, lines 10-11. As indicated, a model may be constructed that takes into account behavior of one individual in generating a valid facial expression response in their virtual dyad partner.).

Regarding claim 6, Huang teaches: wherein the generation network constitutes a discriminator configured to discriminate whether the virtual facial image generated by the generation network is a real image or a fake image, and a generative adversarial network (Please note, column 3, lines 9-12. As indicated, a model may be constructed that takes into account behavior of one individual in generating a valid facial expression response in their virtual dyad partner. To this end, FIG. 1 depicts an example system and method 10 for generating an expressive avatar image using a two level optimization of GANs in interviewer-interviewee dyadic interactions.).

Regarding claim 7, Huang teaches: wherein the generation network is trained to generate a virtual facial image that degrades accuracy of discrimination of the discriminator, and wherein the discriminator is trained to enhance accuracy of discrimination on whether the virtual facial image generated by the generation network is a real image or a fake image (Please note, column 5, lines 7-14. As indicated, in the discriminator D, a real or fake (generated) sketch image is depth concatenated with c.sub.t. The combined input goes through two layers of stride-2 convolution with spatial batch normalization followed by leaky ReLU. Again two full connection layers are employed and the output is produced by a Sigmoid function. Similarly, the facial expression feature is concatenated with features in all layers in the discriminator.).

Regarding claim 8, Huang teaches: wherein the generation network is trained to generate a virtual facial image that has a similarity to a real facial image by a defined level or lower (Please note, column 5, lines 12-14. As indicated, similarly, the facial expression feature is concatenated with features in all layers in the discriminator.).
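[Editorial aside: claims 5-7 describe a conditional generative adversarial network: a generator produces virtual facial images from a conditioning feature, a discriminator is trained to tell real images from generated ones, and the generator is trained to degrade the discriminator's accuracy. A minimal training-step sketch follows, with hypothetical shapes and fully connected stand-ins for the convolutional networks Huang describes; conditioning is shown via simple concatenation.]

```python
# Hypothetical conditional-GAN sketch: generator G produces virtual face
# images from noise plus a conditioning feature; discriminator D is trained
# to separate real from generated; G is trained to fool D.
import torch
import torch.nn as nn

IMG, COND, NOISE = 64 * 64, 32, 100  # flattened image, condition, noise sizes

G = nn.Sequential(nn.Linear(NOISE + COND, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG + COND, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real: torch.Tensor, cond: torch.Tensor) -> None:
    n = real.size(0)
    fake = G(torch.cat([torch.randn(n, NOISE), cond], dim=1))

    # Discriminator step: improve accuracy at real-vs-fake discrimination.
    opt_d.zero_grad()
    loss_d = bce(D(torch.cat([real, cond], dim=1)), torch.ones(n, 1)) + \
             bce(D(torch.cat([fake.detach(), cond], dim=1)), torch.zeros(n, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: degrade the discriminator's accuracy (label fakes "real").
    opt_g.zero_grad()
    loss_g = bce(D(torch.cat([fake, cond], dim=1)), torch.ones(n, 1))
    loss_g.backward()
    opt_g.step()

train_step(torch.rand(8, IMG) * 2 - 1, torch.rand(8, COND))  # one toy step
```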
Examiner's Note

The examiner cites particular figures, paragraphs, columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR ALAVI whose telephone number is (571) 272-7386. The examiner can normally be reached on M-F from 8:00-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIR ALAVI/
Primary Examiner, Art Unit 2668
Wednesday, February 11, 2026

Prosecution Timeline

Dec 11, 2023
Application Filed
Feb 12, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597232
SYSTEM FOR LEARNING NEW VISUAL INSPECTION TASKS USING A FEW-SHOT META-LEARNING METHOD
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12573189
PROCESSING METHOD AND PROCESSING DEVICE USING SAME
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12567238
GENERATING A DATA STRUCTURE FOR SPECIFYING VISUAL DATA SETS
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12561950
AI System and Method for Automatic Analog Gauge Reading
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12561774
SYSTEM AND METHOD FOR REAL-TIME TONE-MAPPING
Granted Feb 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 94%
With Interview: 97% (+3.6%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 1156 resolved cases by this examiner. Grant probability derived from career allow rate.
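The headline figures are simple arithmetic over the examiner's record shown above. A quick check, assuming (as the page presents it) that the interview lift adds percentage points to the career allow rate:

```python
# Quick check of the headline figures from the examiner's record above.
granted, resolved = 1083, 1156
allow_rate = granted / resolved             # 0.9369 -> the "94%" grant probability
with_interview = allow_rate + 0.036         # +3.6 point interview lift -> ~97%
print(f"{allow_rate:.1%}, {with_interview:.1%}")  # 93.7%, 97.3%
```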
