Prosecution Insights
Last updated: April 19, 2026
Application No. 18/662,253

SYSTEMS, METHODS, DEVICES AND APPARATUSES FOR DETECTING FACIAL EXPRESSION

Non-Final OA: §102, §103
Filed: May 13, 2024
Examiner: THOMAS, MIA M
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Mindmaze Group SA
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (606 granted / 703 resolved), +24.2% vs TC avg (above average)
Interview Lift: +15.7% for resolved cases with an interview (a strong lift)
Typical Timeline: 2y 12m average prosecution; 12 applications currently pending
Career History: 715 total applications across all art units
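
As a rough sanity check on the career figures above, the arithmetic below reproduces the 86% allow rate and, assuming the "+24.2% vs TC avg" badge is a percentage-point difference (the page does not say), the implied Tech Center average:

```python
# Illustrative arithmetic only; the percentage-point reading of the delta is an assumption.
granted, resolved = 606, 703
allow_rate = granted / resolved              # ~0.862 -> the displayed 86% career allow rate
implied_tc_avg = allow_rate - 0.242          # ~0.620 -> implied Tech Center allow-rate average
print(f"career allow rate {allow_rate:.1%}, implied TC average {implied_tc_avg:.1%}")
```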

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 43.0% (+3.0% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 703 resolved cases.
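
For reference, the per-statute deltas are internally consistent. A minimal check, assuming each "vs TC avg" figure is a percentage-point difference (the page does not define the underlying per-statute metric):

```python
# Illustrative arithmetic only; the percentage-point reading of the deltas is an assumption.
examiner_rate = {"§101": 14.5, "§103": 43.0, "§102": 20.5, "§112": 17.9}
delta_vs_tc   = {"§101": -25.5, "§103": +3.0, "§102": -19.5, "§112": -22.1}

for statute in examiner_rate:
    implied_tc_avg = examiner_rate[statute] - delta_vs_tc[statute]
    print(f"{statute}: examiner {examiner_rate[statute]:.1f}%, implied TC average {implied_tc_avg:.1f}%")
# Under this reading, all four statutes imply the same ~40.0% Tech Center baseline,
# consistent with a single "Tech Center average estimate" reference line.
```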

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application discloses and claims only subject matter disclosed in prior application number 18/317,058, filed 05/13/2023, and names the inventor or at least one joint inventor named in the prior application. Accordingly, this application has been examined as a continuation.

Response to Preliminary Amendment

This Office Action is responsive to communications filed on 05/13/2024. Claims 1-20 remain pending in the instant application. Claims 21-28 were canceled. Claim 1 is the sole independent claim. An Office Action on the merits follows here below.

Specification

The abstract of the disclosure is objected to because the patent abstract should be a concise statement that does not recite legal phraseology and should not merely recite an instant claim. Correction is required. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by Aimone (US 20210200313 A1).

Regarding Claim 1: Aimone discloses an avatar rendering system for rendering a facial expression of a user (Refer to para [054]; "the method comprises measuring, using input of electrodes on the user's face or forehead, muscle activity associated with a facial expression of emotion; combining the user's brainwaves with bio-signal information about the facial expression; and producing a change in state of the user's avatar in said virtual or mixed environment based at least in part on the combined user's brainwaves and bio-signal information.") comprising:

an apparatus (Refer to para [216]; "With reference to FIG. 40, the bio-signal sensor 3500 can be included on an apparatus 4000, for example on a support portion 4002 such as strap 111 of wearable computing device 100.") comprising a plurality of EMG (electromyography) electrodes configured for contact with a face of said user (Refer to para [176]; "In some embodiments, the conductive threads are used in electrodes measuring impedance-tolerant bio-signals, such as EMG and EOG bio-signals, or where an operational amplifier is placed near the electrode, such as within one millimeter.
In some embodiments, wiring of bio-sensors include conductive threads providing electrical conductivity between electrode regions and other electrical components.") and a computational device (Refer to para [149]; "In particular, the one or more computing devices may maintain or have access to one or more databases maintaining bio-signal processing data, instructions, algorithms, associations, or any other information which may be used or leveraged in the processing of the bio-signal measurements obtained by the wearable computing device.") configured with instructions operating thereon to cause the computational device to:

process a plurality of EMG signals received from said EMG electrodes to form processed EMG signals (Refer to para [171]; "In some embodiments, the circumaural pad includes ear-adjacent bio-signal sensors. In some embodiments, the ear-mounted portion includes in-ear electrodes. In-ear electrodes provide a similar signal to scalp electrodes, but may have increased signal-to-noise ratios as there may be less interference from EMG signals. In some embodiments, the at least one ear-mounted portion is detachable from the wearable computing device. In some embodiments, the ear-mounted portion includes a connector for establishing a wired connection that complements a receiver on a securement strap portion of the wearable computing device.")

classify a facial expression according to said processed EMG using a classifier (Refer to para [202 and 203]; "Having reference to FIGS. 23 and 24, in some embodiments, the face pad 120 includes pressure and/or strain sensors to measure face movement. The sensors augment other sensors, such as facial EMG, to determine the facial expression the user is exhibiting. In some embodiments, the pressure and/or strain sensors are in the form of segmented face cushions 190. Facial movement 191 causes differential pressure and compression of the segmented face cushions 190. Piezoelectric or printed strain sensors 192 on the surface of cushion 190 for measuring strain. Facial bio-signal sensors such as electrodes 130 or sensors 192 may further yield facial expression information (which may be difficult to obtain using cameras in a VR headset). Muscles specifically around the eyes play an important role in conveying emotional state. Smiles, for example, if accompanied by engagement of the muscles at the corners of the eyes are interpreted as true smiles, in contrast to those that are put on voluntarily. EOG signals provide information about eye movements. Basic gaze direction and dynamic movement can be estimated in real-time and can thus be used as a substitute for optical methods of eye tracking in many applications.")

blend a classified facial expression with a basic avatar shape to form a blended avatar (Refer to para [258]; "In some embodiments, electrodes on the face or forehead may measure muscle activity associated with facial expression of emotions (for example: frown, surprise, puzzlement, sadness, happiness) in which the user's brainwaves are combined with bio-signal information about emotional facial expression to produce a change in state of a user's avatar in said VR environment.")

and render said blended avatar (Refer to para [260 and 261]; "For example, a system designed to use brain response information within a VR environment, which determines a user's likelihood of loss of engagement or boredom, and adapts the environment continuously to maximize engagement.
In some embodiments, a profile of the user including the user's brain response and engagement may be determined within the user's first few minutes within a VR environment, and the environment is adapted to a threshold of interactivity to maintain engagement without continuously monitoring the user's brain response.").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 6-8, 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Aimone (made of record) in combination with Tzvieli et al. (US 20160360970 A1).

Regarding Claim 6: Aimone discloses all the claimed elements as rejected above. Aimone does not expressly disclose classifying EMG signals using at least one of (1) a discriminant analysis classifier; (2) a Riemannian geometry classifier; (3) Naive Bayes classifier, (4) a k-nearest neighbor classifier, (5) a RBF (radial basis function) classifier, (6) a Bagging classifier, (7) a SVM (support vector machine) classifier, (8) a node classifier (NC), (9) NCS (neural classifier system), (10) SCRLDA (Shrunken Centroid Regularized Linear Discriminate and Analysis), or (11) a Random Forest classifier.

Tzvieli teaches "systems for analyzing facial cues based on temperature measurements receive series of thermal images composed of pixels that represent temperature (T) measurements.
Measuring the temperature is required in order to run a tracker and perform image registration, which compensate for the movements of the user in relation to the thermal camera and brings the images into precise alignment for analysis and comparison." More specifically, Tzvieli teaches a classifier [that] classifies said processed EMG signals of the user using at least one of (1) a discriminant analysis classifier; (2) a Riemannian geometry classifier; (3) Naive Bayes classifier, (4) a k-nearest neighbor classifier, (5) a RBF (radial basis function) classifier, (6) a Bagging classifier, (7) a SVM (support vector machine) classifier, (8) a node classifier (NC), (9) NCS (neural classifier system), (10) SCRLDA (Shrunken Centroid Regularized Linear Discriminate and Analysis), or (11) a Random Forest classifier.

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Aimone by adding a processor for classifying facial expressions as taught by Tzvieli. The suggestion/motivation for combining the teachings of Aimone and Tzvieli would have been in order to more effectively "… analyze facial cues based on temperature measurements receive series of thermal images composed of pixels that represent temperature (T) measurements. Measuring the temperature is required in order to run a tracker and perform image registration, which compensate for the movements of the user in relation to the thermal camera and brings the images into precise alignment for analysis and comparison." (at para [028], Tzvieli). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Aimone and Tzvieli in order to obtain the specified claimed elements of Claim 6. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding Claim 7: Tzvieli teaches said discriminant analysis classifier is one of (1) LDA (linear discriminant analysis), (2) QDA (quadratic discriminant analysis), or (3) sQDA (Refer to para [283]; "In some embodiments, holistic methods developed for whole face applications can be used for portions of faces and/or oriented images of portions of faces too. One example of such an approach involves the feature extraction techniques used for Eigenfaces, which uses Principal Component Analysis (PCA). Another example of such an approach are the feature extraction techniques used for Fisherfaces, which are built on Linear Discriminant Analysis (LDA).").

Regarding Claim 8: Tzvieli teaches said classifier is one of (1) Riemannian geometry, (2) QDA and (3) sQDA (Refer to para [338]; "Wang et al. describe a recognition technique for microexpressions that is based on Discriminant Tensor Subspace Analysis (DTSA) and Extreme Learning Machine (ELM). 2D face images are first dimensionally reduced using DTSA to generate discriminant features, then the reduced features are fed into the ELM classifier to analytically learn an optimal model for recognition.").
Regarding Claim 11: Tzvieli teaches a computational device to receive data associated with at least one facial expression of the user before classifying the facial expression as a neutral expression or a non-neutral expression (Refer to para [047 and 363]; "The normally expected exhale streams are determined according to a normal human who breathes normally, when having a relaxed neutral face, and when the neck, jaw, and facial muscles are not stretched nor contracted.").

Regarding Claim 12: Tzvieli teaches at least one facial expression is a neutral expression (Refer to para [363]; "Emotional responses, such as labels returned by an emotional response predictor, may be represented by various types of values in embodiments described herein. In one embodiment, emotions are represented using discrete categories. For example, the categories may include three emotional states: negatively excited, positively excited, and neutral.").

Regarding Claim 13: Tzvieli teaches at least one facial expression is a non-neutral expression (Refer to para [363]; "Emotional responses, such as labels returned by an emotional response predictor, may be represented by various types of values in embodiments described herein. In one embodiment, emotions are represented using discrete categories. For example, the categories may include three emotional states: negatively excited, positively excited, and neutral.").

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Aimone (made of record) in combination with Heck et al. (US 20170060256 A1).

Regarding Claim 16: Aimone discloses all the claimed elements as rejected above. Aimone does not expressly disclose unipolar electrodes. Heck teaches a system and method for controlling an electronic device via a facial gesture controller comprising unipolar electrodes (Refer to para [040]; "The electrodes 14 may be arrayed along the length of the body 12 in any number or location. For example, some embodiments may utilize 4-12 electrodes of unipolar or differential configuration spread along the length of the body 12, e.g., so as to be aligned across the forehead, down the side of the face, under the eyes, and/or over the nose of a user.").

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Aimone by adding a configuration of electrodes that are unipolar as rejected above by Heck. The suggestion/motivation for combining the teachings of Aimone and Heck would have been in order to more efficiently "measure the electrical activity of muscles during rest, slight contraction and forceful contraction. As such, EMG is usually associated as being a function of time described in terms of strength (amplitude) and frequency." (at para [006], Heck). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Aimone and Heck in order to obtain the specified claimed elements of Claim 16. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to the claim in question.

Allowable Subject Matter

Claims 2-5, 9, 10, 14, 15, 17-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The prior art either singly or in combination does not teach, disclose or suggest at least the following claim limitation(s): "…a classifier training system for training said classifier, said training system configured to receive a plurality of sets of processed EMG signals from a plurality of training users, wherein: each set including a plurality of groups of processed EMG signals from each training user, and each group of processed EMG signals corresponding to a classified facial expression of said training user; said training system additionally configured to: determine a pattern of variance for each of said groups of processed EMG signals across said plurality of training users corresponding to each classified facial expression, and compare said processed EMG signals of the user to said patterns of variance to adjust said classification of the facial expression of the user."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US-10515474 B2
US-11105696 B2
US-11000669 B2
US-10943100 B2
US-10521014 B2

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIA M THOMAS whose telephone number is (571)270-1583. The examiner can normally be reached M-Th 8:30am-4:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen (Steve) Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MIA M. THOMAS
Primary Examiner
Art Unit 2665

/MIA M THOMAS/
Primary Examiner, Art Unit 2665
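
For orientation only, here is a minimal sketch of the claim 1 pipeline that the §102 rejection maps onto Aimone: process EMG signals from facial electrodes, classify a facial expression, blend the classified expression with a basic avatar shape, and render the blended avatar. All names, features, and data below are hypothetical stand-ins, not the applicant's or Aimone's implementation; LDA is used only because discriminant analysis is among the classifier options recited in claim 6.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis  # one claim-6 option

EXPRESSIONS = ["neutral", "smile", "frown", "surprise"]

def process_emg(raw: np.ndarray) -> np.ndarray:
    """Toy feature extraction: remove per-channel mean, rectify, take per-channel RMS."""
    rectified = np.abs(raw - raw.mean(axis=0))
    return np.sqrt((rectified ** 2).mean(axis=0))        # one feature per electrode

def blend(expression: str, basic_shape: np.ndarray, blendshapes: dict, weight: float = 1.0) -> np.ndarray:
    """Add the classified expression's blendshape offset onto the basic avatar shape."""
    return basic_shape + weight * blendshapes[expression]

# --- hypothetical usage with stand-in data ---
clf = LinearDiscriminantAnalysis()
train_X = np.random.randn(200, 8)                        # 8 facial EMG electrodes
train_y = np.random.choice(EXPRESSIONS, size=200)
clf.fit(train_X, train_y)                                # train the expression classifier

raw_window = np.random.randn(256, 8)                     # 256 samples x 8 electrodes
features = process_emg(raw_window)                       # processed EMG signals
expression = clf.predict(features.reshape(1, -1))[0]     # classified facial expression

basic_shape = np.zeros((468, 3))                         # stand-in neutral face mesh
blendshapes = {e: np.random.randn(468, 3) * 0.01 for e in EXPRESSIONS}
blended_avatar = blend(expression, basic_shape, blendshapes)
# render(blended_avatar)  # rendering back-end is out of scope for this sketch
```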
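The allowable subject matter turns on per-expression "patterns of variance" learned across multiple training users, against which a new user's processed EMG signals are compared to adjust the classification. One hedged reading of that idea follows; the mean/standard-deviation patterns and the z-score comparison are assumptions, not the claimed method.

```python
import numpy as np

def variance_patterns(training_sets: dict) -> dict:
    """training_sets: {expression: array of shape (n_training_users, n_features)}.
    Returns per-expression (mean, std) computed across the training users."""
    return {expr: (X.mean(axis=0), X.std(axis=0) + 1e-9) for expr, X in training_sets.items()}

def adjust_classification(features: np.ndarray, scores: dict, patterns: dict) -> str:
    """Re-rank classifier scores by how well the user's features fit each
    expression's variance pattern (smaller mean |z| = better fit)."""
    adjusted = {}
    for expr, score in scores.items():
        mu, sd = patterns[expr]
        z = np.abs((features - mu) / sd).mean()
        adjusted[expr] = score / (1.0 + z)     # penalise expressions the features don't fit
    return max(adjusted, key=adjusted.get)

# hypothetical usage: 4 expressions x 30 training users x 8 features of stand-in data
rng = np.random.default_rng(0)
training_sets = {e: rng.normal(size=(30, 8)) for e in ["neutral", "smile", "frown", "surprise"]}
patterns = variance_patterns(training_sets)
scores = {"neutral": 0.30, "smile": 0.40, "frown": 0.20, "surprise": 0.10}  # classifier outputs
print(adjust_classification(rng.normal(size=8), scores, patterns))
```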

Prosecution Timeline

May 13, 2024
Application Filed
Apr 03, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602938
SYSTEM AND METHOD FOR ITEM IDENTIFICATION USING CONTAINER-BASED CLASSIFICATION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597154
IMAGE ANALYSIS METHOD AND CAMERA APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12590529
BOREHOLE IMAGE INTERPRETATION AND ANALYSIS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586220
SYSTEM AND METHOD FOR CAMERA RE-CALIBRATION BASED ON AN UPDATED HOMOGRAPHY
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579220
Visual Attribute Expansion via Multiple Machine Learning Models
Granted Mar 17, 2026 (2y 5m to grant)
Compare these cases to see what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+15.7% lift)
Median Time to Grant: 2y 12m
PTA Risk: Low
Based on 703 resolved cases by this examiner. Grant probability derived from career allow rate.
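
For context, the arithmetic behind these projections appears straightforward, with one ambiguity: the page does not say whether the +15.7% interview lift is additive (percentage points) or multiplicative. A hedged check of both readings:

```python
# Rough arithmetic only. Assumptions: the 86% grant probability is simply the career
# allow rate (606/703), and the interview lift may be additive or multiplicative.
allow_rate = 606 / 703                           # ~0.862 -> displayed as 86%
additive = allow_rate + 0.157                    # ~1.019 (over 100%)
multiplicative = allow_rate * 1.157              # ~0.997
print(f"base {allow_rate:.1%}, additive {additive:.1%}, multiplicative {multiplicative:.1%}")
# Both readings land at or above 99%, so the displayed "99% With Interview" is
# presumably a rounded or capped figure.
```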
