Prosecution Insights
Last updated: April 19, 2026
Application No. 18/148,804

SYSTEM AND METHOD FOR FACILITATING MENTAL HEALTH ASSESSMENT AND ENHANCING MENTAL HEALTH VIA FACIAL RECOGNITION

Status: Final Rejection (§103)
Filed: Dec 30, 2022
Examiner: NGUYEN, HIEP VAN
Art Unit: 3686
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Beme Health Inc.
OA Round: 3 (Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 4y 2m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 55% (564 granted / 1025 resolved; +3.0% vs TC avg)
Interview Lift: +29.3% in resolved cases with interview (strong lift)
Avg Prosecution: 4y 2m (typical timeline)
Currently Pending: 47
Total Applications: 1072 (across all art units)
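The headline rates above reduce to simple arithmetic on the stated counts. A minimal sketch, assuming the "with interview" figure is the base allow rate plus the lift in percentage points, rounded for display:

```python
# Figures stated above: 564 grants out of 1025 resolved cases.
granted, resolved = 564, 1025
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # Career allow rate: 55.0%

# The interview lift is reported in percentage points over the base rate,
# so the with-interview figure is presumably the sum (shown above as 84%):
interview_lift = 0.293
print(f"With interview: {allow_rate + interview_lift:.1%}")   # With interview: 84.3%
```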

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1025 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 have been examined. Claims 14 and 20 have been amended.

Response to Arguments

Applicant's arguments, see Remarks filed 12/01/2025, with respect to the rejection of claims 1, 14, and 20 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Flickinger in view of Spenciner and Kosloski.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Flickinger et al. (US 2021/0401338 A1, hereinafter Flickinger) in view of Spenciner et al. (US 2020/0381117 A1, hereinafter Spenciner), and further in view of Kosloski et al. (US 2014/0067730 A1, hereinafter Kosloski).

With respect to claim 1, Flickinger teaches a system, comprising: a memory that stores instructions; and a processor that executes the instructions to configure the processor to:

receive, via a device, content associated with at least one physical attribute, at least one expression, or a combination thereof, of a user, wherein the content associated with the at least one physical attribute, the at least one expression, or a combination thereof, is obtained via at least one sensor ('338; Abstract: Flickinger describes estimating emotional states, moods, and affects of an individual and providing feedback to the individual or others. Systems and methods that provide real-time detection and monitoring of physical aspects of an individual and/or aspects of the individual's activity, and means of estimating that person's emotional state or affect and changes thereto, are also disclosed. Real-time feedback to the individual about the person's emotional state, change, or potential change is provided to the user, helping the user cope, adjust, or appropriately act on their emotions; see also Para 0043);

receive, from the user, at least one self-assessed emotional state currently being experienced by the user ('338; Para 0015: The estimation of an emotional state or changes to an emotional state is performed using a processor which relies on one or more methods of estimation or algorithms consisting of one or more of: rules-based engine; database or lookup table; self-learning adaptive system; neural network; or artificial intelligence. In some embodiments, a person, by means of a software interface, may modify the methods of estimation or algorithm's inputs, weightings, baselines, or other parameters in order to more finely tune said methods and algorithms, and may provide other feedback, including subjective feedback of said person's own emotional state, in order to improve the accuracy of said estimations or improve an adaptive learning or artificial intelligence system used for estimating emotional states. Estimation of emotions experienced by a person, the intensity of such emotions, and whether that emotion is increasing or decreasing may be accomplished by reference to a pre-populated database (e.g., a look-up table) that maps datums/signals, and their absolute and/or relative values, to certain emotional states or meta-states. Alternatively, a self-learning or adaptive algorithm, with or without user input, feedback, and fine tuning, may be used to estimate emotional states or changes based on the measured signals/datums);

extract, by utilizing at least one artificial intelligence model, at least one feature from the content ('338; Para 0015, as quoted above).

Spenciner discloses:

determine, based on the content and by utilizing at least one artificial intelligence model, at least one predicted emotional state of the user, wherein the at least one predicted emotional state of the user is determined based on comparing the at least one feature extracted from the content to training information utilized to train the at least one artificial intelligence model ('117; Para 0045: As shown in FIG. 2A, the trained emotional state prediction models 150 that were generated as a result of performing the supervised machine learning process can receive observation data and process the inputs to output an emotional state associated with a medical professional for whom the observation data was collected. For example, the trained emotional state prediction models 150, that were produced in the supervised or unsupervised machine learning process, can subsequently be included in an artificial intelligence system or an application configured to receive observation data; see also Para 0082);

selecting, based on the determining, the at least one predicted emotional state or the at least one self-assessed emotional state as the at least one actual emotional state ('730; Para 0013; '117; Para 0042: As shown in FIG. 2A, the model training system 130 includes a feature selector 135. The feature selector 135 operates in the supervised or unsupervised machine learning process to receive the observation data and to select a subset of features from the inputs which will be provided as training inputs to a machine learning algorithm);

identify, by utilizing the at least one artificial intelligence model, content to deliver to the user to enhance or maintain the at least one self-assessed emotional state, the at least one predicted emotional state, or a combination thereof ('117; Para 0082: The improved GUI can also provide enhanced visualizations for responding to alerts or notifications for anomalous emotional states or failed conformance with adapted clinical procedures); and

provide, to the device, access to the content to facilitate enhancement or maintenance of the at least one self-assessed emotional state, the at least one predicted emotional state, or a combination thereof ('117; Para 0019: the disclosure can be employed for determining an emotional state and generating alternate, enhanced, adapted, or modified workflows based on observation data associated with non-medical personnel and procedures).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Flickinger to include device assessment generation based on a determined emotional state, as taught by Spenciner, to facilitate enhancement or maintenance of the at least one self-assessed emotional state.

Kosloski teaches determine whether the at least one predicted emotional state or the at least one self-assessed emotional state has a higher probability of being at least one actual emotional state of the user ('730; Para 0014: the CDA can use probabilistic inference to estimate a probability that the user is talking about a particular person. If the user cannot remember the name of the person, the CDA can possibly determine that name provided that the probability/confidence is high enough. If no name is assigned a sufficiently high probability/confidence level; Para 0031: The agent receives feedback ("reward") for predicting each action in a form of a user's response specifying appropriateness of the current prediction; Para 0032: Based on the feedback received, the system can modify its model of the user (for example, his/her cognitive or emotional state variables) and the environment so that, in the future, the system can more accurately predict actions).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Flickinger/Spenciner to include human memory enhancement using machine learning, as taught by Kosloski, to facilitate enhancement or maintenance of the at least one self-assessed emotional state.

Claims 14 and 20 are rejected for the same reasons as claim 1.
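The selection step recited in claim 1, with the probability comparison Kosloski is cited for, can be illustrated with a minimal sketch; the class and function names are hypothetical, not from any cited reference:

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    label: str          # e.g. "calm", "anxious"
    confidence: float   # estimated probability of being the actual state, 0..1

def select_actual_state(predicted: EmotionalState,
                        self_assessed: EmotionalState) -> EmotionalState:
    """Select whichever candidate has the higher probability of being the
    user's actual emotional state; ties defer to the self-assessment."""
    return predicted if predicted.confidence > self_assessed.confidence else self_assessed

state = select_actual_state(EmotionalState("anxious", 0.82),
                            EmotionalState("calm", 0.60))
print(state.label)  # anxious
```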
With respect to claim 2, the combined art teaches the system of claim 1, wherein the processor is further configured to determine, by utilizing the at least one artificial intelligence model, whether a deviation between the at least one self-assessed emotional state and the at least one predicted emotional state of the user exists ('338; Para 0053).

With respect to claim 3, the combined art teaches the system of claim 2, wherein the processor is further configured to train the artificial intelligence model to facilitate a prediction for a future emotional state of the user, another user, or a combination thereof, based on the deviation if the deviation between the at least one self-assessed emotional state and the at least one predicted emotional state of the user is determined to exist ('338; Para 0053).

With respect to claim 4, the combined art teaches the system of claim 1, wherein the processor is further configured to determine the at least one predicted emotional state of the user based on identifying a correlation of the at least one feature with a pattern in the training information corresponding to at least one known emotional state ('338; Paras 0033, 0041).

With respect to claim 5, the combined art teaches the system of claim 1, wherein the processor is further configured to compute, by utilizing the at least one artificial intelligence model, a confidence score for the at least one predicted emotional state and a confidence score for the at least one self-assessed emotional state, and to adjust content delivery based on at least one of the confidence scores ('730; Paras 0053-0056, 0081).

With respect to claim 6, the combined art teaches the system of claim 5, wherein the processor is further configured to combine at least one first characteristic of the at least one self-assessed emotional state with at least one second characteristic of the at least one predicted emotional state to define the at least one actual emotional state of the user when the confidence scores satisfy a predetermined relationship ('730; Paras 0053-0056).

With respect to claim 7, the combined art teaches the system of claim 1, wherein the processor is further configured to determine a score value relating to a mental health of the user based on analyzing a plurality of signals associated with a mood, a mental state, or a combination thereof, associated with the user, interaction data associated with the user, or a combination thereof ('338; Paras 0007, 0027, 0062).

With respect to claim 8, the combined art teaches the system of claim 7, wherein the processor is further configured to determine a deviation between the score value relating to the mental health of the user and the at least one predicted emotional state, the at least one self-assessed emotional state, at least one actual emotional state, or a combination thereof ('338; Para 0053).

With respect to claim 9, the combined art teaches the system of claim 1, wherein the processor is further configured to receive additional information associated with the user, wherein the additional information comprises a plurality of markers associated with the user, wherein the plurality of markers comprise location information, demographic information, psychographic information, life event information, emotional action information, movement information, health information, audio information, virtual reality information, augmented reality information, time-related information, physical activity information, mental activity information, diet information, experience information, sociocultural information, political information, relationship information, or a combination thereof ('338; Paras 0012, 0026, 0062, 0077).

With respect to claim 10, the combined art teaches the system of claim 1, wherein the content associated with the at least one physical attribute, the at least one expression, or a combination thereof obtained via the at least one sensor comprises image content, video content, audio content, haptic content, vibration content, blood pressure data, sweat data, heart rate data, breath data, breathing data, glucose data, gesture data, motion data, speed data, orientation data, or a combination thereof ('338; Paras 0043-0048).

With respect to claim 11, the combined art teaches the system of claim 10, wherein the video content indicates at least one facial expression, at least one facial movement, or a combination thereof, and wherein the audio content indicates a rate of speech, a tone of the user, a pitch of the user, a volume of speech of the user, or a combination thereof ('338; Paras 0043-0047).

With respect to claim 12, the combined art teaches the system of claim 1, wherein the processor is further configured to combine at least one first characteristic of the at least one self-assessed emotional state with at least one second characteristic of the at least one predicted emotional state to define at least one actual emotional state of the user ('338; Para 0037).

With respect to claim 13, the combined art teaches the system of claim 1, wherein the at least one self-assessed emotional state identifies an emotional state of the user as expressed in the content ('338; Paras 0043-0046).

With respect to claim 15, the combined art teaches the method of claim 14, further comprising prompting the user to identify the at least one self-assessed emotional state within the content obtained via the at least one sensor associated with the device ('117; Para 0021).

With respect to claim 16, the combined art teaches the method of claim 14, further comprising determining a type of content to deliver to the user to enhance or maintain the at least one self-assessed emotional state, the at least one predicted emotional state, or a combination thereof ('117; Para 0021).

With respect to claim 17, the combined art teaches the method of claim 14, further comprising providing an option, via the application, to enable the user to provide information reflecting on the self-assessed emotional state, the at least one predicted emotional state, an enhancement of the predicted or self-assessed emotional state, or a combination thereof ('338; Para 0053).

With respect to claim 18, the combined art teaches the method of claim 14, further comprising generating a recommendation for an activity for the user to perform to facilitate enhancement or maintenance of the at least one self-assessed emotional state, the at least one predicted emotional state, or a combination thereof ('338; Para 0074).
With respect to claim 19, the combined art teaches the method of claim 14, further comprising requesting the user to generate, for the application, baseline content and identify at least one actual emotional state of the user as represented by the baseline content ('338; Paras 0015, 0037).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HIEP VAN NGUYEN, whose telephone number is (571) 270-5211. The examiner can normally be reached Monday through Friday between 8:00 AM and 5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason B. Dunham, can be reached at (571) 272-8109.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HIEP V NGUYEN/
Primary Examiner, Art Unit 3686
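The reply-period rules stated in the Conclusion reduce to date arithmetic. A minimal sketch; `add_months` and `reply_deadline` are hypothetical helper names (the stdlib `datetime` module has no month arithmetic), and the mailing date used in the example is the final-action date from this application's timeline:

```python
from datetime import date

def add_months(d: date, n: int) -> date:
    # Move forward n calendar months, clamping the day to the month's length.
    y, m = divmod(d.month - 1 + n, 12)
    y += d.year
    m += 1
    feb = 29 if y % 4 == 0 and (y % 100 != 0 or y % 400 == 0) else 28
    day = min(d.day, [31, feb, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1])
    return date(y, m, day)

def reply_deadline(mailed: date, reply_filed: date = None,
                   advisory_mailed: date = None) -> date:
    """Shortened statutory period (SSP) after a final action: three months
    from mailing; under the two-month rule, if a first reply is filed within
    two months and the advisory action issues after the SSP, the SSP runs to
    the advisory mailing date; never beyond the six-month statutory maximum."""
    ssp = add_months(mailed, 3)
    if (reply_filed is not None and advisory_mailed is not None
            and reply_filed <= add_months(mailed, 2) and advisory_mailed > ssp):
        ssp = advisory_mailed                 # SSP extends to advisory mailing
    return min(ssp, add_months(mailed, 6))    # six-month statutory cap

# Final rejection mailed Feb 25, 2026:
print(reply_deadline(date(2026, 2, 25)))  # 2026-05-25
```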

Prosecution Timeline

Dec 30, 2022: Application Filed
Sep 17, 2024: Non-Final Rejection — §103
Mar 18, 2025: Response Filed
Jun 26, 2025: Non-Final Rejection — §103
Dec 01, 2025: Response Filed
Feb 25, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592322 — MULTI-MODAL DIGITAL COMMUNICATION ARCHITECTURE FOR PATIENT ENGAGEMENT — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592323 — TARGETED GENERATION OF MESSAGES FOR DIGITAL THERAPEUTICS USING GENERATIVE TRANSFORMER MODELS — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12580067 — SYSTEM AND METHOD FOR DISPENSING A CUSTOMIZED NUTRACEUTICAL PRODUCT — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573478 — SYSTEM AND METHOD FOR COMMUNICATING MEDICAL DATA — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12541784 — ARTIFICIAL INTELLIGENCE BASED SYSTEM AND METHODS FOR PREDICTING SKIN ANALYTICS OF INDIVIDUALS — Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 55%
With Interview: 84% (+29.3%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 1025 resolved cases by this examiner. Grant probability derived from career allow rate.
