Prosecution Insights
Last updated: April 19, 2026
Application No. 18/476,715

ELECTRONIC DEVICE FOR PROVIDING REAL-TIME EMOTIONAL FEEDBACK TO USER'S WRITING, METHOD THEREOF AND CLASSIFICATION SEVER FOR ANALYZING REAL-TIME EMOTIONAL FEEDBACK

Status: Final Rejection (§103)
Filed: Sep 28, 2023
Examiner: WEAVER, ADAM MICHAEL
Art Unit: 2658
Tech Center: 2600 (Communications)
Assignee: Ajou University Industry-Academic Cooperation Foundation
OA Round: 2 (Final)
Grant Probability: 92% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (11 granted / 12 resolved), +29.7% vs TC avg (above average)
Interview Lift: +20.0% in resolved cases with interview (strong)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 39 across all art units, 27 currently pending
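The panel's headline figures are simple ratios over the examiner's 12 resolved cases. A minimal sketch of the arithmetic, assuming the displayed 99% is the interview-lifted rate capped at 99% (the cap and all variable names are our assumption; the page does not state its formula):

```python
# Sketch of the panel's arithmetic over the examiner's resolved cases.
# The 99% cap on the with-interview figure is an assumption chosen to
# match the displayed value, not a documented formula.

granted = 11
resolved = 12

# Career allow rate, shown as a whole percentage: 11/12 -> 92%.
allow_rate = round(granted / resolved * 100)

# Interview lift as displayed (+20.0 percentage points).
interview_lift = 20.0

# With-interview probability, capped at 99% as shown on the page.
with_interview = min(allow_rate + interview_lift, 99.0)

print(f"{allow_rate}% career allow rate, {with_interview}% with interview")
```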

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement(s) (IDS) submitted on 08/13/2025 is/are being considered by the examiner.

Response to Amendment

The Amendment filed 10/31/2025 has been entered. Claims 1-2, 4-6, and 8-11 remain pending in the application; claims 3 and 7 have been cancelled.

Response to Arguments

Applicant’s arguments, see page 7, with respect to the 35 U.S.C. 101 abstract idea rejection for claims 1-11, have been fully considered and are persuasive. Therefore, this rejection is withdrawn.

Applicant’s arguments, see pages 8-9, with respect to the 35 U.S.C. 103 rejection of claims 1-3, 5-7, and 9 under Hwan et al. (KR20210123150A), hereinafter Hwan, in view of Dasgupta et al. (WO2021215804A1), hereinafter Dasgupta, and of claims 4, 8, and 10-11 under Hwan, in view of Dasgupta, and further in view of Kyu-tae et al. (KR102199423B1), hereinafter referred to as Kyu-tae, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Movshovitz-Attias et al. (US Patent Application Publication No. 2022/0292261), hereinafter referred to as Movshovitz-Attias, in view of Jung et al. (US Patent Application Publication No. 2014/0019885), hereinafter referred to as Jung.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-2, 5-6, and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Movshovitz-Attias, in view of Jung. Regarding claim 1, Movshovitz-Attias discloses an electronic device for providing real-time emotional feedback to a user’s writing, the electronic device comprising: a display (Movshovitz-Attias Fig. 9B shows multiple devices: reference characters 912, 914, 916, 918, 920, and 922) configured to display a first interface for displaying text information ("Client devices may include one or more of a desktop computer 912, a laptop or tablet PC 914, in-home devices that may be fixed (such as a temperature/thermostat unit 916) or moveable units (such as smart display 918). Other client devices may include a personal communication device such as a mobile phone or PDA 920, or a wearable device 922 such as a smart watch, head-mounted display, clothing wearable, etc," para [0100] Movshovitz-Attias) and a second interface for displaying emotion information ("Client devices may include one or more of a desktop computer 912, a laptop or tablet PC 914, in-home devices that may be fixed (such as a temperature/thermostat unit 916) or moveable units (such as smart display 918). 
Other client devices may include a personal communication device such as a mobile phone or PDA 920, or a wearable device 922 such as a smart watch, head-mounted display, clothing wearable, etc," para [0100] Movshovitz-Attias); and a processor configured to: transmit a first text to a classification server in real-time upon receiving a user input for entering the first text on the first interface ("In one example, computing device 902 may include one or more server computing devices having a plurality of computing devices, e.g., a load balanced server farm or cloud computing system, that exchange information with different nodes of a network for the purpose of receiving, processing and transmitting the data to and from other computing devices,... Information from emotion-related classifications performed by the server may be shared with one or more of the client computing devices, such as by suggesting an emoji, sticker or GIF to a user," Movshovitz-Attias para [0106] AND "Given a conversation, training examples may be constructed by searching for messages that were replied to by an emoji/sticker or other graphical element (either as the only reply, or as the beginning of the reply). Training examples include (i) an input message (the conversation context that preceded the emoji/sticker or other graphical element) and (ii) a target emotion (where the emoji/sticker or other graphical element acts as a representative of the expressed emotion", Movshovitz-Attias para [0029]), receive [[ a ]] first emotion information from the classification server, the first emotion information including a plurality of emotions of a virtual audience corresponding to the first text and respective generation probabilities of the plurality of emotions (Movshovitz-Attias Fig. 
2E shows emotional response information with their respective probabilities AND “The output is a set of prediction scores corresponding to each of the emotions of interest (e.g., joy, amusement, gratitude, surprise, disapproval, sadness, anger, and confusion)”, Movshovitz-Attias para [0068]), and control the display to display the plurality of emotions in an order of descending generation probabilities, wherein some of the emotions with higher generation probabilities are displayed as emoticons and probabilities, and the remaining emotions are displayed as text and probabilities (Movshovitz-Attias Fig. 2E shows emotional response information displayed with both emoticons and their respective probabilities in a descending order of probability AND “The output is a set of prediction scores corresponding to each of the emotions of interest (e.g., joy, amusement, gratitude, surprise, disapproval, sadness, anger, and confusion)”, Movshovitz-Attias para [0068], it would have been obvious to portray some of the emotions with only text and their respective probability, as this would save display space and allow for easier understanding by the user), wherein the classification server infers the first emotion information corresponding to the first text by providing the first text to a model, the model being a trained model for identifying emotions of the virtual audience in response to an input text (“According to aspects of the technology, a machine learning approach is employed in order to generate effective predictions for the emotion of a user based on messages they send in a conversation (direct emotion prediction), for predicting the emotional response of a user (induced emotion prediction), and predicting appropriate graphical indicia (e.g., emoji, stickers or GIFs).
Fully supervised and few-shot models are trained on data sets that may be particularly applicable to specific communication approaches (e.g., chats, texts, online commenting platforms, videoconferences, support apps, etc.)”, Movshovitz-Attias para [0097]). However, Movshovitz-Attias fails to disclose wherein the processor transmits each word of the first text in response to a trigger signal, the trigger signal being generated in response to an input of the space bar. Jung discloses a mobile terminal for inputting and editing text and chat content. Jung teaches wherein the processor transmits each word of the first text in response to a trigger signal, the trigger signal being generated in response to an input of the space bar (“If the chat content to be transmitted to the counterpart is input via the keypad or before the chat content is input, the controller 180 detects whether a specific key among the keys provided to the keypad is touched in a preset manner (S130). In this instance, the specific key may include one of a specific character key, the space bar, the enter key and the send key among the keys provided to the keypad,” Jung para [0095]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Movshovitz-Attias’ method of emotion classification in text by including Jung’s teaching of using a special key to cause transmittal of a text. This inclusion would facilitate the automation of the emotion classification, working simultaneously with the input of the text. It would eliminate the need for manual transmittal once the entire piece of text is written, and it allows for more real-time classification of the emotion that would be induced by said text. Regarding claim 2, Movshovitz-Attias, in view of Jung, discloses all the limitations of claim 1. 
However, Movshovitz-Attias does not disclose wherein the processor is further configured to transmit changed text to the classification server whenever receiving the user input for modifying the first text. Jung teaches this limitation (“In addition, the preset touch manner can correspond to a touch action to which a command for activating an editing function of editing a chat content already input to the chat content input window or a chat content to be input to the chat content input window is assigned as well as a unique function previously assigned to the specific key according to an embodiment of the present invention,” Jung para [0096] AND “If the specific key is touched in the preset manner, the controller 180 activates the editing function while maintaining the touch to the specific key and then provides the activated editing function to the user. If the touch is released from the specific key, the controller 180 transmits the chat content edited while maintaining the touch to the specific key to the counterpart via the wireless communication unit 110,” Jung para [0100]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Movshovitz-Attias’ method of emotion classification in text by including Jung’s teaching of using a special key to cause transmittal of a text. This inclusion would further facilitate the automation of the emotion classification, working simultaneously with the editing and input of the text. It would eliminate the need for manual transmittal once the entire piece of text is edited or written, and it allows for more real-time classification of the emotion that would be induced by said text.

As to claim 5, method claim 5 and system claim 1 are related as system and method of using same, with each claimed element’s function corresponding to the system step. Accordingly, claim 5 is similarly rejected under the same rationale as applied above to claim 1.

As to claim 6, method claim 6 and system claim 2 are related as system and method of using same, with each claimed element’s function corresponding to the system step. Accordingly, claim 6 is similarly rejected under the same rationale as applied above to claim 2.

Regarding claim 9, Movshovitz-Attias discloses a system for providing real-time emotional feedback, the system comprising (“For induced emotion, the system focuses on predicting the emotional response of users, based on a conversation they participate in,” Movshovitz-Attias para [0029]): an electronic device comprising a first processor and a display, wherein the display is configured to display a first interface for displaying text information and a second interface for displaying emotion information (“The computing devices may include all of the components normally used in connection with a computing device such as the processor and memory described above as well as a user interface subsystem for receiving input from a user and presenting information to the user (e.g., text and/or graphical indicia).
The user interface subsystem may include one or more user inputs (e.g., a mouse, keyboard, touch screen and/or microphone) and one or more display devices (e.g., a monitor having a screen or any other electrical device that is operable to display information (e.g., text and graphical indicia),” Movshovitz-Attias para [0104]); and a classification server comprising a second processor and a model, wherein the model is trained to identify emotions of a virtual audience in response to an input text (“According to aspects of the technology, a machine learning approach is employed in order to generate effective predictions for the emotion of a user based on messages they send in a conversation (direct emotion prediction), for predicting the emotional response of a user (induced emotion prediction), and predicting appropriate graphical indicia (e.g., emoji, stickers or GIFs). Fully supervised and few-shot models are trained on data sets that may be particularly applicable to specific communication approaches (e.g., chats, texts, online commenting platforms, videoconferences, support apps, etc.)”, Movshovitz-Attias para [0097]); wherein the first processor is configured to: transmit a first text to the classification server in real-time upon receiving a user input for entering the first text on the first interface ("In one example, computing device 902 may include one or more server computing devices having a plurality of computing devices, e.g., a load balanced server farm or cloud computing system, that exchange information with different nodes of a network for the purpose of receiving, processing and transmitting the data to and from other computing devices,... 
Information from emotion-related classifications performed by the server may be shared with one or more of the client computing devices, such as by suggesting an emoji, sticker or GIF to a user," Movshovitz-Attias para [0106] AND "Given a conversation, training examples may be constructed by searching for messages that were replied to by an emoji/sticker or other graphical element (either as the only reply, or as the beginning of the reply). Training examples include (i) an input message (the conversation context that preceded the emoji/sticker or other graphical element) and (ii) a target emotion (where the emoji/sticker or other graphical element acts as a representative of the expressed emotion", Movshovitz-Attias para [0029]), receive first emotion information from the classification server, the first emotion information including a plurality of emotions of a virtual audience corresponding to the first text and respective generation probabilities of the plurality of emotions (Movshovitz-Attias Fig. 2E shows emotional response information with their respective probabilities AND “The output is a set of prediction scores corresponding to each of the emotions of interest (e.g., joy, amusement, gratitude, surprise, disapproval, sadness, anger, and confusion)”, Movshovitz-Attias para [0068]), and control the display to display the plurality of emotions in an order of descending generation probabilities, wherein some of the emotions with higher generation probabilities are displayed as emoticons and probabilities, and the remaining emotions are displayed as text and probabilities (Movshovitz-Attias Fig. 
2E shows emotional response information displayed with both emoticons and their respective probabilities in a descending order of probability AND “The output is a set of prediction scores corresponding to each of the emotions of interest (e.g., joy, amusement, gratitude, surprise, disapproval, sadness, anger, and confusion)”, Movshovitz-Attias para [0068], it would have been obvious to portray some of the emotions with only text and their respective probability, as this would save display space and allow for easier understanding by the user), wherein the second processor is configured to: receive [[ a ]] the first text ("In one example, computing device 902 may include one or more server computing devices having a plurality of computing devices, e.g., a load balanced server farm or cloud computing system, that exchange information with different nodes of a network for the purpose of receiving, processing and transmitting the data to and from other computing devices,... Information from emotion-related classifications performed by the server may be shared with one or more of the client computing devices, such as by suggesting an emoji, sticker or GIF to a user," Movshovitz-Attias para [0106]), provide the first text to the model for inferring the first emotion information, and ("In one example, computing device 902 may include one or more server computing devices having a plurality of computing devices, e.g., a load balanced server farm or cloud computing system, that exchange information with different nodes of a network for the purpose of receiving, processing and transmitting the data to and from other computing devices,... Information from emotion-related classifications performed by the server may be shared with one or more of the client computing devices, such as by suggesting an emoji, sticker or GIF to a user," Movshovitz-Attias para [0106] AND "Given a conversation, training examples may be constructed by searching for messages that were replied to by an emoji/sticker or other graphical element (either as the only reply, or as the beginning of the reply). Training examples include (i) an input message (the conversation context that preceded the emoji/sticker or other graphical element) and (ii) a target emotion (where the emoji/sticker or other graphical element acts as a representative of the expressed emotion", Movshovitz-Attias para [0029]), transmit the first emotion information ("Information from emotion-related classifications performed by the server may be shared with one or more of the client computing devices, such as by suggesting an emoji, sticker or GIF to a user," Movshovitz-Attias para [0106]).

However, Movshovitz-Attias fails to disclose wherein the first processor transmits each word of the first text in response to a trigger signal, the trigger signal being generated in response to an input of the space bar. Jung teaches wherein the first processor transmits each word of the first text in response to a trigger signal, the trigger signal being generated in response to an input of the space bar (“If the chat content to be transmitted to the counterpart is input via the keypad or before the chat content is input, the controller 180 detects whether a specific key among the keys provided to the keypad is touched in a preset manner (S130). In this instance, the specific key may include one of a specific character key, the space bar, the enter key and the send key among the keys provided to the keypad,” Jung para [0095]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Movshovitz-Attias’ method of emotion classification in text by including Jung’s teaching of using a special key to cause transmittal of a text. This inclusion would facilitate the automation of the emotion classification, working simultaneously with the input of the text. It would eliminate the need for manual transmittal once the entire piece of text is written, and it allows for more real-time classification of the emotion that would be induced by said text.

Claim(s) 4, 8, and 10-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Movshovitz-Attias, in view of Jung, and further in view of Kyu-tae et al. (KR102199423B1), hereinafter referred to as Kyu-tae.

Regarding claim 4, Movshovitz-Attias, in view of Jung, discloses all the limitations of claim 1. Movshovitz-Attias fails to disclose wherein the model is a bi-directional Long Short-Term Memory (LSTM) model. Kyu-tae teaches a method for the machine learning of psychological counseling data. Kyu-tae teaches wherein the model is a bi-directional Long Short-Term Memory (LSTM) model ("The automatic dialogue device (700) can be used by adding a module called Long Short-Term Memory models (LSTM) or Gated Recurrent Unit (GRU) to Recurrent Neural Networks (RNN) for machine learning…, Additionally, one-way machine learning can be performed bidirectionally," Kyu-tae para [0101]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Movshovitz-Attias’ method of emotion classification in text by including Kyu-tae’s method of a bi-directional Long Short-Term Memory (LSTM) model. This inclusion would make it more efficient at capturing emotional context from the text. Bi-directional LSTMs allow for processing of sequential data in both directions, which allows for the capture of both past and future context. This would improve the accuracy of the emotions and sentiments captured from the analysis of the text. This combination would have been obvious to one of ordinary skill in the art.

As to claim 8, method claim 8 and system claim 4 are related as system and method of using same, with each claimed element’s function corresponding to the system step. Accordingly, claim 8 is similarly rejected under the same rationale as applied above to claim 4.

Regarding claim 10, Movshovitz-Attias, in view of Jung, discloses all the limitations of claim 9. Movshovitz-Attias fails to disclose wherein the model is a bi-directional Long Short-Term Memory (LSTM) model. Kyu-tae teaches wherein the model is a bi-directional Long Short-Term Memory (LSTM) model ("The automatic dialogue device (700) can be used by adding a module called Long Short-Term Memory models (LSTM) or Gated Recurrent Unit (GRU) to Recurrent Neural Networks (RNN) for machine learning…, Additionally, one-way machine learning can be performed bidirectionally," Kyu-tae para [0101]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Movshovitz-Attias’ method of emotion classification in text by including Kyu-tae’s method of a bi-directional Long Short-Term Memory (LSTM) model. This inclusion would make it more efficient at capturing emotional context from the text. Bi-directional LSTMs allow for processing of sequential data in both directions, which allows for the capture of both past and future context. This would improve the accuracy of the emotions and sentiments captured from the analysis of the text. This combination would have been obvious to one of ordinary skill in the art.

Regarding claim 11, Movshovitz-Attias, in view of Jung, and further in view of Kyu-tae, discloses all the limitations of claim 10. Movshovitz-Attias further discloses wherein the second processor is configured to: identify emotions of the virtual audience based on contributions of the words included in the text (“Here, the system finds examples which are based on emotion-bearing phrases. The goal is to identify emotion-bearing text based on the phrases found in phase 1, and learn to classify these messages,” Movshovitz-Attias para [0051]). Movshovitz-Attias fails to disclose identify a relationship between words included in the text based on bi-directional LSTMs included in the LSTM model analyzing texts in different directions. Kyu-tae teaches identify a relationship between words included in the text based on bi-directional LSTMs included in the LSTM model analyzing texts in different directions ("The automatic dialogue device (700) can be used by adding a module called Long Short-Term Memory models (LSTM) or Gated Recurrent Unit (GRU) to Recurrent Neural Networks (RNN) for machine learning…, Additionally, one-way machine learning can be performed bidirectionally," Kyu-tae para [0101]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Movshovitz-Attias’ method of emotion classification in text by including Kyu-tae’s method of a bi-directional Long Short-Term Memory (LSTM) model. This inclusion would make it more efficient at capturing emotional context from the text. Bi-directional LSTMs allow for processing of sequential data in both directions, which allows for the capture of both past and future context. This would improve the accuracy of the emotions and sentiments captured from the analysis of the text. This combination would have been obvious to one of ordinary skill in the art.
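The bi-directional processing the rejections attribute to Kyu-tae (each word's representation seeing both past and future context) can be illustrated with a toy recurrence. This is a simplified tanh recurrence standing in for a full LSTM cell, with illustrative weights and sizes; it sketches the bidirectional idea only, not either reference's actual model:

```python
import numpy as np

def recurrent_pass(xs, W, U, b):
    """Toy recurrent pass (a simplified stand-in for an LSTM cell):
    h_t = tanh(W @ x_t + U @ h_{t-1} + b)."""
    h = np.zeros(U.shape[0])
    out = []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
        out.append(h)
    return out

def bidirectional_pass(xs, W_f, U_f, b_f, W_b, U_b, b_b):
    """Run the sequence forward and backward, then concatenate the
    per-step hidden states so each position carries both past and
    future context (the property cited for bi-directional LSTMs)."""
    fwd = recurrent_pass(xs, W_f, U_f, b_f)
    bwd = recurrent_pass(xs[::-1], W_b, U_b, b_b)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
emb, hid, seq = 8, 4, 5                      # toy sizes, not from the references
xs = [rng.normal(size=emb) for _ in range(seq)]
params = lambda: (rng.normal(size=(hid, emb)),
                  rng.normal(size=(hid, hid)),
                  np.zeros(hid))
states = bidirectional_pass(xs, *params(), *params())
print(len(states), states[0].shape)          # 5 (8,)
```

Each output state is the forward and backward hidden vectors concatenated, which is how word relationships in both directions end up in a single representation.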
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM MICHAEL WEAVER whose telephone number is (571)272-7062. The examiner can normally be reached Monday-Friday, 8AM-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM MICHAEL WEAVER/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658
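The client behavior the rejection maps across Movshovitz-Attias and Jung (send each word when the space bar fires the trigger signal, then render the returned emotions in descending probability, the top entries as emoticons and the rest as text) can be sketched as follows. The `classify` stub, the emoticon map, and all names are hypothetical placeholders, not from the claims or the cited references:

```python
# Hypothetical sketch of the claimed client-side flow. classify() stands
# in for the classification server; a real server would run the trained
# model on the accumulated text.

EMOTICONS = {"joy": ":-)", "sadness": ":-(", "anger": ">:-(", "surprise": ":-O"}

def classify(word_buffer):
    """Stub for the classification server: returns (emotion, probability)
    pairs for the words transmitted so far (fixed values for illustration)."""
    return [("joy", 0.41), ("surprise", 0.27), ("sadness", 0.19), ("anger", 0.13)]

def render(emotions, top_k=2):
    """Sort by descending generation probability; show the top_k entries
    as emoticons with probabilities, the rest as text with probabilities."""
    ranked = sorted(emotions, key=lambda e: e[1], reverse=True)
    lines = []
    for i, (name, p) in enumerate(ranked):
        label = EMOTICONS.get(name, name) if i < top_k else name
        lines.append(f"{label} {p:.0%}")
    return lines

def on_keystrokes(keys):
    """Transmit each completed word when the space bar acts as the
    trigger signal, refreshing the emotion display after each word."""
    buffer, word, renders = [], [], []
    for key in keys:
        if key == " ":                      # trigger signal on space bar
            buffer.append("".join(word))
            word = []
            renders.append(render(classify(buffer)))
        else:
            word.append(key)
    return renders

out = on_keystrokes("hello world ")
print(out[-1])  # [':-) 41%', ':-O 27%', 'sadness 19%', 'anger 13%']
```

Each space-bar press produces a fresh render, which is the "real-time" property the combination rationale relies on.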

Prosecution Timeline

Sep 28, 2023
Application Filed
Jul 23, 2025
Non-Final Rejection — §103
Oct 31, 2025
Response Filed
Feb 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591752: ZERO-SHOT DOMAIN TRANSFER WITH A TEXT-TO-TEXT MODEL
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12585765: SYSTEM AND METHOD FOR ROBUST NATURAL LANGUAGE CLASSIFICATION UNDER CHARACTER ENCODING
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12579375: IMPLEMENTING ACTIVE LEARNING IN NATURAL LANGUAGE GENERATION TASKS
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12562077: METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR
Granted Feb 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 92%
With Interview: 99% (+20.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
