Prosecution Insights
Last updated: April 19, 2026
Application No. 18/637,163

SYSTEMS AND METHODS FOR PROACTIVELY PROVIDING EMOTIONALLY INTELLIGENT INTERACTION GUIDANCE USING A MACHINE LEARNING FRAMEWORK

Non-Final OA: §101, §103
Filed: Apr 16, 2024
Examiner: BAHL, SANGEETA
Art Unit: 3626
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Wells Fargo Bank, N.A.
OA Round: 3 (Non-Final)
Grant Probability: 21% (At Risk)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 40%

Examiner Intelligence

Career Allow Rate: 21% (93 granted / 452 resolved; -31.4% vs TC avg)
Interview Lift: +19.3% for resolved cases with interview
Avg Prosecution: 4y 8m typical timeline; 40 applications currently pending
Total Applications: 492 across all art units

Statute-Specific Performance

§101
37.6%
-2.4% vs TC avg
§103
40.4%
+0.4% vs TC avg
§102
5.4%
-34.6% vs TC avg
§112
11.8%
-28.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 452 resolved cases
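As a sanity check, the headline rates above reduce to simple arithmetic. A minimal sketch (the with-interview rate is back-computed from the reported lift, not taken from the underlying case data):

```python
# Career allowance rate: granted / resolved
granted, resolved = 93, 452
allow_rate = granted / resolved  # ~0.206, displayed as 21%

# The reported interview lift is in percentage points over the career rate
interview_lift = 0.193
with_interview = allow_rate + interview_lift  # ~0.399, displayed as 40%

print(f"career allow rate: {allow_rate:.1%}")
print(f"with interview:    {with_interview:.1%}")
```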

Office Action

Rejections under §101 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This communication is a Non-Final Office Action in response to communications received on 2/18/26. Claims 1, 6-8, 11, 16-18, and 20 have been amended. Claims 1-20 are now pending and have been addressed below.

Response to Amendment

Applicant has amended Claims 6 and 16 to overcome the claim objection. Examiner withdraws the claim objection with respect to these and all depending claims unless otherwise indicated.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/18/26 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.

Step 1: Identifying Statutory Categories

In the instant case, claims 1-10 are directed to a method, claim 20 is directed to a non-transitory medium, and claims 11-19 are directed to a system. Thus, the claims fall within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A, Prong 1: Identifying a Judicial Exception

Under Step 2A, prong 1, Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites an abstract idea without significantly more. Independent claims 1, 11, and 20 recite methods for providing emotionally intelligent interaction guidance for completing a user action request, including: determining that a user has entered a physical environment; receiving, in response to determining that the user has entered the physical environment, media pertaining to the user; determining, based on the media pertaining to the user, the user action request; inferring an emotional classification for the user based on the received media, wherein the inferred emotional classification is associated with a probability that the user possesses an emotion corresponding to the inferred emotional classification; generating the emotionally intelligent interaction guidance for completing the user action request based on the inferred emotional classification and the user action request, wherein the emotionally intelligent interaction guidance indicates the inferred emotional classification and a recommended action for interacting with the user; and providing the emotionally intelligent interaction guidance for completing the user action request.

These limitations, as drafted, are a process that, under its broadest reasonable interpretation, covers methods of organizing human activity (including commercial interactions such as business relations, and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), including interaction between a person and a computer), but for the recitation of generic computer components.
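For orientation, the claimed flow recited above can be sketched at the same high level of generality the rejection attributes to it. This is an illustrative stand-in only; all names, values, and the keyword-based "model" are hypothetical and are not the applicant's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class EmotionalClassification:
    label: str          # e.g., "frustrated"
    probability: float  # probability the user possesses this emotion

def infer_emotion(media: str) -> EmotionalClassification:
    # Hypothetical stand-in for the claimed emotional intelligence ML model;
    # a real system would classify audio/video features, not keywords.
    if "refund" in media.lower():
        return EmotionalClassification("frustrated", 0.8)
    return EmotionalClassification("neutral", 0.6)

def generate_guidance(emotion: EmotionalClassification, action_request: str) -> dict:
    # Guidance indicates the inferred classification and a recommended action.
    action = ("acknowledge the concern, then process the request"
              if emotion.label == "frustrated" else "proceed normally")
    return {"emotion": emotion.label,
            "probability": emotion.probability,
            "recommended_action": f"{action}: {action_request}"}

# Claimed flow: user enters environment -> receive media -> infer -> guide
media = "Customer says: I want a refund for this item."
guidance = generate_guidance(infer_emotion(media), "refund")
print(guidance["emotion"], "->", guidance["recommended_action"])
```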
That is, other than reciting the structural elements (such as an event detection circuitry, communications hardware, an emotion analysis circuitry, an emotional intelligence machine learning model, a guidance circuitry, using a guidance machine learning model, and an entity device), the claims are directed to providing emotionally intelligent interaction guidance. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation of organizing human activity but for the recitation of generic computer components, the claim recites an abstract idea.

Step 2A, Prong 2

This judicial exception is not integrated into a practical application because the claim merely describes how to generally "apply" the concept of receiving data, analyzing it, and providing guidance. In particular, the claims only recite the additional elements of an event detection circuitry, communications hardware, an emotion analysis circuitry, an emotional intelligence machine learning model, a guidance circuitry, using a guidance machine learning model, and an entity device. The additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Further, the limitation of "using the emotionally intelligent/guidance machine learning model" is simply the application of a computer model, itself an abstract idea. Furthermore, such applying of a model is no more than putting data into a black-box machine learning operation, devoid of technological implementation and application details. Each step requires a generic computer to perform generic computer functions. Simply implementing the abstract idea on generic components is not a practical application of the abstract idea.

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. When considered in combination, the claims do not amount to improvements to the functioning of a computer or to any other technology or technical field, as discussed in MPEP 2106.05(a); applying the judicial exception with, or by use of, a particular machine, as discussed in MPEP 2106.05(b); effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP 2106.05(c); or applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP 2106.05(e). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

Step 2B: Considering Additional Elements

The claimed invention is directed to an abstract idea without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally "apply" the concept to provide emotionally intelligent guidance. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
The independent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are not patent eligible.

The dependent claims, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The dependent claims are not significantly more because they are part of the identified judicial exception. See MPEP 2106.05(g). The claims are not patent eligible.

With respect to the event detection circuitry, communications hardware, emotion analysis circuitry, emotional intelligence machine learning model, guidance circuitry, guidance machine learning model, and entity device, these limitations are described in Applicant's own specification as generic and conventional elements. See Applicant's specification, Fig. 2 #202-212 and [0041] ("event detection circuitry 208, emotion analysis circuitry 210, and guidance circuitry 212 may include one or more dedicated processors"); [0008], [0009] ("machine learning models"). These are basic computer elements applied merely to carry out data processing such as, as discussed above, receiving, analyzing, transmitting, and displaying data. Furthermore, the use of such generic computers to receive or transmit data over a network has been identified as a well-understood, routine, and conventional activity by the courts. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1362-63, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers, gathering statistics, and sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result - a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)). See also MPEP 2106.05(d), discussing elements that the courts have recognized as well-understood, routine, and conventional activities in particular fields.

Lastly, the additional elements provide only a result-oriented solution which lacks details as to how the computer performs the claimed abstract idea. Therefore, the additional elements amount to mere instructions to apply the exception. See MPEP 2106.05(f). Furthermore, these steps/components are not explicitly recited in detail and therefore must be construed at the highest level of generality, amounting to mere instructions to implement the abstract idea on a computer. Therefore, the claimed invention does not demonstrate a technologically rooted solution to a computer-centric problem, nor does it recite an improvement to another technology or technical field; an improvement to the functioning of any computer itself; applying the exception with, or by use of, a particular machine; effecting a transformation or reduction of a particular article to a different state or thing; adding a specific limitation other than what is well-understood, routine, and conventional in the field; adding unconventional steps that confine the claim to a particular useful application; or providing meaningful limitations beyond generally linking an abstract idea to a particular technological environment such as computing.
Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. Taking the additional claimed elements individually and in combination, the computer components at each step of the process perform purely generic computer functions. Viewed as a whole, the claims do not purport to improve the functioning of the computer itself, or to improve any other technology or technical field. Use of an unspecified, generic computer does not transform an abstract idea into a patent-eligible invention. Thus, the claims do not amount to significantly more than the abstract idea itself.

Dependent claims 2-10 and 12-19 add additional limitations, but these only serve to further limit the abstract idea, and hence are nonetheless directed towards fundamentally the same abstract idea as independent claims 1 and 11. Claims 2-4 and 12-14 recite: extracting one or more user characteristics from the received media; determining, using the emotional intelligence machine learning model, a probability for a candidate emotional classification based on the one or more user characteristics, wherein the inferred emotional classification is also determined based on a corresponding probability for the candidate emotional classification; determining, using the emotional intelligence machine learning model, a probability for a candidate core emotion based on the one or more user characteristics, wherein determining the probability for the one or more candidate emotional classifications is based on the probability determined for the one or more candidate core emotions; and wherein the one or more user characteristics comprise one or more of a user facial expression, user body language, a user gesture, a user voice tone, a user voice volume, a user speech speed, user speech patterns, user eye contact behavior, user speech text, or user physiological responses. These limitations further limit the abstract idea.
The limitation of using a preprocessing model merely adds the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). The claims do not provide any new additional elements beyond the abstract idea. Therefore, whether analyzed individually or as an ordered combination, they fail to integrate the abstract idea into a practical application or provide significantly more than the abstract idea.

Claims 5-10 and 15-19 recite: determining, using the guidance machine learning model, one or more candidate actions; determining, using the guidance machine learning model, an inferred emotional responsiveness classification for each of the one or more candidate actions; selecting, using the guidance machine learning model, one or more of the one or more candidate actions based on a comparison between the inferred emotional classification and the inferred emotional responsiveness classification for each of the one or more candidate actions, wherein the emotionally intelligent interaction guidance comprises the selected one or more candidate actions; determining, using the guidance machine learning model, an escalation event based on the inferred emotional classification for the user, wherein the emotionally intelligent interaction guidance is further indicative of the escalation event; generating an escalation alert indicative of the escalation event; providing the escalation alert to a second entity device different than the entity device; providing verbal cues; receiving updated media pertaining to the user, wherein the emotion analysis circuitry is further configured to determine, using an emotional intelligence machine learning model, an updated inferred emotional classification for the user based on the received updated media; generating, using the guidance machine learning model, updated emotionally intelligent interaction guidance based on the updated inferred emotional classification, wherein the communications hardware is further configured to provide the updated emotionally intelligent interaction guidance to the entity device; causing one or more changes within the environment based on the inferred emotional classification; determining a user identity of the user based on the received media; and identifying a user account for the user based on the user identity, wherein (a) the user account includes one or more of user preferences, user life events, or historical user interaction events and (b) the recommended action is generated based on the user account. These limitations further limit the abstract idea.

The claims do not provide any new additional elements beyond the abstract idea. Therefore, whether analyzed individually or as an ordered combination, they fail to integrate the abstract idea into a practical application or provide significantly more than the abstract idea. Therefore, the dependent claims do not integrate the abstract idea into a practical application. As such, the additional elements, individually or in combination, do not integrate the exception into a practical application; rather, the recitation of any additional element amounts to merely reciting the words "apply it" (or equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). The dependent claims also do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are merely used to apply the abstract idea to a technological environment.
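The candidate-action selection recited in claims 5-10 and 15-19 can be sketched illustratively: score each candidate action's emotional-responsiveness classification against the user's inferred emotional classification and keep the matches. All data and the matching rule here are hypothetical, not drawn from the application:

```python
# Hypothetical candidate actions, each tagged with the emotional
# classifications it is inferred to be responsive to.
candidate_actions = {
    "offer discount":         {"responsive_to": ["frustrated", "angry"]},
    "transfer to specialist": {"responsive_to": ["angry"]},
    "proceed with request":   {"responsive_to": ["neutral", "happy"]},
}

def select_actions(inferred_emotion: str) -> list[str]:
    # Comparison step: an action is selected when its responsiveness
    # classification covers the user's inferred emotional classification.
    return [name for name, meta in candidate_actions.items()
            if inferred_emotion in meta["responsive_to"]]

print(select_actions("frustrated"))  # ['offer discount']
```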
These limitations do not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. See MPEP 2106.05(d). Thus, the claims do not add significantly more to an abstract idea. The claims are ineligible. Therefore, since there are no limitations in the claims that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Balasubramaniam (US 10,896,428 B1) in view of Sanjay et al. (US 10,558,862 B2).

Regarding Claims 1, 11, and 20, Balasubramaniam discloses the method for providing emotionally intelligent interaction guidance for completing a user action request (Abstract, lines 1-2: analyzing aspects of speech from a customer contact and generating dynamic output based on the analysis), the method comprising:

Balasubramaniam discloses detecting, by event detection circuitry, a user interaction event for a user within an environment (Col. 9, lines 36-40: The process 300 may begin in response to an event, such as when the dynamic contact management system 100 begins operation, or receives initiation of a customer contact.);

Balasubramaniam discloses receiving, by communications hardware, media pertaining to the user (Col. 9, lines 53-56: At block 304, the contact manager 110 or some other module or component of the dynamic contact management system 100 can receive audio data associated with a current customer contact. The audio data may represent a voice of a customer and/or a voice of an agent.);

determining, based on the media pertaining to the user, the user action request (Col. 2, lines 20-28: The contact may be initiated by, received from, or otherwise include communication with a customer or other entity. The contact may take any of a variety of forms. For example, a contact may include a conversation (e.g., a telephone call, video call, online voice chat, etc.) initiated by a customer regarding an issue with a product.
Customer contacts may be initiated for any of a variety of reasons, such as contacts for refunds, replacements, exchanges, service requests, and the like. Col. 7, lines 1-17: the contact may be initiated as a voice call from the customer device 102 to the dynamic contact management system 100, and the dynamic contact management system 100 may obtain or generate metadata indicating the contact is in relation to a particular item, such as an item the customer purchased and for which the customer would like a refund or exchange. (user action request));

Balasubramaniam discloses inferring, by an emotion analysis circuitry and using an emotional intelligence machine learning model, an emotional classification for the user based on the received media (Col. 11, lines 57-67: The sentiment analysis techniques described above are illustrative only, and are not intended to be limiting. In some embodiments, other techniques (e.g., other specific sentiment analysis algorithms, general machine learning approaches) may be used. The sentiment feature data generated by the sentiment analyzer 114 may represent a classification of a sentiment that most likely corresponds to the text data. For example, the sentiment feature data may include a particular value, such as an index that corresponds to a particular sentiment within a range of sentiments. Col. 5, lines 32-40: The dynamic contact management system 100 may also include a sentiment analyzer 114 for determining a sentiment of an utterance or set of utterances. The dynamic contact management system 100 may also include a speech analyzer 116 for determining an emotion of speech. The dynamic contact management system 100 may also include a score generator 118 for generating customer state scores and/or agent state scores using information provided by other subsystems of the dynamic contact management system 100. Col. 13, lines 48-54: The model may differentiate between a set of emotions, such as anger, boredom, disgust, anxiety, happiness, sadness, and neutral. In some embodiments, the model may differentiate between "opposing" emotional states from a set of "opposing" emotion pairs, such as despair/elation, happiness/sadness, interest/boredom, shame/pride, hot-anger/elation, and cold-anger/sadness.), wherein the inferred emotional classification is associated with a probability that the user possesses an emotion corresponding to the inferred emotional classification (Col. 11, lines 62-67: The sentiment feature data generated by the sentiment analyzer 114 may represent a classification of a sentiment that most likely corresponds to the text data. For example, the sentiment feature data may include a particular value, such as an index that corresponds to a particular sentiment within a range of sentiments. Fig. 3 #320: generate state score data; Col. 14, lines 54-58: At block 320, the score generator 118 or some other module or component of the dynamic contact management system 100 can generate state score data, such as data representing a state score or a vector of state scores, using the input vector generated above.);

Balasubramaniam discloses generating, by a guidance circuitry and using a guidance machine learning model, the emotionally intelligent interaction guidance for completing the user action request based on the inferred emotional classification and the user action request (Abstract, lines 4-5: the system can generate scores for use in dynamically determining which actions to take, updating displays, analyzing contact outcomes over time, etc. Col. 2, lines 2-15: For example, a score may indicate a degree to which a customer's vocal pitch and word choice represent satisfaction with service provided by an agent. As another example, a score may indicate a degree to which an agent's vocal pitch and word choice represent a degree to which the agent is experiencing stress in providing service. Based on one or more of the scores, the system can present dynamic, real-time information to an agent regarding the state of the customer, recommend statements or other interactions with the customer, and the like. Col. 8, lines 15-28, 40-44: the contact manager 110 may analyze customer state scores using a model that correlates customer state scores to recommended interactions. The model may use, as input, one or more customer state scores, agent state scores, text data representing text of customer utterances, historical customer account data regarding prior customer contacts associated with this customer, other information, some combination thereof, etc. When a recommended interaction is identified, the contact manager 110 may generate graphical user interface data that causes a representation of the recommended interaction to be displayed on the agent device 104. The recommended interaction section 208 may present a workflow or action that is recommended by the contact manager 110, such as an offer of a discount or other concession, or a transfer of the contact to another agent. Fig. 4 #420: interaction recommendation data), wherein the emotionally intelligent interaction guidance indicates the inferred emotional classification and a recommended action for interacting with the user (Fig. 2A #208: interaction recommendation; Col. 3, lines 32-50: customer state scores may be generated continuously or periodically during the course of a customer contact. A visual display may be generated that summarizes the various scores such that changes and trends in the scores can be identified. The customer state scores may also or alternatively be used to recommend what an agent should say to a customer. For example, if a customer state score indicates a low level of satisfaction with the customer contact, then the customer service system can recommend certain questions or statements that may uncover the cause of the low level of satisfaction or aid in raising the level of satisfaction. Col. 8, lines 31-44: FIG. 2A, the interface 200 may include a recommended interaction section 208 that presents information representing interactions identified by the contact manager 110. For example, the recommended interaction section 208 may present a question or comment that is recommended by the contact manager 110, such as a question intended to elicit from the customer information that may help the agent in improving the experience for the customer (e.g., as represented by a subsequent customer state score). As another example, the recommended interaction section 208 may present a workflow or action that is recommended by the contact manager 110, such as an offer of a discount or other concession, or a transfer of the contact to another agent.)
; and Balasubramaniam discloses providing, by the communications hardware, the emotionally intelligent interaction guidance for completing the user action request to an entity device (Col. 8, lines 11-14, 24-26, 31-44: the contact manager 110 may use the customer state score to determine a question, comment, or other interaction that may be used by the agent to advance the customer contact toward a satisfactory resolution. For example, the contact manager 110 may analyze customer state scores using a model that correlates customer state scores to recommended interactions. When a recommended interaction is identified, the contact manager 110 may generate graphical user interface data that causes a representation of the recommended interaction to be displayed on the agent device 104. FIG. 2A, the interface 200 may include a recommended interaction section 208 that presents information representing interactions identified by the contact manager 110. For example, the recommended interaction section 208 may present a question or comment that is recommended by the contact manager 110, such as a question intended to elicit from the customer information that may help the agent in improving the experience for the customer (e.g., as represented by a subsequent customer state score). As another example, the recommended interaction section 208 may present a workflow or action that is recommended by the contact manager 110, such as an offer of a discount or other concession, or a transfer of the contact to another agent. Fig. 2A #208: interaction recommendation; Fig. 4 #420: interaction recommendation data based on customer score data; Col. 16, lines 18-30: If the customer state score 402 is indicative of a particular emotional state, such as anger or despair, then the interaction manager 406 may determine that a particular interaction is to be recommended to the agent handling the customer contact, such as a statement or question designed to elicit a response from the customer that improves the customer's emotional state or otherwise improves the likelihood of a positive resolution. The recommended interaction data 420 may represent this interaction (e.g., the recommended interaction data 420 may represent a string of words to be spoken by the agent)).

Balasubramaniam does not specifically teach circuitry configured to determine that a user has entered a physical environment, or receiving, in response to determining that the user has entered the physical environment, media pertaining to the user.

Sanjay teaches circuitry configured to determine that a user has entered a physical environment (Col. 10, lines 16-20, 28-36: For instance, within a retail store, the locations and movement of particular consumers within the retail store environment may be detected and tracked, using sensor data 235b as inputs, such as video sensor data (and other image data), infrared (IR) sensor data, voice sensor data. These recognition techniques may be combined with localization processing to determine when a particular person enters or exits a particular area, or location, within the store, as well as how and where the particular person moves while in the store, among other example results.
Accordingly, an example localization engine 250 may generate data to detect a particular person and that person's location(s) detected within a physical environment.); receiving, in response to determining that the user has entered the physical environment, media pertaining to the user (Col 10 lines 21-28 The localization engine 250 may determine the unique identities of the people within the store (while applying security measures to anonymize the data 235b) using facial recognition, voice recognition, and other recognition techniques (e.g., based on the clothing of the person, the gait or other gestures used by the person, and other identifying characteristics captured in the sensor data 235b, Col 14 lines 50-60 raw sensor data 235 may be provided such as input image frames, speech signals, reflected RF signals, heart beat signals, respiration signals, etc., which may be preprocessed to denoise and otherwise transform the received data. The pre-process input data stream is then used for person detection. Subsequently, the gesture, human emotion related body variation (e.g. heartbeat, respiration), and speech are extracted from the detected person. For instance, sensor data may be processed for gesture detection 325, posture detection 330, speech detection 335, and other modules to isolate various features observed in the sensor data. The significant emotion features are selected (e.g., using feature selection module 210) via feature extraction/machine learning approach); inferring, by an emotion analysis circuitry and using an emotional intelligence machine learning model, an emotional classification for the user based on the received media (Col 14 lines 58-62The significant emotion features are selected (e.g., using feature selection module 210) via feature extraction/machine learning approach and may be sent to additional processes or systems for further emotion classification and heat mapping. 
Col 15 lines 21-29 feature selection 210 may produce feature vectors which may be provided to emotion detection or classification logic (e.g., 255) for determining one or more emotions exhibited by the feature vector(s) or other feature data generated from sensor data 235. Feature data may be provided for processing by the emotion detection logic 255 in connection with a pre-trained model (e.g., 265) used during emotion detection to classify the emotion variation of the subject persons.); providing, by the communications hardware, the emotionally intelligent interaction guidance (Col 16 lines 13-20 generate human-readable notifications, or trigger other actions, which may allow a manager of the physical environment to initiate or respond (in an automated or manual manner) to address a situation evidenced by the emotion heat map (e.g., interact with the customers, adjust ambient characteristics of designated regions of the environment, etc.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included circuitry configured to determine that a user has entered a physical environment; receiving, in response to determining that the user has entered the physical environment, media pertaining to the user, as disclosed by Sanjay in the system disclosed by Balasubramaniam, for the motivation of providing a method of aggregating customer emotion information embodied in emotion heat map data that can be compared and transformed into business insights analytics, which may be used to ease subsequent business decisions. (Col 15 lines 50-54 Sanjay) Claim 11. Balasubramaniam discloses the apparatus for providing emotionally intelligent interaction guidance (Col 19 lines 3-10 computing device 600 configured to implement some or all of the functionality of the dynamic contact management system 100. 
In some embodiments, as shown, the computing device 600 may include: one or more computer processors 602, such as physical central processing units (“CPUs”); one or more network interfaces 604, such as network interface cards (“NICs”); one or more computer readable medium drives 606), the apparatus comprising: Claim 20. Balasubramaniam discloses the computer program product for providing emotionally intelligent interaction guidance, the computer program product comprising at least one non-transitory computer-readable storage medium (Col 19 lines 18-25 computer readable memory) storing software instructions that, when executed, cause an apparatus to: Regarding Claims 2 and 12, Balasubramaniam as modified by Sanjay teaches the method of claim 1 and system of claim 11, further comprising: Balasubramaniam teaches extracting, by the emotion analysis circuitry and using a preprocessing model, one or more user characteristics from the received media (Col 9 lines 55-67 receive audio data associated with a current customer contact. The audio data may represent a voice of a customer and/or a voice of an agent. The audio data may be received as a continuous stream (e.g., in substantially real-time during a customer contact), or as a discrete set of audio data. The contact manager 110 can receive the phone call or be notified of receipt of the phone call. The contact manager 110 may extract metadata from or associated with the telephone call, prompt the customer for information regarding the contact, or perform some other operation to determine information regarding the customer and subject of the contact. Col 5 lines 32-40 The dynamic contact management system 100 may also include a sentiment analyzer 114 for determining a sentiment of an utterance or set of utterances. 
The dynamic contact management system 100 may also include a speech analyzer 116 for determining an emotion of speech, Col 13 lines 55-60 The speech feature data generated by the speech analyzer 116 may represent a classification of an emotion that most likely corresponds to the voice represented by the audio data. For example, the speech feature data may include a particular value, such as an index that corresponds to a particular emotion within a range of emotions.); and Balasubramaniam teaches determining, by the emotion analysis circuitry and using the emotional intelligence machine learning model, a probability for a candidate emotional classification based on the one or more user characteristics (Col 13 lines 61-67, Col 14 lines 1-5 the speech feature data may include multiple individual values, such as a separate value for each of multiple emotions. For example, the speech analyzer 116 may be configured to detect n different emotions, where n is an integer that corresponds to the number of emotions detectible by the speech analyzer 116, or some subset thereof. The speech feature data may therefore include n different values, with each value representing a degree to which a different emotion of the n different emotions corresponds to the voice represented by the audio data.), wherein the inferred emotional classification is also determined based on a corresponding probability for the candidate emotional classification (Col 14 lines 5-10 For example, a first value that corresponds to “anger” may be a floating-point value between 0.0 and 1.0, where a lower value indicates a lower likelihood that the speaker of the analyzed speech is angry, and a higher value indicates a higher likelihood that the speaker of the analyzed speech is angry.). 
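For context only, the mechanism cited above (n per-emotion likelihood values, each between 0.0 and 1.0, from which the inferred classification is drawn) might be sketched as follows. This is not part of the record; the label set, function name, and selection-by-maximum rule are assumptions for illustration, not anything disclosed in Balasubramaniam.

```python
# Illustrative sketch only: a per-emotion likelihood vector of the kind
# described at Col 13-14. The label set and all names are assumed.
EMOTIONS = ["anger", "despair", "joy", "neutral"]

def infer_emotional_classification(speech_feature_data):
    """Return the candidate emotion with the highest likelihood.

    speech_feature_data holds n floating-point values in [0.0, 1.0],
    one per detectable emotion; a higher value indicates a higher
    likelihood that the speaker exhibits that emotion.
    """
    if len(speech_feature_data) != len(EMOTIONS):
        raise ValueError("expected one value per detectable emotion")
    if not all(0.0 <= v <= 1.0 for v in speech_feature_data):
        raise ValueError("likelihoods must lie in [0.0, 1.0]")
    i = max(range(len(EMOTIONS)), key=speech_feature_data.__getitem__)
    return EMOTIONS[i], speech_feature_data[i]

# A high first value indicates the speaker is likely angry:
label, likelihood = infer_emotional_classification([0.82, 0.05, 0.02, 0.11])
# label == "anger", likelihood == 0.82
```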
Regarding Claims 3 and 13, Balasubramaniam as modified by Sanjay teaches the method of claim 2 and system of claim 12, further comprising: Balasubramaniam teaches determining, by the emotion analysis circuitry and using the emotional intelligence machine learning model, a probability for one or more candidate core emotions based on the one or more user characteristics (Col 14 lines 5-10 For example, a first value that corresponds to “anger” may be a floating-point value between 0.0 and 1.0, where a lower value indicates a lower likelihood that the speaker of the analyzed speech is angry, and a higher value indicates a higher likelihood that the speaker of the analyzed speech is angry. Col 3 lines 11-17 The customer service system may also analyze the speech data using a speech analysis subsystem to determine an emotion of the speaker from the audio characteristics of their speech (e.g., pitch) (user characteristic)). Balasubramaniam teaches wherein determining the probability for the one or more candidate emotional classifications is based on the probability determined for the one or more candidate core emotions. (Col 3 lines 11-17 The customer service system may also analyze the speech data using a speech analysis subsystem to determine an emotion of the speaker from the audio characteristics of their speech (e.g., pitch). The customer service system can use the determined sentiment and emotion to generate a state score representative of a current state of the speaker (e.g., a customer state score for a customer). The particular state that is identified and represented by the state score may be a degree of stress or an emotional state. Col 13 lines 20-25 the speech analyzer 116 can analyze the feature vectors with respect to a model trained to classify the feature vectors into particular classifications, such as classifications associated with particular emotions. 
Col 13 lines 58-64 the speech feature data may include a particular value, such as an index that corresponds to a particular emotion within a range of emotions. In some embodiments, the speech feature data may include multiple individual values, such as a separate value for each of multiple emotions. Col 13 lines 65-67, Col 14 lines 1-10 The speech feature data may therefore include n different values, with each value representing a degree to which a different emotion of the n different emotions corresponds to the voice represented by the audio data. The specific values may represent likelihoods that the speech is representative of corresponding emotions. For example, a first value that corresponds to “anger” may be a floating-point value between 0.0 and 1.0, where a lower value indicates a lower likelihood that the speaker of the analyzed speech is angry, and a higher value indicates a higher likelihood that the speaker of the analyzed speech is angry.) Regarding Claims 4 and 14, Balasubramaniam as modified by Sanjay teaches the method of claim 2 and system of claim 12, Balasubramaniam teaches wherein the one or more user characteristics comprises one or more of a user facial expression, user body language, a user gesture, a user voice tone, a user voice volume, a user speech speed, user speech patterns, user eye contact behavior, user speech text, or user physiological responses. (Col 13 lines 28-36 The speech analyzer 116 may use different models in different contexts. In this way, the speech analyzer 116 may accurately determine speech features (e.g., emotion classifications) across different contexts, when a general model may not accurately determine speech features in specific contexts. The different models may be designed for analyzing speech that has different speech characteristics, such as speaking rate, prosody, pitch, volume, language, dialect, accent, some combination thereof, etc.). 
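For context only, the multi-model behavior cited above (a context-specific model chosen per speech characteristics, with a general model when no specific model fits) might be sketched as follows. The keys, model names, and function are assumptions for illustration, not anything disclosed in the reference.

```python
# Illustrative sketch only: context-dependent model selection of the
# kind described for speech analyzer 116. All names are assumed.
MODELS = {
    ("en", "fast"): "model_en_fast",
    ("en", "slow"): "model_en_slow",
    ("es", "fast"): "model_es_fast",
}
GENERAL_MODEL = "model_general"

def select_model(language, speaking_rate):
    """Pick a model matching the speech context; fall back to general."""
    return MODELS.get((language, speaking_rate), GENERAL_MODEL)
```

Under these assumptions, a matching context returns its dedicated model, and any unlisted combination (say, French speech) falls back to the general model.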
Regarding Claims 5 and 15, Balasubramaniam as modified by Sanjay teaches the method of claim 1 and system of claim 11, further comprising: Balasubramaniam teaches determining, by the guidance circuitry and using the guidance machine learning model, one or more candidate actions (Col 8 lines 34-44 FIG. 2A, the interface 200 may include a recommended interaction section 208 that presents information representing interactions identified by the contact manager 110. For example, the recommended interaction section 208 may present a question or comment that is recommended by the contact manager 110, such as a question intended to elicit from the customer information that may help the agent in improving the experience for the customer (e.g., as represented by a subsequent customer state score). As another example, the recommended interaction section 208 may present a workflow or action that is recommended by the contact manager 110, such as an offer of a discount or other concession, or a transfer of the contact to another agent.); Balasubramaniam teaches determining, by the guidance circuitry and using the guidance machine learning model, an inferred emotional responsiveness classification for each of the one or more candidate actions (Col 16 lines 14-33 the interaction manager 406 may analyze a most-recently generated customer state score 402 with respect to a model or a set of rules that correlates state scores to recommended interactions. If the customer state score 402 satisfies a threshold or matches some other criterion, alone or in combination with other data, then the interaction manager 406 may identify a recommended interaction. 
For example, if the customer state score 402 is indicative of a particular emotional state, such as anger or despair, then the interaction manager 406 may determine that a particular interaction is to be recommended to the agent handling the customer contact, such as a statement or question designed to elicit a response from the customer that improves the customer's emotional state or otherwise improves the likelihood of a positive resolution); and Balasubramaniam teaches selecting, by the guidance circuitry and using the guidance machine learning model, at least one of the one or more candidate actions based on a comparison between the inferred emotional classification and the inferred emotional responsiveness classification for each of the one or more candidate actions, wherein the emotionally intelligent interaction guidance comprises the selected one or more candidate actions. (Col 16 lines 27-33 For example, if the customer state score 402 is indicative of a particular emotional state, such as anger or despair, then the interaction manager 406 may determine that a particular interaction is to be recommended to the agent handling the customer contact, such as a statement or question designed to elicit a response from the customer that improves the customer's emotional state or otherwise improves the likelihood of a positive resolution. The recommended interaction data 420 may represent this interaction (e.g., the recommended interaction data 420 may represent a string of words to be spoken by the agent). The recommended interaction data 420 may be predetermined data (e.g., curated data provided by system administrators) that is stored in and retrieved from an interactions data store 412 accessible to the interaction manager 406, Col 16 lines 45-55 the interaction manager 406 may use multiple customer state scores 402, such as a set of the n most recently generated customer state scores 402 for the current customer contact. 
Therefore, the interaction manager 406 can use a model or set of rules that correlates recommended interactions (candidate action) with changes in customer state scores (emotional classification) over time.) Regarding Claims 7 and 17, Balasubramaniam as modified by Sanjay teaches the method of claim 1 and system of claim 11, further comprising: Balasubramaniam teaches for a duration of a user interaction event: receiving, by the communications hardware, updated media pertaining to the user (Col 9 lines 53-62 At block 304, the contact manager 110 or some other module or component of the dynamic contact management system 100 can receive audio data associated with a current customer contact. The audio data may represent a voice of a customer and/or a voice of an agent. The audio data may be received as a continuous stream (e.g., in substantially real-time during a customer contact)); Balasubramaniam teaches determining, by the emotion analysis circuitry and using the emotional intelligence machine learning model, an updated inferred emotional classification for the user based on the received updated media (Col 16 lines 48-54 the interaction manager 406 may use multiple customer state scores 402, such as a set of the n most recently generated customer state scores 402 for the current customer contact., Col 3 lines 33-45 A visual display may be generated that summarizes the various scores such that changes and trends in the scores can be identified. For example, an agent may be presented with a visual display of customer state scores during a customer contact so that the agent can be informed of the current state of the customer and any changes in the customer's state. As another example, the different customer state scores can be correlated by time with the specific words or other interactions that occurred at the time so that possible causes for changes in state can be identified. 
For example, if a customer state score indicates a low level of satisfaction with the customer contact, then the customer service system can recommend certain questions or statements that may uncover the cause of the low level of satisfaction or aid in raising the level of satisfaction.); Balasubramaniam teaches generating, by the guidance circuitry and using the guidance machine learning model, updated emotionally intelligent interaction guidance based on the updated inferred emotional classification (Col 16 lines 48-54 the interaction manager 406 may use multiple customer state scores 402, such as a set of the n most recently generated customer state scores 402 for the current customer contact. Therefore, the interaction manager 406 can use a model or set of rules that correlates recommended interactions with changes in customer state scores over time.); and providing, by the communications hardware, the updated emotionally intelligent interaction guidance to the entity device. (Col 16 lines 20-35 if the customer state score 402 is indicative of a particular emotional state, such as anger or despair, then the interaction manager 406 may determine that a particular interaction is to be recommended to the agent handling the customer contact, such as a statement or question designed to elicit a response from the customer that improves the customer's emotional state or otherwise improves the likelihood of a positive resolution. The recommended interaction data 420 may represent this interaction (e.g., the recommended interaction data 420 may represent a string of words to be spoken by the agent). 
The recommended interaction data 420 may be predetermined data (e.g., curated data provided by system administrators) that is stored in and retrieved from an interactions data store 412 accessible to the interaction manager 406., Col 7 lines 52-56 the contact manager 110 may use the customer state score to update a graphical user interface that is presented on the agent device 104.) Regarding Claims 8 and 18, Balasubramaniam discloses the method of claim 1 and system of claim 11, further comprising: Balasubramaniam teaches causing, by the guidance circuitry, one or more changes within the environment based on the inferred emotional classification. (Col 7 lines 51-60 the contact manager 110 may use the customer state score to update a graphical user interface that is presented on the agent device 104. For example, the contact manager 110 may request or otherwise obtain customer state scores on a continuous or periodic basis during the customer contact. The contact manager 110 may generate graphical user interface data that causes a visual summary of customer state scores to be displayed on the agent device). Balasubramaniam does not specifically teach one or more changes within the physical environment based on the inferred emotional classification. Sanjay teaches one or more changes within the physical environment based on the inferred emotional classification (Col 12 lines 17-30 The emotion heat map data may be mapped to and presented as an overlay on a map or planogram representation of the physical environment to allow a user (e.g., a manager or planner of the environment) to gain a better understanding of how characteristics of various locations within the physical environment may affect consumers' emotions in positive or negative ways. 
The users may then use the information presented in the emotion heat map to make adjustments to the characteristics of the physical environment (e.g., the positioning of furniture or products on display, the volume of music output at speakers within particular points in the environment, the types of artwork, advertising, video, or other presentations provided in certain locations), Col 12 lines 32-45 an emotion heat map may be utilized to trigger one or more actuators (e.g., 115a) to automate adjustments to environmental characteristics within an environment. For instance, based on trends described in the emotion heat map, the temperature within the environment may be controlled, music selection or volume may be changed (e.g., to try to counter or otherwise influence the emotions being experienced within the environment), change the type of images or video being displayed by a display device within the environment (to affect a change or reinforce emotions being detected within a location in the environment where the display is likely being seen and affecting emotions of viewers), Col 16 lines 13-20 generate human-readable notifications, or trigger other actions, which may allow a manager of the physical environment to initiate or respond (in an automated or manual manner) to address a situation evidenced by the emotion heat map (e.g., interact with the customers, adjust ambient characteristics of designated regions of the environment, etc.)). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included circuitry configured to cause one or more changes within the physical environment based on the inferred emotional classification, as disclosed by Sanjay in the system disclosed by Balasubramaniam, for the motivation of providing a method of aggregating customer emotion information embodied in emotion heat map data that can be compared and transformed into business insights analytics, which may be used to ease subsequent business decisions. (Col 15 lines 50-54 Sanjay) Regarding Claims 9 and 19, Balasubramaniam as modified by Sanjay teaches the method of claim 1 and system of claim 11, further comprising: Balasubramaniam teaches determining, by the event detection circuitry, a user identity of the user based on the received media (Col 7 lines 5-12 Information regarding the nature of the contact may also be provided to the dynamic contact management system 100. Illustratively, metadata may be associated with the communication, such as an account identifier for the customer, information identifying the item or issue about which the customer is calling, etc. In one specific, non-limiting example, the contact may be initiated as a voice call from the customer device 102); and identifying, by the event detection circuitry, a user account for the user based on the user identity, wherein (a) the user account includes one or more of user preferences, user life events, or historical user interaction events (Col 8 lines 14-25 the contact manager 110 may analyze customer state scores using a model that correlates customer state scores to recommended interactions. The model may use, as input, one or more customer state scores, agent state scores, text data representing text of customer utterances, historical customer account data regarding prior customer contacts associated with this customer, other information, some combination thereof, etc. 
The contact manager 110 may continuously or periodically analyze the customer state scores and other input using the model.) and (b) the recommended action is generated based on the user account. (Col 8 lines 25-30 When a recommended interaction is identified, the contact manager 110 may generate graphical user interface data that causes a representation of the recommended interaction to be displayed on the agent device 104., Col 8 lines 50-64 the contact manager 110 may analyze customer state scores using a model that correlates customer state scores to recommended evaluation questions. The model may use, as input, one or more customer state scores, agent state scores, text data representing text of customer and/or agent utterances, historical customer account data regarding prior customer contacts associated with this customer, other information, some combination thereof, etc. The contact manager 110 may determine, using the model, which evaluation questions to invite the customer to answer, or whether to invite the customer to provide an evaluation at all.) Regarding Claim 10, Balasubramaniam as modified by Sanjay teaches the method of claim 1, Balasubramaniam teaches wherein the recommended action comprises instructions to provide one or more verbal cues, physical cues, or auditory cues to the user. (Col 8 lines 31-44 FIG. 2A, the interface 200 may include a recommended interaction section 208 that presents information representing interactions identified by the contact manager 110. For example, the recommended interaction section 208 may present a question or comment that is recommended by the contact manager 110, such as a question intended to elicit from the customer information that may help the agent in improving the experience for the customer (e.g., as represented by a subsequent customer state score). 
As another example, the recommended interaction section 208 may present a workflow or action that is recommended by the contact manager 110, such as an offer of a discount or other concession, or a transfer of the contact to another agent., Col 16 lines 19-30 For example, if the customer state score 402 is indicative of a particular emotional state, such as anger or despair, then the interaction manager 406 may determine that a particular interaction is to be recommended to the agent handling the customer contact, such as a statement or question designed to elicit a response from the customer that improves the customer's emotional state or otherwise improves the likelihood of a positive resolution. The recommended interaction data 420 may represent this interaction (e.g., the recommended interaction data 420 may represent a string of words to be spoken by the agent). (verbal cues)). Claims 6, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Balasubramaniam (US 10,896,428 B1) in view of Sanjay et al. (US 10,558,862 B2) as applied to claims 1 and 11, further in view of Serna (US 10,999,435 B1). 
Regarding Claims 6 and 16, Balasubramaniam as modified by Sanjay teaches the method of claim 1 and system of claim 11, further comprising: Balasubramaniam teaches recommendation/guidance indicative of transferring to another agent (Col 8 lines 34-44 the recommended interaction section 208 may present a workflow or action that is recommended by the contact manager 110, such as an offer of a discount or other concession, or a transfer of the contact to another agent. (escalation)); However, Balasubramaniam/Sanjay do not specifically teach determining, by the guidance circuitry and using the guidance machine learning model, an escalation event based on the inferred emotional classification for the user, wherein the emotionally intelligent interaction guidance is further indicative of the escalation event; generating, by the guidance circuitry, an escalation alert indicative of the escalation event; and providing, by the communications hardware, the escalation alert, to a second entity device different than the entity device. Serna teaches determining, by the guidance circuitry and using the guidance machine learning model, an escalation event based on the inferred emotional classification for the user (Col 4 lines 32-38 if the customer is unhappy (sentiment analysis) and the customer is asking questions about financial instruments (topic analysis) it may be beneficial to route the unhappy customer to a financial advisor or portfolio manager with many years of experience. Sending an unhappy customer to a new financial advisor could further frustrate the customer. Col 8 lines 56-62 a visual interface accordingly to the embodiments simplifies analysis and enables users to more quickly address requests—which may be particularly significant when those requests include negative sentiment. Further—the visual interface can be used to automatically trigger response(s) to such detected sentiment(s) or sentiment trends. 
Such responses may remediate alert conditions and/or correct sentiment issues preferably simultaneously to the display of such conditions. Such responses may alternatively include augmenting positive results obtained from requests associated with positive sentiment), wherein the emotionally intelligent interaction guidance is further indicative of the escalation event (Col 9 lines 1-8 entity may be rerouted for response by the third individual, Col 14 lines 20-25 At 1112, escalation option is shown. This escalation option preferably enables a support center employee to escalate a matter to support center middle management.); generating, by the guidance circuitry, an escalation alert indicative of the escalation event (Fig 11 #1112 escalation option, Col 14 lines 20-25 At 1112, escalation option is shown. This escalation option preferably enables a support center employee to escalate a matter to support center middle management. Support center middle management may include one or more support center managers 1114 (shown as Manager A, M.A., and Manager B, M.B.).); and providing, by the communications hardware, the escalation alert, to a second entity device different than the entity device (Col 14 lines 28-35 Hierarchy 1108 visually indicates that calls may be routed from call-in devices 1102 to support request routing engine 1104. From support request routing engine 1104 calls may be routed to one of the support center employees A-C, or auto-response systems such as auto-response system 1106. In certain exceptional situations, calls may be routed from engine 1104 directly to a manager 1114 (M.B.).). 
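For context only, the routing behavior Serna is cited for (a request with negative sentiment on a financial-instruments topic routed to an experienced handler, with an escalation alert sent to a second device rather than the agent device) might be sketched as follows. The labels, names, and return shape are assumptions for illustration, not anything disclosed in Serna.

```python
# Illustrative sketch only: sentiment/topic-based escalation routing of
# the kind described at Col 4 of Serna. All names are assumed.
def route_request(sentiment, topic):
    """Route a support request; escalate unhappy financial queries."""
    if sentiment == "negative" and topic == "financial instruments":
        # Escalation event: route to an experienced handler and send an
        # alert to a second (manager) device, not the agent device.
        return {"handler": "experienced_advisor", "alert_device": "manager"}
    # No escalation: normal routing, no alert to a second device.
    return {"handler": "next_available_agent", "alert_device": None}
```

Under these assumptions, only the combination of negative sentiment and the sensitive topic triggers the escalation path; all other requests follow ordinary routing.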
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included determining, by the guidance circuitry and using the guidance machine learning model, an escalation event based on the inferred emotional classification for the user, wherein the emotionally intelligent interaction guidance is further indicative of the escalation event; generating, by the guidance circuitry, an escalation alert indicative of the escalation event; and providing, by the communications hardware, the escalation alert, to a second entity device different than the entity device, as disclosed by Serna in the system disclosed by Balasubramaniam/Sanjay, for the motivation of providing a method of using sentiment analysis to improve the accuracy and efficiency of responding to customer support requests. (Col 4 lines 19-22 Serna) Response to Arguments Applicant's arguments filed 2/18/26 have been fully considered but they are not persuasive. Applicant’s arguments with respect to the § 103 rejection of the claims have been considered. The new limitations have been addressed in the rejection above. Sanjay teaches circuitry configured to determine that a user has entered a physical environment (Col 10 lines 16-20, 28-36 For instance, within a retail store, the locations and movement of particular consumers within the retail store environment may be detected and tracked, using sensor data 235b as inputs, such as video sensor data (and other image data), infrared (IR) sensor data, voice sensor data. These recognition techniques may be combined with localization processing to determine when a particular person enters or exits a particular area, or location, within the store, as well as how and where the particular person moves while in the store, among other example results. 
Accordingly, an example localization engine 250 may generate data to detect a particular person and that person's location(s) detected within a physical environment.); Regarding the 101 rejection, applicant states that the claims do not recite a judicial exception and recite significantly more. Examiner has considered all arguments and respectfully disagrees. The claims are directed to providing emotionally intelligent interaction guidance to a user which, under its broadest reasonable interpretation, covers performance of the limitation of organizing human activity (commercial activity); but for the recitation of generic computer components, the claims recite an abstract idea. The judicial exception is not integrated into a practical application because the claim merely describes how to generally “apply” the concept of receiving data, analyzing it, and providing guidance. In particular, the claims only recite the additional elements – an event detection circuitry, communications hardware, an emotion analysis circuitry, an emotional intelligence machine learning model, a guidance circuitry, using a guidance machine learning model, an entity device. The additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Further, the limitation of “using the emotionally intelligent/guidance machine learning model” is simply application of a computer model, itself an abstract idea. Furthermore, such applying of a model is no more than putting data into a black box machine learning operation, devoid of technological implementation and application details. Each step requires a generic computer to perform generic computer functions. Simply implementing the abstract idea on generic components is not a practical application of the abstract idea. 
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Regarding applicant's remarks on pages 15-16, that the claims provide a technical solution to a problem faced by frontline agents, provide improvements in workplace safety, and recite additional elements that extend in a meaningful way beyond generally linking the abstract idea to a particular technical environment: all arguments have been considered. Examiner respectfully disagrees. Applicant’s specification does not provide any further detail regarding how the claim set achieves such an improvement (workplace safety). MPEP 2106.05(a) recites “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement.” After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316, 120 USPQ2d 1353, 1359 (patent owner argued that the claimed email filtering system improved technology by shrinking the protection gap and mooting the volume problem, but the court disagreed because the claims themselves did not have any limitations that addressed these issues). That is, the claim must include the components or steps of the invention that provide the improvement described in the specification. Examiner notes that neither the specification nor the claims recite how the asserted improvement or processor efficiency is achieved. 
The instant claims are directed to an abstract idea and do not integrate the abstract idea into a practical application. The additional elements recited in the instant claims are only generic computing components that implement the abstract idea in a computing environment. As such, the instant claims can at most be interpreted as making the abstract idea more efficient; there are no actual changes or improvements to any computing components. The claims are wholly directed to the abstract idea. Furthermore, the system is not a specialized computing device, as it merely uses generic computing components that execute instructions to perform the abstract idea. Such a device may be programmed to perform any abstract idea and is not a particular machine.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Rath (US 2021/0328888) discloses providing knowledge and tools for solving customer problems to support agents; by doing so, a support organization enables these support agents to feel satisfied with their work, thus reducing churn in support agents and the need to spend the support organization's resources training many new support agents.

Di Prinzio (US 11,763,239) discloses receiving user streaming interaction data (USID) and determining a context of the interaction. Processed stream data is stored, and emotion information (EI) is created by analyzing the processed stream data, producing an EI-tagged interaction record. An overall first EI value (EIV) reflects a value of an EI at a first time, and an overall second EIV reflects a value of an EI at a second time.

Bortis (US 2024/0330942) (EFD 2023-03-29) discloses detecting the presence of a customer at a geographic location associated with an institution. Sentiment of the customer can be determined prior to interaction of the customer with the institution based on analysis of a characteristic of the customer.
A recommendation can be generated for interacting with the customer based on the sentiment.

Deluca (US 9,852,459) discloses analyzing customer communications to provide better customer service, including: generating customer-related data from communications of a customer by at least one sensing device located in a venue; transmitting the generated customer-related data to an analysis engine; determining, based on an analysis of the customer-related data, a customer experience; identifying at least one suggestion to provide better customer service, including identification of a representative of the venue that the analysis engine has determined can assist the customer; and receiving, by a computing device associated with the representative of the venue, the at least one suggestion from the analysis engine, including an indication that the representative should assist the customer. (Fig. 3)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGEETA BAHL, whose telephone number is (571) 270-7779. The examiner can normally be reached 7:30 AM - 4:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jessica Lemieux, can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SANGEETA BAHL/
Primary Examiner, Art Unit 3626

Prosecution Timeline

Apr 16, 2024
Application Filed
Jul 25, 2025
Non-Final Rejection — §101, §103
Oct 27, 2025
Applicant Interview (Telephonic)
Oct 28, 2025
Examiner Interview Summary
Oct 29, 2025
Response Filed
Nov 15, 2025
Final Rejection — §101, §103
Dec 17, 2025
Interview Requested
Feb 18, 2026
Request for Continued Examination
Feb 25, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591914
REAL-TIME COLLATERAL RECOMMENDATION
2y 5m to grant Granted Mar 31, 2026
Patent 12548099
SYSTEMS AND METHODS FOR PRIORITIZED FIRE SUPPRESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12524739
CREATING AND USING TRIPLET REPRESENTATIONS TO ASSESS SIMILARITY BETWEEN JOB DESCRIPTION DOCUMENTS
2y 5m to grant Granted Jan 13, 2026
Patent 12482304
SYSTEM AND A METHOD FOR AUTHENTICATING INFORMATION DURING A POLICE INQUIRY
2y 5m to grant Granted Nov 25, 2025
Patent 12450617
LEARNING FOR INDIVIDUAL DETECTION IN BRICK AND MORTAR STORE BASED ON SENSOR DATA AND FEEDBACK
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
21%
Grant Probability
40%
With Interview (+19.3%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
