Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,896

SYSTEMS AND METHODS FOR PREDICTING AND PREVENTING SOCIAL ENGINEERING SCAMS IN REAL TIME

Current Status: Non-Final OA (§103)
Filed: Sep 07, 2023
Examiner: RUSS, COREY V
Art Unit: 3629
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: JPMorgan Chase Bank, N.A.
OA Round: 3 (Non-Final)

Grant Probability: 26% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 0m
Grant Probability with Interview: 67%

Examiner Intelligence

Career Allow Rate: 26% (44 granted / 166 resolved; -25.5% vs Tech Center average)
Interview Lift: +40.9% among resolved cases with an interview
Typical Timeline: 3y 0m average prosecution; 38 applications currently pending
Career History: 204 total applications across all art units

Statute-Specific Performance

§101: 43.5% (+3.5% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 4.5% (-35.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 166 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The following is a non-final Office action. Claims 1-3, 5-7, 10, 12, 14-15, and 17-20 are currently pending and have been examined on their merits. Claims 1, 10, 14, and 20 are newly amended (see REMARKS filed December 22, 2025).

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 30, 2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-7, 14-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Soryal (US 2022/0294899) in view of Pascual (US 2025/0111046), further in view of Rodriguez Bravo (US 2025/0028855).

Claims 1 and 14: Soryal discloses (claim 1) a method for predicting social engineering scams in real time, comprising: (claim 14) a non-transitory computer readable storage medium, including instructions stored thereon, which when read and executed by one or more computer processors, cause the one or more computer processors to perform steps comprising:

monitoring, by a computer program executed by a user electronic device for a user, an application executed by the user electronic device (Paragraph [0004-0005]; [0022]; [0053]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user. In one example, each of the user endpoint devices may comprise any single device or combination of devices that may comprise a user endpoint device. For example, the user endpoint device may comprise a smart device, an application server, and the like. The method may be able to monitor a user's interactions with other parties for events that may signify the possible disclosure of sensitive user data);

identifying, by the computer program, a communication from a second electronic device received by the application (Paragraph [0004-0005]; [0022]; [0053]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user. In one example, each of the user endpoint devices may comprise any single device or combination of devices that may comprise a user endpoint device. For example, the user endpoint device may comprise a smart device, an application server, and the like. The method may be able to monitor a user's interactions with other parties for events that may signify the possible disclosure of sensitive user data);

extracting, by the computer program and using a machine learning engine that is trained with historical data to predict scams, a pattern from the communication (Paragraph [0004-0005]; [0057]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user. The processing system may flag any questions that seem unusually intrusive (e.g. based on a comparison to questions asked in other similar calls with the same or other users, which may be analyzed using machine learning techniques to learn what types of questions are to be expected));

comparing, by the computer program, the pattern and the sentiment to scam patterns in a local scam database (Paragraph [0004-0005]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user);

generating, by the computer program, an alert in response to the pattern and sentiment matching one of the scam patterns (Paragraph [0004-0005]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user);

causing, by the computer program, the user electronic device to vibrate in response to the alert (Paragraph [0004-0005]; [0049]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user. The alert may comprise a visual alert, an audible alert, or a tactile alert (i.e. a vibration or rumble)).

Soryal discloses a system of protecting user data from potential risks by monitoring communications between a user and another party. However, Soryal does not disclose the following claim limitations: identifying, by the computer program and using the machine learning engine, a sentiment of content of the communication; notifying, by the computer program, a backend for the computer program of the alert; and locking, by the backend, an account associated with the user so that the account cannot be accessed.
In the same field of endeavor of detecting potential fraud in a communication, Pascual teaches identifying, by the computer program and using the machine learning engine, a sentiment of content of the communication (Paragraph [0008]; [0054-0055]; [0064]; [0069], a system for scam detection and prevention is disclosed. The system includes one or more processors that, when executed, cause the system to perform operations comprising: receiving a communication comprising communication information; parsing the received communication information to extract attributes of the received communication information; performing a series of deterministic checks on the attributes; performing a series of probabilistic analyses on the attributes, wherein the probabilistic analyses comprise using machine learning models; aggregating the results to generate a scam risk score; and presenting the scam risk score and the recommendation to a user. The scam detector subsystem may include a scam monitor or module. Data is analyzed and compared with a database of historical communications in the data warehouse to determine if the user is corresponding with a scammer. Natural language processing may be used, where the scam detector subsystem may be trained with known scam communications and used to detect conversations showing malicious intent within a channel. Using natural language processing and machine learning techniques, the scam detector subsystem identifies deceptive language, fraudulent offers, or other hidden red flags. Each component of a communication may be analyzed such that the component is compared with internal and external databases, or otherwise attempts are made to detect anomalies to establish legitimacy or malicious intent. Analysis and categorization of a message may be implemented using natural language processing).
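The sentiment-identification limitation for which Pascual is cited can be illustrated with a minimal sketch. This is not code from either reference: the function name, the cue lexicons, and the two-cue threshold are illustrative assumptions standing in for a trained NLP model.

```python
import re

# Hypothetical cue lexicons; a real system would use a trained sentiment model.
URGENCY_CUES = {"immediately", "urgent", "now", "suspended", "verify"}
FEAR_CUES = {"arrest", "police", "fine", "locked", "fraud"}

def score_sentiment(message: str) -> dict:
    """Score a communication for the urgency/fear cues common in scams."""
    tokens = set(re.findall(r"[a-z']+", message.lower()))
    urgency = len(tokens & URGENCY_CUES)
    fear = len(tokens & FEAR_CUES)
    # Illustrative rule: two or more cues marks the message as high-pressure.
    label = "high-pressure" if urgency + fear >= 2 else "neutral"
    return {"urgency": urgency, "fear": fear, "label": label}

result = score_sentiment(
    "Your account is suspended. Verify your PIN immediately or face a fine."
)
```

A lexicon scorer only approximates the claimed machine-learning sentiment engine, but it makes the data flow concrete: the sentiment label produced here is what the claim then compares, together with the extracted pattern, against the scam database.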
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of detecting potential scam attempts during a communication attempt by monitoring the communication and comparing the communication data with historical data using a machine learning model, as disclosed by Soryal (Soryal [0004]), with the system of identifying, by the computer program and using the machine learning engine, a sentiment of content of the communication, as taught by Pascual (Pascual [0055]), with the motivation of helping to prevent scam attempts (Pascual [0005]).

In the same field of endeavor of detecting and preventing fraud during a communication, Rodriguez Bravo teaches notifying, by the computer program, a backend for the computer program of the alert (Paragraph [0003]; [0031]; [0041]; [0050]; Fig. 3, embodiments of the present invention are directed to computer-implemented methods for preventing scams in real time in an interactive communication environment. A method includes determining, using a machine learning model, that at least one communication from a first user to a second user in an interactive communication environment includes a potential threat. The method includes determining that the potential threat is above a threshold. The software application can access the registration and authentication software in order to control access of the user accounts in user profiles for users. The software applications can perform actions and/or cause any actions to be performed for any of the user accounts in the user profiles when the computer determines a potential scam or attack directed to a user. In response to a potential threat, one or more software applications of the computer system are configured to perform one or more actions to prevent, block, and/or stop the potential threat from occurring. In accordance with one or more embodiments, the software application can cause many temporary security actions to be performed to block, disable, etc. one or more functions of the potential victim in the interactive communication environment. The one or more functions can be blocked for a predetermined period of time);

and locking, by the backend, an account associated with the user so that the account cannot be accessed (Paragraph [0003]; [0031]; [0041]; [0050]; Fig. 3, embodiments of the present invention are directed to computer-implemented methods for preventing scams in real time in an interactive communication environment. A method includes determining, using a machine learning model, that at least one communication from a first user to a second user in an interactive communication environment includes a potential threat. The method includes determining that the potential threat is above a threshold. The software application can access the registration and authentication software in order to control access of the user accounts in user profiles for users. The software applications can perform actions and/or cause any actions to be performed for any of the user accounts in the user profiles when the computer determines a potential scam or attack directed to a user. In response to a potential threat, one or more software applications of the computer system are configured to perform one or more actions to prevent, block, and/or stop the potential threat from occurring. In accordance with one or more embodiments, the software application can cause many temporary security actions to be performed to block, disable, etc. one or more functions of the potential victim in the interactive communication environment. The one or more functions can be blocked for a predetermined period of time).
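The notify-and-lock flow attributed to Rodriguez Bravo can be sketched as follows. The class and method names are hypothetical and not drawn from the reference; a production backend would add authentication, audit logging, a lock expiry, and an unlock path.

```python
from dataclasses import dataclass, field

@dataclass
class Backend:
    """Hypothetical backend that temporarily locks accounts on scam alerts."""
    locked: set = field(default_factory=set)

    def receive_alert(self, account_id: str, reason: str) -> None:
        # The client-side program notifies the backend of the alert;
        # the backend responds by locking the associated account.
        self.locked.add(account_id)

    def can_access(self, account_id: str) -> bool:
        return account_id not in self.locked

backend = Backend()
backend.receive_alert("acct-123", "pattern and sentiment matched scam signature")
```

After the alert, `backend.can_access("acct-123")` returns False, which is the claimed effect: the account cannot be accessed once the backend has been notified.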
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of detecting potential scam attempts during a communication attempt by monitoring the communication and comparing the communication data with historical data using a machine learning model, as disclosed by Soryal (Soryal [0004]), with the system of notifying, by the computer program, a backend for the computer program of the alert, and locking, by the backend, an account associated with the user so that the account cannot be accessed, as taught by Rodriguez Bravo (Rodriguez Bravo [0050]), with the motivation of helping to prevent and stop potential threats to a user account (Rodriguez Bravo [0003]).

Claim 2: Modified Soryal discloses the method as per claim 1. Soryal further discloses wherein the communication comprises a voice communication or a text communication (Paragraph [0004-0005]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user).

Claims 3 and 15: Modified Soryal discloses the method as per claim 2 and the non-transitory computer readable storage medium as per claim 14. Soryal further discloses further comprising (claim 15: further comprising instructions stored thereon, which when read and executed by one or more computer processors perform) generating, by the computer program, a transcription of the voice communication (Paragraph [0004-0005]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user).

Claims 6 and 18: Modified Soryal discloses the method as per claim 1 and the non-transitory computer readable storage medium as per claim 14. Soryal further discloses further comprising: determining, by the computer program, a risk score for the user; wherein the computer program generates the alert in response to the pattern and the sentiment matching one of the scam patterns and the risk score being above a threshold (Paragraph [0004-0005]; [0028]; [0045]; [0057]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user. The processing system may flag any questions that seem unusually intrusive (e.g. based on a comparison to questions asked in other similar calls with the same or other users, which may be analyzed using machine learning techniques to learn what types of questions are to be expected). In an example, the interaction may be considered likely to put the sensitive data of the user at risk if a level of match between the string of text and an entry in the library of known interactions at least meets a threshold, plus some other risk factors are present. The other risk factors may include extrinsic data about the interaction (whereas the context of the audio interaction, e.g. the utterances spoken, may comprise intrinsic data about the interactions). The other risk factor may comprise, for instance, an inability of the other party to respond or to respond satisfactorily to a challenge. The extrinsic data may include, for example, the time of day at which the call is received, the phone number from which the call is received, the company the caller alleged to be involved with, and/or other data).

Claims 7 and 19: Modified Soryal discloses the method as per claim 6 and the non-transitory computer readable storage medium as per claim 18. Soryal further discloses wherein the risk score is based on demographics of the user, a time of year, and a type of transaction involved in the communication (Paragraph [0004-0005]; [0028]; [0041]; [0045]; [0057]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user. The processing system may flag any questions that seem unusually intrusive (e.g. based on a comparison to questions asked in other similar calls with the same or other users, which may be analyzed using machine learning techniques to learn what types of questions are to be expected). In an example, the interaction may be considered likely to put the sensitive data of the user at risk if a level of match between the string of text and an entry in the library of known interactions at least meets a threshold, plus some other risk factors are present. The other risk factors may include extrinsic data about the interaction (whereas the context of the audio interaction, e.g. the utterances spoken, may comprise intrinsic data about the interactions). The other risk factor may comprise, for instance, an inability of the other party to respond or to respond satisfactorily to a challenge. The extrinsic data may include, for example, the time of day at which the call is received, the phone number from which the call is received, the company the caller alleged to be involved with, and/or other data).

Claims 5 and 17: Modified Soryal discloses the method as per claim 1 and the non-transitory computer readable storage medium as per claim 14. However, Soryal does not disclose wherein the computer program compares the pattern to the scam patterns using vector distance matching.
In the same field of endeavor of detecting potential fraud in a communication, Pascual teaches wherein the computer program compares the pattern to the scam patterns using vector distance matching (Paragraph [0008]; [0054-0055]; [0064]; [0069], a system for scam detection and prevention is disclosed. The system includes one or more processors that, when executed, cause the system to perform operations comprising: receiving a communication comprising communication information; parsing the received communication information to extract attributes of the received communication information; performing a series of deterministic checks on the attributes; performing a series of probabilistic analyses on the attributes, wherein the probabilistic analyses comprise using machine learning models; aggregating the results to generate a scam risk score; and presenting the scam risk score and the recommendation to a user. The scam detector subsystem may include a scam monitor or module. Data is analyzed and compared with a database of historical communications in the data warehouse to determine if the user is corresponding with a scammer. Natural language processing may be used, where the scam detector subsystem may be trained with known scam communications and used to detect conversations showing malicious intent within a channel. Using natural language processing and machine learning techniques, the scam detector subsystem identifies deceptive language, fraudulent offers, or other hidden red flags. Each component of a communication may be analyzed such that the component is compared with internal and external databases, or otherwise attempts are made to detect anomalies to establish legitimacy or malicious intent. Analysis and categorization of a message may be implemented using natural language processing).
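The vector distance matching recited in claims 5 and 17 can be sketched with simple bag-of-words term-frequency vectors and cosine distance. The vocabulary, the single stored pattern, and the 0.5 threshold are illustrative assumptions and are not drawn from Pascual.

```python
import math
from collections import Counter

def cosine_distance(a: Counter, b: Counter) -> float:
    """1 - cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / (na * nb) if na and nb else 0.0)

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

# Hypothetical local scam database of pattern vectors.
SCAM_PATTERNS = [vectorize("verify your account immediately to avoid suspension")]

def matches_scam(text: str, threshold: float = 0.5) -> bool:
    """A communication matches when its vector distance to any stored
    scam pattern falls below the threshold."""
    v = vectorize(text)
    return any(cosine_distance(v, p) < threshold for p in SCAM_PATTERNS)
```

Production systems would typically use learned embeddings rather than raw term counts, but the matching step is the same: embed the extracted pattern, then compare distances against the stored scam vectors.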
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of detecting potential scam attempts during a communication attempt by monitoring the communication and comparing the communication data with historical data using a machine learning model, as disclosed by Soryal (Soryal [0004]), with the system wherein the computer program compares the pattern to the scam patterns using vector distance matching, as taught by Pascual (Pascual [0055]), with the motivation of helping to prevent scam attempts (Pascual [0005]).

Claim 20: Modified Soryal discloses the method as per claim 1. Soryal further discloses causing, by the computer program, the user electronic device to vibrate in response to the alert (Paragraph [0004-0005]; [0049]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user. The alert may comprise a visual alert, an audible alert, or a tactile alert (i.e. a vibration or rumble)).

Claims 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Soryal (US 2022/0294899) in view of Majdabadi (US 2023/0262159) further in view of Rodriguez Bravo (US 2025/0028855).

Claim 10: Soryal discloses a method for predicting social engineering scams in real time, comprising: monitoring, by a computer program executed by an agent electronic device for an agent, a voice communication with a user from a user electronic device (Paragraph [0004-0005]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user); and generating, by the computer program, an alert in response to the determination (Paragraph [0004-0005]; Fig. 1, systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user).

Soryal discloses a method of monitoring a user's communications to determine potential fraud.
However, Soryal does not disclose the following claim limitations: extracting, by the computer program, voice elements for the user from the voice communication; retrieving, by the computer program and from a voice signature database, a voice signature for the user; simulating, by the computer program, the voice of the user under duress using the voice signature; comparing, by the computer program, the simulated voice to the voice communication; determining, by the computer program and from the comparison, that the voice elements indicate duress; notifying, by the computer program, a backend for the computer program of the alert; and locking, by the backend, an account associated with the user so that the account cannot be accessed.

In the same field of endeavor of detecting and preventing fraud during a communication, Majdabadi teaches extracting, by the computer program, voice elements for the user from the voice communication (Paragraph [0010-0011]; [0020]; [0023], aspects of the present invention recognize that additional security measures in place between unidentified callers and customers of companies or businesses would significantly decrease the likelihood of customers falling victim to exploitative acts. Enhanced protection may be achieved by detecting an anomaly in a voice communication, determining whether an exploitation attempt is ongoing, and, if so, alerting the customer engaged on the voice call and/or a third party (e.g. a financial institution) to take some action to prevent the customer from making a decision under duress. Aspects of the present invention provide computer-implemented methods configured for establishing a baseline of scope. For example, an application scope may include normal health data. Aspects of the invention provide methods configured to process training data (e.g. voice call data, user data, communication data) to train machine learning models to identify call types (e.g. solicitation) and user stress levels (e.g. normal, baseline, elevated, heightened). Further, additional data may be gathered and added to the training data to continuously improve the learning algorithms and re-train the trained models to better estimate thresholds to determine when a user stress level has exceeded the threshold. Therefore, the predictive measure of the trained model may be configured to trigger an alert or notification to augment the user's critical decision-making process);

retrieving, by the computer program and from a voice signature database, a voice signature for the user (Paragraph [0010-0011]; [0020]; [0023], aspects of the present invention recognize that additional security measures in place between unidentified callers and customers of companies or businesses would significantly decrease the likelihood of customers falling victim to exploitative acts. Enhanced protection may be achieved by detecting an anomaly in a voice communication, determining whether an exploitation attempt is ongoing, and, if so, alerting the customer engaged on the voice call and/or a third party (e.g. a financial institution) to take some action to prevent the customer from making a decision under duress. Aspects of the present invention provide computer-implemented methods configured for establishing a baseline of scope. For example, an application scope may include normal health data. Aspects of the invention provide methods configured to process training data (e.g. voice call data, user data, communication data) to train machine learning models to identify call types (e.g. solicitation) and user stress levels (e.g. normal, baseline, elevated, heightened). Further, additional data may be gathered and added to the training data to continuously improve the learning algorithms and re-train the trained models to better estimate thresholds to determine when a user stress level has exceeded the threshold. Therefore, the predictive measure of the trained model may be configured to trigger an alert or notification to augment the user's critical decision-making process);

generating, by the computer program, a simulated voice of the user under duress using the voice signature (Paragraph [0010-0011]; [0020-0021]; [0023], aspects of the present invention recognize that additional security measures in place between unidentified callers and customers of companies or businesses would significantly decrease the likelihood of customers falling victim to exploitative acts. Enhanced protection may be achieved by detecting an anomaly in a voice communication, determining whether an exploitation attempt is ongoing, and, if so, alerting the customer engaged on the voice call and/or a third party (e.g. a financial institution) to take some action to prevent the customer from making a decision under duress. Aspects of the present invention provide computer-implemented methods configured for establishing a baseline of scope. For example, an application scope may include normal health data. Aspects of the invention provide methods configured to process training data (e.g. voice call data, user data, communication data) to train machine learning models to identify call types (e.g. solicitation) and user stress levels (e.g. normal, baseline, elevated, heightened). Further, additional data may be gathered and added to the training data to continuously improve the learning algorithms and re-train the trained models to better estimate thresholds to determine when a user stress level has exceeded the threshold. Therefore, the predictive measure of the trained model may be configured to trigger an alert or notification to augment the user's critical decision-making process. Aspects of the present invention provide computer-implemented methods for calculating real-time input data to compare to baseline health data and attributes to determine if user stress levels have exceeded a threshold);

comparing, by the computer program, the simulated voice to the voice communication (Paragraph [0010-0011]; [0020-0021]; [0023], aspects of the present invention recognize that additional security measures in place between unidentified callers and customers of companies or businesses would significantly decrease the likelihood of customers falling victim to exploitative acts. Enhanced protection may be achieved by detecting an anomaly in a voice communication, determining whether an exploitation attempt is ongoing, and, if so, alerting the customer engaged on the voice call and/or a third party (e.g. a financial institution) to take some action to prevent the customer from making a decision under duress. Aspects of the present invention provide computer-implemented methods configured for establishing a baseline of scope. For example, an application scope may include normal health data. Aspects of the invention provide methods configured to process training data (e.g. voice call data, user data, communication data) to train machine learning models to identify call types (e.g. solicitation) and user stress levels (e.g. normal, baseline, elevated, heightened). Further, additional data may be gathered and added to the training data to continuously improve the learning algorithms and re-train the trained models to better estimate thresholds to determine when a user stress level has exceeded the threshold. Therefore, the predictive measure of the trained model may be configured to trigger an alert or notification to augment the user's critical decision-making process.
Aspect of the present invention provide computer implemented methods for calculating real-time input data to compare to baseline health data and attributes to determine if user stress levels have exceeded a threshold); determining, by the computer program and from the comparison, that the voice elements indicate duress (Paragraph [0010-0011]; [0020]; [0023]; aspects of the present invention recognize that additional security measures in place between unidentified callers and customers of companies or businesses would significantly decrease the likelihood of customers falling victim to exploitative acts. Enhanced protection may be achieved by detecting an anomaly in a voice communication, determine if an exploitation attempt is ongoing, and if so, alert the customer engaged on the voice call and/or a third-party (e.g. financial institution) to take some action to prevent the customer from making a decision under duress. Aspects of the present invention prove computer-implemented methods configured for establishing a baseline of scope. For example, an applications cope may include normal health data. Aspects of the invention provide methods configured to process training data (e.g. voice call data, user data, communication data) to train machine learning models to identify call types (e.g. solicitation) and user stress levels (e.g. normal, baseline, elevated, heightened). Further, additional data may be gathered and added to the training data to continuously improve the learning algorithms and re-train the trained models to better estimate thresholds to determine when a user stress level exceeded the threshold. Therefore, the predictive measure of the trained model may be configured to trigger an alert or notification to augment the user’s critical decision-making process). 
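The Majdabadi teaching the examiner relies on (a baseline voice profile per user, live call features compared against it, and an alert when a stress threshold is exceeded) can be sketched roughly as below. This is a minimal illustration, not code from either reference; the feature set, names, and threshold value are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    """Voice features for a user (hypothetical schema for illustration)."""
    pitch_hz: float   # mean fundamental frequency
    energy: float     # mean frame energy
    rate_wpm: float   # speaking rate, words per minute

def stress_score(baseline: VoiceFeatures, live: VoiceFeatures) -> float:
    """Crude duress score: mean upward relative deviation from baseline.

    Elevated pitch, energy, and speaking rate are common stress markers,
    so only increases over the baseline contribute to the score.
    """
    deltas = [
        max(0.0, (live.pitch_hz - baseline.pitch_hz) / baseline.pitch_hz),
        max(0.0, (live.energy - baseline.energy) / baseline.energy),
        max(0.0, (live.rate_wpm - baseline.rate_wpm) / baseline.rate_wpm),
    ]
    return sum(deltas) / len(deltas)

STRESS_THRESHOLD = 0.15  # a real system would learn this from training data

def indicates_duress(baseline: VoiceFeatures, live: VoiceFeatures) -> bool:
    return stress_score(baseline, live) > STRESS_THRESHOLD

baseline = VoiceFeatures(pitch_hz=120.0, energy=0.50, rate_wpm=150.0)
live = VoiceFeatures(pitch_hz=150.0, energy=0.65, rate_wpm=185.0)
print(indicates_duress(baseline, live))  # → True (all three markers elevated)
```

A production system would extract these features with a speech pipeline and learn the threshold from labeled call data, as the cited paragraphs describe; the point here is only the compare-to-baseline-and-threshold structure.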
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify Soryal's system of detecting potential scam attempts during a communication by monitoring the communication and comparing the communication data with historical data using a machine learning model (Soryal [0004]) with Majdabadi's system of extracting, by the computer program, voice elements for the user from the voice communication; retrieving, by the computer program, a voice signature for the user; and determining, by the computer program, that the voice elements indicate duress by comparing the voice elements to the voice signature (Majdabadi [0025]), with the motivation of helping to prevent vulnerable targets from falling for scams (Majdabadi [0002]).

In the same field of endeavor of detecting and preventing fraud during a communication, Rodriguez Bravo teaches notifying, by the computer program, a backend for the computer program of the alert (Paragraph [0003]; [0031]; [0041]; [0050]; Fig. 3: embodiments of the present invention are directed to computer-implemented methods for preventing scams in real time in an interactive communication environment. A method includes determining, using a machine learning model, that at least one communication from a first user to a second user in an interactive communication environment includes a potential threat, and determining that the potential threat is above a threshold. The software application can access the registration and authentication software in order to control access to the user accounts in user profiles. The software applications can perform actions, and/or cause actions to be performed, for any of the user accounts when the computer determines a potential scam or attack is directed to a user. In response to a potential threat, one or more software applications of the computer system are configured to perform one or more actions to prevent, block, and/or stop the potential threat from occurring. In accordance with one or more embodiments, the software application can cause temporary security actions to be performed to block, disable, etc. one or more functions of the potential victim in the interactive communication environment. The one or more functions can be blocked for a predetermined period of time); and locking, by the backend, an account associated with the user so that the account cannot be accessed (Paragraph [0003]; [0031]; [0041]; [0050]; Fig. 3; citing the same passage reproduced above).

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of modified Soryal (Soryal [0004]) with Rodriguez Bravo's teaching of notifying, by the computer program, a backend for the computer program of the alert, and locking, by the backend, an account associated with the user so that the account cannot be accessed (Rodriguez Bravo [0050]), with the motivation of helping to prevent and stop potential threats to a user account (Rodriguez Bravo [0003]).

Claim 12: Modified Soryal discloses the method of claim 10. Soryal further discloses wherein the computer program communicates an alert to the user electronic device (Paragraph [0004-0005]; Fig. 1: systems for protecting user data during audio interactions. In an example, a method performed by the processing system includes detecting an audio signal that is part of an interaction between a user and another party, converting the audio signal into a string of text, detecting that the interaction is likely to put sensitive data of the user at risk based on a comparison of the string of text to a library of interactions that are known to put sensitive data at risk, and sending, in response to detecting that the interaction is likely to put the sensitive data of the user at risk, an alert to notify the user). Therefore, claims 1-3, 5-7, 10, 12, 14-15, and 17-20 are rejected under 35 U.S.C. 103.

Response to Arguments

Applicant's arguments, see REMARKS filed December 22, 2025, with respect to the rejection of claims 1-3, 5-7, 14-15, and 17-20 under 35 U.S.C. 103 as being unpatentable over Soryal (US 2022/0294899) in view of Pascual (US 2025/0111046) further in view of Rodriguez Bravo (US 2025/0028855), and of claims 10 and 12 under 35 U.S.C. 103 as being unpatentable over Soryal (US 2022/0294899) in view of Majdabadi (US 2023/0262159) further in view of Rodriguez Bravo (US 2025/0028855), are not persuasive, as the claims were amended, which required further search and consideration, and new art was applied.

Claims 1 and 14: The representative argues that the current combination of prior art does not disclose the newly amended claim limitation "identifying, by the computer program and using the machine learning engine, a sentiment of content of the communication." Upon review, the examiner finds that Soryal (US 2022/0294899) discloses a system for monitoring and protecting user data during audio interactions. Soryal discloses detecting an interaction between a user and another party and converting the audio signal into a string of text to detect whether the interaction would put sensitive user data at risk (Soryal [0004]). To perform this, Soryal discloses sending keywords extracted from the audio interaction to an endpoint device to be analyzed for the likelihood of risk (Soryal [0025-0026]). Soryal can be used in combination with Pascual, which teaches a system of automatic scam detection in user communications. Pascual teaches receiving user communications and using a plurality of techniques, such as machine learning and natural language processing, to compare communication attributes to historical information to generate a risk score (Pascual [0008]). The analysis taught by Pascual includes performing natural language processing on the content of a communication to determine the intent and disposition of the communication, such as malicious or "good or bad" (Pascual [0055]; [0059]).
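The Pascual-style analysis the examiner describes (content of a communication scored for intent/disposition to produce a risk score) reduces to something like the toy sketch below. The lexicons, weights, and cutoff are invented for illustration; the references teach trained NLP models, not keyword lists:

```python
# Toy intent/disposition scoring over communication text.
# URGENCY_TERMS and PRESSURE_TERMS are hypothetical lexicons, not from
# Pascual; a production system would use a trained NLP classifier.
URGENCY_TERMS = {"immediately", "urgent", "final notice"}
PRESSURE_TERMS = {"gift card", "wire", "do not tell", "arrest"}

def risk_score(text: str) -> float:
    """Score a message in [0, 1]; higher means more scam-like."""
    lowered = text.lower()
    hits = sum(term in lowered for term in URGENCY_TERMS)
    hits += 2 * sum(term in lowered for term in PRESSURE_TERMS)  # pressure cues weighted up
    return min(1.0, hits / 5)

def disposition(text: str) -> str:
    """Map the risk score to a coarse 'good or bad' disposition."""
    return "malicious" if risk_score(text) >= 0.6 else "benign"

msg = "Act immediately: buy a gift card and do not tell anyone."
print(disposition(msg))  # → malicious
```

The structure (attributes extracted from content, a score against a threshold, a disposition label) is what matters for the claim mapping; any real model would replace the keyword heuristic.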
The examiner notes that the broadest reasonable interpretation of performing sentiment analysis on communications would include using natural language processing to determine the intent and disposition of a communication when determining a potential risk score for a possible scam. Therefore, the examiner finds that the combination of Soryal and Pascual teaches a system for receiving and extracting user information, such as interaction information, and performing analysis, such as sentiment analysis, on that information to determine potentially fraudulent activity.

Claims 10 and 12: The representative further argues that the current combination of prior art does not disclose the claim limitations of claims 10 and 12. Upon review, the examiner finds that Soryal (US 2022/0294899) discloses a system for monitoring and protecting user data during audio interactions. Soryal discloses detecting an interaction between a user and another party and converting the audio signal into a string of text to detect whether the interaction would put sensitive user data at risk (Soryal [0004]). To perform this, Soryal discloses sending keywords extracted from the audio interaction to an endpoint device to be analyzed for the likelihood of risk (Soryal [0025-0026]). Soryal is used in combination with Majdabadi (US 2023/0262159), which teaches a system for detecting an anomaly during a voice communication between a user and a third party (Majdabadi [0011]). To accomplish this, Majdabadi teaches training a machine learning model to understand a user's voice under various stress levels (e.g., normal, baseline, elevated, heightened) and comparing the current interaction data with baseline data to determine whether the user is under stress (Majdabadi [0021]). Majdabadi further teaches sending out an alert when the user's stress levels exceed a threshold to aid the user's critical decision-making process (Majdabadi [0023]).
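The threshold estimation Majdabadi is cited for (a threshold learned from baseline data and re-estimated as new observations are folded into the training set) might look like the outline below. The mean-plus-two-standard-deviations rule and the sample values are assumptions for illustration only:

```python
import statistics

def estimate_threshold(baseline_scores: list[float], k: float = 2.0) -> float:
    """Flag stress scores more than k standard deviations above the baseline mean."""
    mean = statistics.fmean(baseline_scores)
    std = statistics.pstdev(baseline_scores)
    return mean + k * std

# Initial baseline of per-call stress scores for a user (invented values).
baseline = [0.10, 0.12, 0.11, 0.09, 0.13]
threshold = estimate_threshold(baseline)

# Re-training loop in miniature: fold new observations into the baseline
# and re-estimate the threshold, as the cited paragraphs describe.
baseline.extend([0.11, 0.10])
threshold = estimate_threshold(baseline)
print(round(threshold, 3))
```

Any score above the re-estimated threshold would trigger the alert step; the continuous-improvement aspect is just this accumulate-and-refit loop run over time.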
Therefore, the examiner finds that the current combination of prior art teaches the amended claim limitations of claims 10 and 12, and that the combination of Soryal, Majdabadi, and Rodriguez Bravo teaches the claim limitations. Therefore, claims 1, 10, and 14 are newly rejected under 35 U.S.C. 103. Claims 2-3, 5-7, 12, 15, and 17-20 were argued as being allowable only as being dependent on claims 1, 10, and 14; therefore, they are also rejected under 35 U.S.C. 103.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Abrol (US 2017/0206557), real-time stream data information integration and analytics system; Stolarz (US 2022/0046053), system and method for omnichannel social engineering attack avoidance; Himler (US 9774626), method and system for assessing and classifying reported potentially malicious messages in a cybersecurity system; Austraat (US 2024/0232765), audio signal processing and dynamic natural language understanding; Moturi (US 2024/0320443), real-time user communication sentiment detection for dynamic anomaly detection and mitigation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY RUSS, whose telephone number is (571) 270-5902. The examiner can normally be reached M-F 7:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lynda Jasmin, can be reached at (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /COREY RUSS/Primary Examiner, Art Unit 3629

Prosecution Timeline

Sep 07, 2023
Application Filed
May 03, 2025
Non-Final Rejection — §103
Jul 31, 2025
Response Filed
Nov 07, 2025
Final Rejection — §103
Dec 22, 2025
Response after Non-Final Action
Jan 30, 2026
Request for Continued Examination
Feb 23, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596993
METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR MANAGING FEATURE PRELOAD DATA OBJECT PROCESSING OPERATIONS IN A CARD-BASED COLLABORATIVE WORKFLOW MANAGEMENT SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12579515
SYSTEMS AND METHODS TO TRAIN AND/OR USE A MACHINE LEARNING MODEL TO GENERATE CORRESPONDENCES BETWEEN PORTIONS OF RECORDED AUDIO CONTENT AND WORK UNIT RECORDS OF A COLLABORATION ENVIRONMENT
2y 5m to grant · Granted Mar 17, 2026
Patent 12555077
EVALUATION ADJUSTMENT FACTORING FOR BIAS
2y 5m to grant · Granted Feb 17, 2026
Patent 12499501
SYSTEM AND METHOD FOR CALLER VERIFICATION
2y 5m to grant · Granted Dec 16, 2025
Patent 12469097
SYSTEMS AND METHODS FOR ELECTRONIC SIGNATURE TRACKING
2y 5m to grant · Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 26%
With Interview (+40.9%): 67%
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
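The headline projections follow from simple arithmetic on the examiner's career figures reported on this page (44 grants out of 166 resolved cases, and a +40.9 percentage-point interview lift, applied additively as the report presents it):

```python
# Figures taken from this report; the additive interview-lift model is
# the report's own presentation, not a claim about causation.
granted, resolved = 44, 166
allow_rate = granted / resolved            # ~0.265, shown as 26% above
interview_lift = 0.409                     # +40.9 percentage points
with_interview = allow_rate + interview_lift   # ~0.674, shown as 67% above
print(round(allow_rate, 3), round(with_interview, 3))
```

This confirms the 67% with-interview figure is just the career allow rate plus the lift, consistent with the "Grant probability derived from career allow rate" footnote.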
