Prosecution Insights
Last updated: April 19, 2026
Application No. 18/832,086

A method for detecting synthetic voice and video calls

Non-Final OA (§102)
Filed: Jul 22, 2024
Examiner: TESHALE, AKELAW
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: B. G. Negev Technologies and Applications Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% (687 granted / 834 resolved), +20.4% vs TC avg (above average)
Interview Lift: +15.6% (strong), measured on resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 33 applications currently pending
Career History: 867 total applications across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 35.4% (-4.6% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 834 resolved cases.

Office Action

§102
DETAILED ACTION

The Preliminary Amendment filed on 02/27/2025 is entered. Claims 17-19, 21-22, 24-25 and 28-29 are cancelled. Claims 1-16, 20, 23 and 26-27 are pending.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-16, 20, 23 and 26-27 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Pub. No. 2019/0394333 A1 to Jiron et al. (hereinafter “Jiron”).

Regarding claim 1, Jiron teaches a method for preventing fake calls (paragraphs [0060] and [0062]; chat bot 38 sends the call to a fraud prevention system in the contact center, e.g., fraud prevention system 28 of contact center 12), the method comprising:
(a) receiving a call from a caller (Fig. 3 step 100 and paragraph [0060]; receive call into contact center);
(b) requesting the caller to execute a deep-fake algorithm anomality triggering (DFAAT) task (Fig. 3 step 102 and paragraph [0060]; chat bot 38 of call screening system 30 sends a random question to the user device);
(c) receiving a caller related response to the DFAAT task (Fig. 3 step 104 and paragraph [0061]; receive answer to random question from user device);
(d) determining, based on the caller related response, whether the call is a legitimate call or a fake call (Fig. 3 step 106 and paragraph [0061]; determines whether or not the user of the user device is human or a robot based on an analysis of the received answer); wherein the determining comprises searching for one or more deep-fake algorithm anomalies associated with the DFAAT task (paragraph [0022]; an acceptable response may be a correct answer to the random question, or merely an intelligible response that provides a wrong answer to the random question but falls into an acceptable category based on format (e.g., alpha or numeric), language, form of speech, or the like. As one example, if the random question is “What color is the sky?” then an acceptable response may be “blue,” i.e., the correct answer, or “red,” i.e., a theoretically wrong answer that falls into the acceptable category of a color name. In this example, an unintelligible response may be a numeric response, e.g., “82045,” or a response that is a different form of speech than a color name, e.g., “yes.” When the received answer is a verbal response, call screening system 18 may also perform voice analysis to determine whether the verbal response is generated by a human voice);
(e) performing a fake call response when determining that the call is a fake call (Fig. 3 step 108 and paragraph [0062]; chat bot 38 sends the call to a fraud prevention system in the contact center, e.g., fraud prevention system 28 of contact center 12 (108). If the user is human); and
(f) performing a legitimate call response when the determining that the call is a legitimate call (Fig. 3 step 106 and paragraph [0061]; chat bot 38 receives an answer to the random question from the user device (104), and determines whether or not the user of the user device is human or a robot based on an analysis of the received answer).
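The claimed screening flow of steps (a) through (f) can be sketched as a minimal Python routine. Everything below is a hypothetical illustration of the claim structure, not code from the application or from Jiron: the task list, the `contains_deepfake_anomaly` heuristic, and the routing strings are all invented placeholders.

```python
import random

# Hypothetical DFAAT tasks: prompts intended to stress a live deep-fake
# synthesis pipeline (examples of this kind appear in claims 16 and 20).
DFAAT_TASKS = ["clear your throat", "hum a short tune", "turn your head left"]

def contains_deepfake_anomaly(response: str) -> bool:
    """Placeholder anomaly check. A real detector would inspect the audio or
    video response for task-linked synthesis artifacts; here we just scan for
    a marker string so the control flow can be demonstrated."""
    return "glitch" in response

def screen_call(get_response) -> str:
    """Steps (a)-(f): issue a DFAAT task, inspect the response, route the call.

    `get_response` stands in for the caller: it takes the task prompt and
    returns the caller-related response.
    """
    task = random.choice(DFAAT_TASKS)        # (b) request the caller perform a task
    response = get_response(task)            # (c) receive the caller's response
    if contains_deepfake_anomaly(response):  # (d) search for task-linked anomalies
        return "fake call response"          # (e) e.g., terminate or flag the caller
    return "legitimate call response"        # (f) e.g., connect to the recipient
```

The contrast with Jiron is visible even at this level of abstraction: the claim keys the anomaly search to the issued task itself, whereas Jiron's chat bot evaluates the semantic acceptability of an answer to a random question.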
Regarding claim 2, Jiron teaches the method according to claim 1, comprising performing multiple iterations of steps (b)-(c) (paragraph [0036]; interdiction schemes applied by fraud prevention system 28 may include requesting additional authentication from the fraudulent user, accepting only the most secure forms of authentication from the fraudulent user, randomizing IVR or banking servicing prompts, or dropping/blocking the call).

Regarding claim 3, Jiron teaches the method according to claim 2, wherein different iterations are associated with different DFAAT tasks (paragraph [0036]; interdiction schemes applied by fraud prevention system 28 may include requesting additional authentication from the fraudulent user, accepting only the most secure forms of authentication from the fraudulent user, randomizing IVR or banking servicing prompts, or dropping/blocking the call).

Regarding claim 4, Jiron teaches the method according to claim 2, wherein at least two of the different DFAAT tasks are tailored to trigger one or more anomalies of different deep-fake algorithms (paragraph [0036]; interdiction schemes applied by fraud prevention system 28 may include requesting additional authentication from the fraudulent user, accepting only the most secure forms of authentication from the fraudulent user, randomizing IVR or banking servicing prompts, or dropping/blocking the call).

Regarding claim 5, Jiron teaches the method according to claim 2, comprising performing multiple iterations of steps (b)-(c) when finding that the call is suspected as a fake call but is not definitely a fake call (paragraph [0036]; interdiction schemes applied by fraud prevention system 28 may include requesting additional authentication from the fraudulent user, accepting only the most secure forms of authentication from the fraudulent user, randomizing IVR or banking servicing prompts, or dropping/blocking the call).
Regarding claim 6, Jiron teaches the method according to claim 1, wherein the searching comprises ignoring at least one caller related response portion that is not expected to include a deep-fake algorithm anomality (paragraph [0022]; an acceptable response may be a correct answer to the random question, or merely an intelligible response that provides a wrong answer to the random question but falls into an acceptable category based on format (e.g., alpha or numeric), language, form of speech, or the like. As one example, if the random question is “What color is the sky?” then an acceptable response may be “blue,” i.e., the correct answer, or “red,” i.e., a theoretically wrong answer that falls into the acceptable category of a color name. In this example, an unintelligible response may be a numeric response, e.g., “82045,” or a response that is a different form of speech than a color name, e.g., “yes.” When the received answer is a verbal response, call screening system 18 may also perform voice analysis to determine whether the verbal response is generated by a human voice).

Regarding claim 7, Jiron teaches the method according to claim 1, wherein the determining comprises checking a fulfillment of a realism of response constraint (paragraph [0029]; call screening system 18 may perform real-time analysis on aspects of the conversation using an AI engine to identify either a fraudulent or a neutral intent of the conversation).

Regarding claim 8, Jiron teaches the method according to claim 7, comprising checking of the fulfillment of the realism constraint without using previously recorded audio or video (paragraphs [0029] and [0051]; where the answer is a verbal response, response analysis unit 52 may further perform voice analysis as part of the voice captcha to determine whether the verbal response is generated by a human voice or a recorded or synthesized voice).
Regarding claim 9, Jiron teaches the method according to claim 7, wherein the determining further comprises checking a fulfillment of at least one additional constraint selected out of a start of response constraint, an identity of the caller constraint, or a task constraint (paragraph [0022]; an acceptable response may be a correct answer to the random question, or merely an intelligible response that provides a wrong answer to the random question but falls into an acceptable category based on format (e.g., alpha or numeric), language, form of speech, or the like. As one example, if the random question is “What color is the sky?” then an acceptable response may be “blue,” i.e., the correct answer, or “red,” i.e., a theoretically wrong answer that falls into the acceptable category of a color name. In this example, an unintelligible response may be a numeric response, e.g., “82045,” or a response that is a different form of speech than a color name, e.g., “yes.” When the received answer is a verbal response, call screening system 18 may also perform voice analysis to determine whether the verbal response is generated by a human voice).

Regarding claim 10, Jiron teaches the method according to claim 7, wherein the determining further comprises checking a fulfillment of a start of response constraint, an identity of the caller constraint and a task constraint (paragraph [0029]; call screening system 18 may perform real-time analysis on aspects of the conversation using an AI engine to identify either a fraudulent or a neutral intent of the conversation).

Regarding claim 11, Jiron teaches the method according to claim 7, wherein the checking of the fulfillment of the realism constraint is executed by a realism constraint machine learning process (paragraphs [0029] and [0047]; call screening system 18 may perform real-time analysis on aspects of the conversation using an AI engine to identify either a fraudulent or a neutral intent of the conversation).
Regarding claim 12, Jiron teaches the method according to claim 7, wherein the determining further comprises checking a fulfillment of an identity of the caller constraint by comparing between (i) an identity of the caller before starting the task, to (ii) an identity of the caller during the caller related response (paragraphs [0046]-[0048]; a single AI engine 48, call screening system 30 may include multiple AI engines to enable chat bot 38 to analyze and interact with voice calls into contact center 12).

Regarding claim 13, Jiron teaches the method according to claim 1, wherein the determining comprises sequentially determining a fulfillment of constraints, stopping the sequentially determining and declaring the call to be a fake call upon a first non-fulfillment of one of the constraints (Fig. 3 step 106 and paragraph [0061]; chat bot 38 receives an answer to the random question from the user device (104), and determines whether or not the user of the user device is human or a robot based on an analysis of the received answer).

Regarding claim 14, Jiron teaches the method according to claim 1, wherein the fake call response comprises terminating the call and wherein a legitimate call response comprises enabling a reception of the call by an intended recipient of the call (paragraphs [0023] and [0036]; interdiction schemes applied by fraud prevention system 28 may include requesting additional authentication from the fraudulent user, accepting only the most secure forms of authentication from the fraudulent user, randomizing IVR or banking servicing prompts, or dropping/blocking the call).

Regarding claim 15, Jiron teaches the method according to claim 1, wherein the fake call response comprises marking the caller as a fake call source (paragraphs [0020] and [0023]; determine whether the user of user device 16A is human or a robot).
Regarding claim 16, Jiron teaches the method according to claim 1, wherein the DFAAT task comprises one or more of clearing a throat of the caller, humming a tune defined in the DFAAT task, laughing or singing a song defined in the DFAAT task (paragraph [0057]; call control unit 58 may be configured to conform to the customer's language or regional accent in an attempt to build a rapport with the customer).

Regarding claim 20, Jiron teaches the method according to claim 1, wherein the DFAAT task comprises one or more of turning around in a manner defined in the DFAAT task, interacting with an object in a manner defined in the DFAAT task, or contacting a body part in a manner defined in the DFAAT task (paragraph [0057]; call control unit 58 may be configured to conform to the customer's language or regional accent in an attempt to build a rapport with the customer).

Regarding claim 23, Jiron teaches the method according to claim 1, wherein the call is selected out of an audio call, an audio-visual call or a visual call (paragraphs [0021] and [0049]; random question unit 50 of voice captcha unit 42 may select the random question to be sent to the user device from a plurality of questions using a random number generator. In some cases, the plurality of questions may be stored locally in storage units 36 of call screening system).
Regarding claim 26, Jiron teaches a non-transitory computer readable medium for preventing fake calls (paragraphs [0060] and [0062]; chat bot 38 sends the call to a fraud prevention system in the contact center, e.g., fraud prevention system 28 of contact center 12), the non-transitory computer readable medium stores instructions (paragraph [0044]; storage units 36 may include a computer-readable storage medium or computer-readable storage device) that once executed by a computerized unit to:
(a) receive a call from a caller (Fig. 3 step 100 and paragraph [0060]; receive call into contact center);
(b) request the caller to execute a deep-fake algorithm anomality triggering (DFAAT) task (Fig. 3 step 102 and paragraph [0060]; chat bot 38 of call screening system 30 sends a random question to the user device);
(c) receive a caller related response to the DFAAT task (Fig. 3 step 104 and paragraph [0061]; receive answer to random question from user device);
(d) determine, based on the caller related response, whether the call is a legitimate call or a fake call (Fig. 3 step 106 and paragraph [0061]; determines whether or not the user of the user device is human or a robot based on an analysis of the received answer); wherein a determining comprises searching for one or more deep-fake algorithm anomalies associated with the DFAAT task (paragraph [0022]; an acceptable response may be a correct answer to the random question, or merely an intelligible response that provides a wrong answer to the random question but falls into an acceptable category based on format (e.g., alpha or numeric), language, form of speech, or the like. As one example, if the random question is “What color is the sky?” then an acceptable response may be “blue,” i.e., the correct answer, or “red,” i.e., a theoretically wrong answer that falls into the acceptable category of a color name. In this example, an unintelligible response may be a numeric response, e.g., “82045,” or a response that is a different form of speech than a color name, e.g., “yes.” When the received answer is a verbal response, call screening system 18 may also perform voice analysis to determine whether the verbal response is generated by a human voice);
(g) perform a fake call response when determining that the call is a fake call (Fig. 3 step 108 and paragraph [0062]; chat bot 38 sends the call to a fraud prevention system in the contact center, e.g., fraud prevention system 28 of contact center 12 (108). If the user is human); and
(h) perform a legitimate call response when the determining that the call is a legitimate call (Fig. 3 step 106 and paragraph [0061]; chat bot 38 receives an answer to the random question from the user device (104), and determines whether or not the user of the user device is human or a robot based on an analysis of the received answer).

Regarding claim 27, Jiron teaches a computerized system for preventing fake calls (paragraphs [0060] and [0062]; chat bot 38 sends the call to a fraud prevention system in the contact center, e.g., fraud prevention system 28 of contact center 12), the computerized system comprises:
(a) an input unit that is configured to receive a call from a caller (Fig. 3 step 100 and paragraph [0060]; receive call into contact center);
(b) an output unit that is configured to request the caller to execute a deep-fake algorithm anomality triggering (DFAAT) task (Fig. 3 step 102 and paragraph [0060]; chat bot 38 of call screening system 30 sends a random question to the user device);
(c) a processing unit that is configured to determine, based on a caller related response that is received by the input unit, whether the call is a legitimate call or a fake call (Fig. 3 step 106 and paragraph [0061]; determines whether or not the user of the user device is human or a robot based on an analysis of the received answer); wherein the determining comprises searching for one or more deep-fake algorithm anomalies associated with the DFAAT task (paragraph [0022]; an acceptable response may be a correct answer to the random question, or merely an intelligible response that provides a wrong answer to the random question but falls into an acceptable category based on format (e.g., alpha or numeric), language, form of speech, or the like. As one example, if the random question is “What color is the sky?” then an acceptable response may be “blue,” i.e., the correct answer, or “red,” i.e., a theoretically wrong answer that falls into the acceptable category of a color name. In this example, an unintelligible response may be a numeric response, e.g., “82045,” or a response that is a different form of speech than a color name, e.g., “yes.” When the received answer is a verbal response, call screening system 18 may also perform voice analysis to determine whether the verbal response is generated by a human voice); and
(d) a response unit that is configured to:
a. perform a fake call response when determining that the call is a fake call (Fig. 3 step 108 and paragraph [0062]; chat bot 38 sends the call to a fraud prevention system in the contact center, e.g., fraud prevention system 28 of contact center 12 (108). If the user is human); and
b. perform a legitimate call response when determining that the call is a legitimate call (Fig. 3 step 106 and paragraph [0061]; chat bot 38 receives an answer to the random question from the user device (104), and determines whether or not the user of the user device is human or a robot based on an analysis of the received answer).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AKELAW A TESHALE whose telephone number is (571) 270-5302.
The examiner can normally be reached 9 am - 6 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FAN TSANG, can be reached at (571) 272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

AKELAW TESHALE
Primary Examiner
Art Unit 2694

/AKELAW TESHALE/
Primary Examiner, Art Unit 2694

Prosecution Timeline

Jul 22, 2024: Application Filed
Jan 22, 2026: Non-Final Rejection under §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598261: WIDEBAND DOUBLETALK DETECTION FOR OPTIMIZATION OF ACOUSTIC ECHO CANCELLATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598253: SYSTEMS AND METHODS FOR MEDIA ANALYSIS FOR CALL STATE DETECTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589700: HOLDING APPARATUS AND METHOD FOR HOLDING A MOBILE DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12574665: DATA PROCESSING METHOD, OUTDOOR UNIT, INDOOR UNIT AND COMPUTER-READABLE STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563346: FLEXIBLE ELECTRONIC DEVICE AND METHOD FOR ADJUSTING SOUND OUTPUT THEREOF (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+15.6%): 98%
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 834 resolved cases by this examiner; grant probability derived from the career allow rate.
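As a sanity check on the projection figures, the "With Interview" number is consistent with simply adding the +15.6% interview lift to the 82% career allow rate. This additive treatment is an assumption for illustration only; the dashboard's underlying model may combine the figures differently.

```python
# Figures taken from the dashboard above.
granted, resolved = 687, 834
base_grant_rate = granted / resolved   # career allow rate, approx. 0.824
interview_lift = 0.156                 # lift observed on resolved cases with interview

# Assumed additive combination (hypothetical model, for illustration).
with_interview = base_grant_rate + interview_lift
print(f"{with_interview:.1%}")  # close to the 98% shown above
```

The base rate of 687/834 rounds to 82%, and 82.4% + 15.6% lands at roughly 98%, matching both projection figures on the page.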
