Prosecution Insights
Last updated: April 19, 2026
Application No. 18/147,317

NETWORK BASED SCAM RESPONSE

Final Rejection §103

Filed: Dec 28, 2022
Examiner: WINDER, PATRICE L
Art Unit: 2453
Tech Center: 2400 — Computer Networks
Assignee: T-Mobile Innovations LLC
OA Round: 4 (Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 87% — above average (550 granted / 632 resolved; +29.0% vs TC avg)
Interview Lift: +11.1% — moderate lift, measured over resolved cases with interview
Avg Prosecution: 3y 7m (typical timeline); 26 applications currently pending
Total Applications: 658 career history, across all art units
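The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming the interview lift is the percentage-point gap between the with-interview grant rate and the overall allow rate (the dashboard's exact definition is not stated; the with-interview value below is a 0.981 placeholder chosen to be consistent with both the 98% and +11.1% figures shown):

```python
granted, resolved = 550, 632          # career totals shown above

allow_rate = granted / resolved       # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")   # 87.0%

# Interview lift, read as percentage points over the baseline rate.
with_interview = 0.981                # assumed with-interview grant rate
lift = with_interview - allow_rate
print(f"Interview lift: {lift:+.1%}")           # +11.1%
```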

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 632 resolved cases
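Assuming each delta is the examiner's rate minus the Tech Center average, the implied baseline can be recovered from the numbers above; a quick consistency check shows all four statutes point at the same ~40% estimate:

```python
# (examiner rate, delta vs TC average) per statute, from the figures above
stats = {
    "101": (0.085, -0.315),
    "103": (0.509, +0.109),
    "102": (0.140, -0.260),
    "112": (0.146, -0.254),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta             # implied Tech Center average
    print(f"§{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
```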

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-9, 11-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leddy et al., US 20200067861 A1 (hereafter referred to as Leddy), in view of Quilici et al., WO 2019055596 A1 (hereafter referred to as Quilici), further in view of Bahrs et al., US 20190149575 A1 (hereafter referred to as Bahrs).

Claim 17: Leddy teaches a system for managing a network based scam response, the system comprising:

one or more processors (p. 54, “the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.”); and one or more computer storage hardware devices storing computer-usable instructions that, when used by the one or more processors (p. 54, “a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task.”), cause the one or more processors to:

at a scam response agent in a network, intercept a scam communication request intended for a user device (p. 233, “the message is obtained from a nefarious/malicious person/user (e.g., scammer), who, for example, is communicating with a honeypot established by a scam evaluation system.” And p. 343, “In one embodiment, the communication channel is email. Another example of a communication channel is short message service (SMS).” See also p. 343, “the Analytics Engine described above is configured to wait for new messages to arrive on a variety of communication channels including, but not limited to, email, SMS, or voice message.” Indication is also triggered by phone calls, see p. 1764, “Other communication technologies can also be monitored and filtered, as applicable. For example, automated voice recognition techniques can be used in conjunction with the screening of voicemail messages (e.g., in conjunction with a service such as Google Voice) or calls …”);

in response to the scam communication request, communicate to a scam device, by the network, a first response (p. 825, “the honeypot account can respond to messages using a contextual response generator such as the Eliza responder, which provides (generic) responses that are relevant to received messages. This allows for automated emulation of a potential victim.” And p. 829, “Examples of stages include introduction, increasing familiarity, encountering a problem, asking for assistance, etc.”);

receive a scam communication from the scam device (p. 825, “The honeypot account can be used to provide responses to attackers to encourage receiving future spam and scam messages.” The conversation is ongoing according to “which stage in the conversation”. And p. 829, “Messages can be classified based on which stage in the conversation thread/interplay between the attacker and the honeypot account the messages were received at.”);

analyze the scam communication to determine a scam type from a plurality of predefined scam types, wherein the scam type is based on content of the scam communication (p. 826, “messages are forwarded from the honeypot account to reader 303. For example, the reader extracts messages from the honeypot account. The reader is then configured to send the extracted messages to type classifier 305.” And p. 827, “Type classifier 305 is configured to classify messages. The classification can be performed based on a variety of attributes and indications associated with keywords in a message, the context of the message, etc. For example, if the content of the messages described the theme of love, then this is indicative of a romance scam.”);

select a second response from a plurality of dynamically generated responses (p. 829, “Stage classifier 306 is configured to classify messages based on the stage of a conversation. For example, the honeypot account can be used to maintain back and forth communications with an attacker, forming a thread of conversation (e.g., multi-message scams).”), wherein the second response is selected based on the determined scam type and the content of the scam communication (p. 832, “appropriate response(s) to messages are determined by the customize response 309 based on the results of the type classifier. For example, suppose that a message has been classified as a romance scam by the type classifier, the customize response selects a response from a set of romance responses (e.g., at random, based on a relevance match between the response and context/content of the message, etc.), which is then provided back to the attacker.”);

determine a scam type based on the scam communication (p. 830, “Match selector 307 is configured to obtain one or more messages in a series of messages and determine to what extent the obtained messages are scam messages. The messages are then communicated to repository 308.”);

based on the scam type and the scam communication, determine a second response; and communicate the second response to the scam device (p. 831, “Customize response 309 is configured to generate responses to messages. In some embodiments, the customize response 309 is implemented using a contextual response generator such as Eliza, as described above. For example, based on an analysis of collected messages and their content/context, corresponding responses are created. Responses can also be selected from a set of candidate responses.” And p. 833, “customize response 309 is implemented using a script that is connected to a database (e.g., that includes candidate responses segmented by classification).”).

Leddy does not specifically recite the response for a call but indicates usefulness in communicating using Google Voice. Quilici teaches a call answering bot responding (p. 43, “A call answering bot may comprise a chatbot (also known as a talkbot, chatterbot, chatterbox, instant messenger (IM) bot, interactive agent, or artificial conversational entity) that includes computer program instructions and/or artificial intelligence capable of conducting a conversation with a communicator via auditory methods.” And p. 44, “A bot is deployed to answer an inbound call from a communicator, step 402.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Leddy to incorporate the response template in a call answering bot from Quilici for the honeypot bot from Leddy to convincingly simulate human conversation. (p. 18, “The exemplary embodiments are directed to other types of scams besides the advanced-fee scam. Among these other types of scams are fake charities, dating and romance, buying or selling, jobs and investment, attempts to gain your personal information and threats and extortion, to name a few.”)

Leddy-Quilici does not specifically teach intercepting a scam communication request intended for a user device of a targeted user; personalizing the second response based on personal information of the targeted user, wherein the second response is personalized to cause the scam device to continue the scam communication; communicating the second response to the scam device in the scam communication; dynamically modifying the second response during the scam communication based on interactions with the scam device in the scam communication; and continuing to initiate communication with the scam device using the dynamically modified second response during the scam communication causing the scam device to continue engaging in the scam communication without ending the scam communication, wherein causing the scam device to continue engaging in the scam communication without ending the scam communication occupies resources of the scam device and prevents the scam device from being able to communicate with other potential scam targets.

However, in the same field of endeavor, Bahrs teaches at a scam response agent in a network, intercepting a scam communication request intended for a user device of a targeted user (p. 43, “The cognitive engine agent 20 may monitor a user's message accounts for potentially scam messages, box 42.” And p. 33, “The cognitive engine agent 20, which may also be referred to as a bot, works in conjunction with a cognitive system 26, which may be located within the same computer system 10 or external to it.” See also p. 25, “The exemplary embodiments apply to any electronic textual messaging system including but not limited to email, SMS messages and Speech to Text systems where a scam phone call, such as in a Voice Over IP (VOIP) system may be converted to a text message on the fly.” The targeted user is represented by the account.);

personalizing the second response based on personal information of the targeted user, wherein the second response is personalized to cause the scam device to continue the scam communication (p. 36, “Among other functions, the cognitive system 26 is able to understand and interpret the messages received through natural language understanding, identify and extract several characteristics (parameters or variables) from the messages and create a personalized response that will be used to “intelligently interact” with the message sender (by creating natural language dialogues). All of the foregoing may be saved as a record on a database.”);

communicating the second response to the scam device in the scam communication (p. 46, “Once the received message is determined to be a scam message, the cognitive system 26 may interactively converse with the scam message sender thru the cognitive engine agent 20, box 46.”);

dynamically modifying the second response during the scam communication based on interactions with the scam device in the scam communication (p. 34, “the cognitive engine agent 20 may be connected to external messaging server 28 to gather, send and receive messages, as well as other related tasks.” And p. 37, “By “intelligently interact”, it is meant that there are no pre-written scripts for the cognitive system 26 to use. Rather, the cognitive system 26 creates replies based on the natural language understanding of the sender's message.”);

and continuing to initiate communication with the scam device using the dynamically modified second response during the scam communication causing the scam device to continue engaging in the scam communication without ending the scam communication (p. 40, “In addition to other functions, the cognitive engine agent 20 may monitor mailboxes for scam messages, manage “bait” message accounts, send customized responses to scammers, manage scam messages received and manage a database of scam message samples.” And p. 26, “The interactive conversation may continue with the scam message sender until the scam message sender no longer responds to messages from the cognitive engine agent 20.”),

wherein causing the scam device to continue engaging in the scam communication without ending the scam communication occupies resources of the scam device and prevents the scam device from being able to communicate with other potential scam targets (p. 15, “Scam baiting is engaging into a dialogue with the scammers while posing as a potential victim with the intent to waste their time and resources, which reduces the time and resources they have available to engage more acts of digital deception. In other words, the more time they believe they have a successful scam underway, the less time they can be contacting others to perpetuate their scams. Scam baiting may also gather information that will be of use to authorities, and publicly expose the scammer.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify bot responses from Leddy-Quilici by incorporating strategies from Bahrs to be a more effective deterrent against scammers. The motivation would have been to make the scammer work harder. (See Bahrs, p. 16, “While scam baiting can be an effective deterrent against scammers, the problem is that scam baiting has a major constraint which is that there are a limited number of people willing or interested in investing the time required to engage with the scammer, and therefore the scammers greatly outnumber the scam baiting community.”)

Claim 1 is a method for managing a network based scam response comprising steps similar to the operations of the system above. Claim 1 is rejected on the same rationale.

Claim 11: Leddy-Quilici-Bahrs teaches one or more non-transitory computer-readable media having computer-executable instructions embodied thereon that, when executed (Leddy, p. 54, “The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor…”), perform a method similar to the operations of claim 17 above. Claim 11 is rejected on the same rationale.

Claim 18: Leddy-Quilici-Bahrs teaches the method of claim 16, further comprising determining that a request to communicate is the scam communication request (Leddy, p. 824, “The honeypot account can also be set up to be contacted by scammers.”). [Claim 12 recites similar language and is rejected on a similar rationale.]

Claim 19: Leddy-Quilici-Bahrs teaches the method of claim 16, wherein the scam communication request is intercepted prior to the scam communication request being communicated to the user device (Leddy, p. 824, “The honeypot account is configured to communicate with known or potential scammers in order to obtain example spam and scam messages.”). [Claim 5 recites similar language and is rejected on a similar rationale.] [Claim 13 recites similar language and is rejected on a similar rationale.]
Claim 20: Leddy-Quilici-Bahrs teaches the method of claim 16, wherein the scam type is determined using a pattern recognition algorithm (Leddy, p. 813, “New scams types and scam patterns can be identified more quickly because they are received directly from the scammer into a Honeypot account, rather than waiting for the pattern to be detected in third party email accounts.”) and a machine learning algorithm (Leddy, p. 64, “The filter engine can be run in a production mode (e.g., for analyzing messages in a commercial context) or in a test mode (e.g., for performing training). Messages that are processed through the production mode can also be used to perform training/updating.” And p. 116, “Training module 176 is configured to perform training/dynamic updating of filters.” See also p. 136, “the training module is configured to use machine learning techniques to perform training.”). [Claim 9 recites similar language and is rejected on a similar rationale.] [Claim 16 recites similar language and is rejected on a similar rationale.]

Claim 2: Leddy-Quilici-Bahrs teaches the method of claim 1, wherein the scam communication request is associated with a known scamming entity (Leddy, p. 363, “each Rule has a distinct ScamScore indicating the likelihood of scam. If a message matches one of the Filters, it is recorded as ‘hit’.”).

Claim 3: Leddy-Quilici-Bahrs teaches the method of claim 1, further comprising: receiving a request to communicate with the user device (Quilici, p. 45, “A bot is deployed to answer an inbound call from a communicator, step 402.”); determining that the request to communicate with the user device is the scam communication request (Quilici, p. 45, “The bot may answer inbound calls from a given set of “honeypot” phone numbers. That is, the given set of honeypot phone numbers may be a predetermined list of phone numbers from known unwanted and/or wanted communicators.” Unwanted communicators include scam callers.).

Claim 4: Leddy-Quilici-Bahrs teaches the method of claim 3, wherein the determining that the request to communicate with the user device is the scam communication request is done using machine learning (Leddy, p. 234, “At 186, the obtained first message is evaluated using a production filter set. Examples of filters include URL filters, phrase filters, etc., such as those described above.” And p. 240, “the training data is obtained from an autoresponder. Further details regarding autoresponders are described below.” “Ham messages (e.g., messages known to not be spam or scam) can also be similarly collected.” See also p. 242, “updating the filter set includes a complete retraining of the entire filter set/dynamic updating system/platform. In some embodiments, updating the filter set includes performing an incremental retrain …”).

Claim 6: Leddy-Quilici-Bahrs teaches the method of claim 1, wherein the first response is a generic phone call answer (Leddy, p. 825, “the honeypot account can respond to messages using a contextual response generator such as the Eliza responder, which provides (generic) responses that are relevant to received messages.”). [Claim 14 recites similar language and is rejected on a similar rationale.]

Claim 7: Leddy-Quilici-Bahrs teaches the method of claim 1, wherein the scam type is determined using pattern recognition (Leddy, p. 800, “For scambaiting, attractive looking targets referred to as “honeypots” are created and presented to scammers to contact. These honeypot accounts can be made visible to scammers in a variety of way…” And p. 813, “New scams types and scam patterns can be identified more quickly because they are received directly from the scammer into a Honeypot account, rather than waiting for the pattern to be detected in third party email accounts.”). [Claim 15 recites similar language and is rejected on a similar rationale.]

Claim 8: Leddy-Quilici-Bahrs teaches the method of claim 1, wherein the scam type is determined using machine learning (Leddy, p. 1653, “One way this can be done is to limit false positives by running a machine learning based classifier on all messages that are identified as being likely scam messages, …”).

Claim 14: Leddy-Quilici-Bahrs teaches the method of claim 11, wherein the first response is a generic phone call answer (Leddy, p. 825, “… the honeypot account can respond to messages using a contextual response generator such as the Eliza responder, which provides (generic) responses that are relevant to received messages.”).

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leddy and Quilici and Bahrs as applied to claim 1 above, and further in view of Baracaldo Angel et al., US 20180240473 A1 (hereafter referred to as Angel).

Claim 10: Leddy-Quilici-Bahrs teaches the method of claim 1, as cited above. Leddy-Quilici-Bahrs does not specifically teach wherein the second response is determined based on a probability that the scam device will respond to the second response. However, in the same field of endeavor, Angel teaches wherein the second response is determined based on a probability that the scam device will respond to the second response (p. 28, “For example, the bot 211 may select the conversation template from the data set 430. The use of the conversation template increases the likelihood the caller 10 perceives the bot 211 as a human being, not a bot.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Leddy-Quilici-Bahrs to substitute the scam type template from Angel for the conversation from Leddy-Quilici-Bahrs to increase the likelihood that the conversation is perceived to be with a human being and thereby respond to the bot.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICE L WINDER whose telephone number is (571) 272-3935. The examiner can normally be reached M-F 10am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KAMAL B DIVECHA, can be reached at (571) 272-5863. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Patrice L Winder/
Primary Examiner, Art Unit 2453
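The pipeline the rejection maps onto Leddy (¶¶ 826-833: a keyword/theme-based type classifier, then a reply picked from candidate responses segmented by classification) can be sketched as a minimal keyword matcher. This is an illustrative reconstruction, not code from the reference; the scam types, keyword lists, and canned responses below are placeholders.

```python
import random
import re

# Illustrative keyword sets per scam type (placeholders, not from Leddy).
SCAM_TYPES = {
    "romance": {"love", "darling", "lonely", "soulmate"},
    "advance_fee": {"inheritance", "beneficiary", "transfer", "fee"},
    "tech_support": {"virus", "refund", "remote", "subscription"},
}

# Candidate responses segmented by classification (cf. Leddy ¶ 833).
RESPONSES = {
    "romance": ["Tell me more about yourself!", "I feel the same way."],
    "advance_fee": ["How do I claim the funds?", "Which bank should I use?"],
    "tech_support": ["My computer is very slow, can you help?"],
    "unknown": ["Sorry, could you explain that again?"],
}

def classify(message: str) -> str:
    """Pick the scam type whose keywords overlap the message the most."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    best = max(SCAM_TYPES, key=lambda t: len(SCAM_TYPES[t] & words))
    return best if SCAM_TYPES[best] & words else "unknown"

def respond(message: str) -> str:
    """Select a canned reply from the set matching the classified type."""
    return random.choice(RESPONSES[classify(message)])

print(classify("My darling, I am so lonely without your love"))  # romance
```

A production system in the spirit of the cited references would add a stage classifier and contextual generation rather than fixed reply lists, but the segment-by-type lookup above is the core selection step the rejection relies on.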

Prosecution Timeline

Dec 28, 2022: Application Filed
Sep 30, 2024: Non-Final Rejection — §103
Dec 31, 2024: Response Filed
Apr 19, 2025: Final Rejection — §103
Jul 24, 2025: Request for Continued Examination
Jul 29, 2025: Response after Non-Final Action
Sep 20, 2025: Non-Final Rejection — §103
Dec 29, 2025: Response Filed
Mar 21, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598228
SYSTEM AND A METHOD FOR DISTRIBUTING INFORMATION
2y 5m to grant; granted Apr 07, 2026

Patent 12593205
NETWORK SLICE-SPECIFIC AUTHENTICATION AND AUTHORIZATION
2y 5m to grant; granted Mar 31, 2026

Patent 12587396
SYSTEMS AND METHODS FOR RECOMMENDING NETWORK PROCESSING ROUTES WHEN CONDUCTING NETWORK OPERATIONS
2y 5m to grant; granted Mar 24, 2026

Patent 12580812
COMMUNICATION CONTROL DEVICE, COMMUNICATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
2y 5m to grant; granted Mar 17, 2026

Patent 12580965
SYSTEM AND METHOD FOR MANAGING COMPLIANCE FAILURES BASED ON ESTIMATIONS FOR REMEDIATION
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 87%
With Interview: 98% (+11.1%)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 632 resolved cases by this examiner. Grant probability derived from career allow rate.
