Prosecution Insights
Last updated: April 19, 2026
Application No. 18/668,372

SYSTEMS AND METHODS FOR DETECTING AND PREVENTING BOT TRAFFIC IN MESSAGING SYSTEMS

Non-Final OA §102
Filed: May 20, 2024
Examiner: KHADKA, AMIT
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: Pushnami LLC
OA Round: 1 (Non-Final)
Grant Probability: 20% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 20% (1 granted / 5 resolved; -38.0% vs TC avg)
Interview Lift: -20.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 6m (typical timeline)
Career History: 27 total applications across all art units, 22 currently pending

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 69.9% (+29.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Comparisons use a Tech Center average estimate • Based on career data from 5 resolved cases

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-21 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Rieschick (US20120233656A1).

Regarding Claim 1, Rieschick teaches: A notification system including fraud detection, comprising: a processor; and a non-transitory computer readable medium comprising instructions for (Rieschick, para 39, discloses notification server 128; para 29, Rieschick discloses detection of both known and yet-to-be identified security threats (para 8, internet bot) by performing pattern matching and observed behavior detection methods); maintaining a subscription data store, the subscription data store comprising a set of subscription entries for notifications from the notification system, each subscription entry associated with a unique user identifier for a user and a corresponding web site where the user is subscribed to notifications from the notification system (Rieschick, para 47, discloses the user repository 112 storing subscriber information (e.g., customer IDs, preferences, subscription levels, etc.)
and user-specific information, such as details of a user's subscription to malware prevention services and historical details of a user's exposure to specific malware threats; para 39, Rieschick discloses the notification server 128 may be configured to receive requests from the policy management system 116 (e.g., PCRF) and send messages to mobile devices 102 using various technologies/mechanisms, such as email, short message service (SMS) texts, instant messaging (IM) systems, interactive voice response (IVR) systems; para 29, Rieschick discloses the system also maps specific malware threats identified to users of the network using IP addresses. This implies the user repository maintains subscription-related data for each user, which can be linked to a source of malware/threat (e.g., website) that is being monitored.); maintaining a block list of unique user identifiers at the notification system (Rieschick, para 61, discloses comparing IP address information with a list of known offender IP addresses which implies that the system is maintaining a list of blacklisted addresses); receiving one or more events based on user interaction with a first website at a user device of a first user, wherein the first website was provided from a first content provider (Rieschick, para 57, discloses the system may receive parameters identifying certain key characteristics of the IP data flows and/or network traffic from the gateway/PCEF (para 55) reflecting user's web activity (para 67).); generating the unique user identifier for the first user (Rieschick, para 29, discloses mapping specific malware threats identified to users of the network. Malware threats may be identified using any suitable mechanism, including the use of IP addresses. 
The system maps specific malware threats to users of the network and IP address serves as the unique identifier.); based on the one or more events, determining that the first user is to be blocked from receiving notifications from the notification system because the first user is a bot (Rieschick, para 57, discloses the system extracts binary file characteristic information URL, IP addresses from the parameters; para 58, Rieschick discloses the system checks if a data flow has binary-file characteristics of known malware types (e.g., bots, viruses, worms, Trojan Horses, etc.), if a match is found, then it generates restrictive policy rules which include blocking.); placing the unique user identifier on the block list (Rieschick, para 68, discloses after the identification of bot, the system registers the spam engine i.e., update a spam engine list in memory; para 66, Rieschick discloses registering the destination as being under attack which is the act of placing an identifier/IP address on a block list.); and evaluating each subscription entry associated with the first unique user identifier and, for each subscription entry associated with the first unique user identifier, updating that subscription entry with a blocked user flag (Rieschick, para 47, discloses the system maintains a database with user specific details like customer ID about the malware threats; para 29, Rieschick discloses that the user specific malware detection information which is stored is later used by the system to take action in the future, like making new policy decisions and detecting and preventing future malware events. Rieschick discloses mapping specific malware threats identified to users of the network. Malware threats may be identified using any suitable mechanism, including the use of IP addresses.).
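The Claim 1 flow mapped above (a subscription data store keyed by user identifier and website, a block list of user identifiers, and a blocked-user flag written onto every subscription entry for a blocked user) can be sketched in code. This is a hypothetical illustration only: every class, field, and method name below is invented, and nothing here comes from the application's specification or from Rieschick.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Claim 1 data flow. All names are invented
# for illustration; they do not appear in the application or the prior art.

@dataclass
class Subscription:
    user_id: str          # unique user identifier
    website: str          # site where the user subscribed to notifications
    blocked: bool = False # blocked-user flag on the subscription entry

class NotificationSystem:
    def __init__(self) -> None:
        self.subscriptions: list[Subscription] = []  # subscription data store
        self.block_list: set[str] = set()            # block list of user IDs

    def subscribe(self, user_id: str, website: str) -> None:
        self.subscriptions.append(Subscription(user_id, website))

    def handle_bot_determination(self, user_id: str) -> None:
        # Place the unique user identifier on the block list...
        self.block_list.add(user_id)
        # ...then evaluate each subscription entry associated with that
        # identifier and update it with a blocked-user flag.
        for entry in self.subscriptions:
            if entry.user_id == user_id:
                entry.blocked = True

    def may_notify(self, user_id: str, website: str) -> bool:
        if user_id in self.block_list:
            return False
        return any(s.user_id == user_id and s.website == website
                   and not s.blocked for s in self.subscriptions)
```

Note that because the block list is keyed by user identifier rather than by site, blocking a user on one website also blocks them on any other subscribed website, which is the behavior recited in claims 5, 12, and 19.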
Regarding Claim 2, Rieschick teaches: The notification system of claim 1, wherein determining that the first user is to be blocked from receiving notifications comprises applying a heuristic model to the one or more events associated with the first user (Rieschick, para 57, discloses the system may receive parameters identifying certain key characteristics of the IP data flows and/or network traffic from the gateway/PCEF (para 55) reflecting user's web activity (para 67); para 60, Rieschick discloses spam heuristics (e.g., incorrect SMTP headers, origin IP address blacklisting, email not relayed, message body contains blacklisted keywords, message attachment scanning using Optical Character Recognition (OCR) techniques, Artificial Intelligence (AI) based methods such as Bayesian filtering, or any combination of these) to determine if the emails originate from a spam-sending bot. Based on the determination, the system generates a block policy for the data flow.)

Regarding Claim 3, Rieschick teaches: The notification system of claim 2, wherein the application of the heuristic model to the one or more events is done asynchronously to user interaction with the first website (Rieschick, Fig 4A, Steps 406-410 discloses that the gateway/PCEF sends the generated parameter to a policy management system for analysis, then receives one or more policies, and uses the received policy rules to enforce. Para 40, Rieschick also discloses the notification server may be configured to send messages 'in-band' or 'out-of-band' and in-band notification may appear as a new web page, which implies heuristic evaluation is asynchronous to the website session.)
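As a rough illustration of what "applying a heuristic model to the one or more events" (claims 2-4) might look like in practice, here is an invented event-scoring sketch. The feature names, weights, and threshold are all hypothetical; they appear in neither the application nor Rieschick, which instead describes SMTP-level spam heuristics.

```python
# Invented illustration of an event-based bot heuristic. The features,
# weights, and threshold are hypothetical, not drawn from the record.

def bot_score(events: list[dict]) -> float:
    """Score user-interaction events; higher means more bot-like."""
    score = 0.0
    for e in events:
        if e.get("time_since_page_load_ms", 10_000) < 100:
            score += 0.5   # subscribed almost instantly after page load
        if not e.get("mouse_moved", True):
            score += 0.3   # no pointer activity before the event
        if e.get("headless_user_agent", False):
            score += 0.4   # UA string associated with headless browsers
    return score

def is_bot(events: list[dict], threshold: float = 0.7) -> bool:
    return bot_score(events) >= threshold
```

In the claimed arrangement this evaluation runs asynchronously to the user's session (claim 3), e.g. on queued event batches rather than inline in the page request, and the hand-written rules could be replaced by a trained classifier to reach the machine-learning variant of claim 4.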
Regarding Claim 4, Rieschick teaches: The notification system of claim 3, wherein the heuristic model is a machine learning model (Rieschick, para 29, discloses that the system may include machine learning mechanisms to enable the malware protection system to learn about malware-generated transmissions and recognize malware patterns as they develop.)

Regarding Claim 5, Rieschick teaches: The notification system of claim 1, wherein the instructions are further for: receiving one or more events based on user interaction with a second website at the user device of the first user, wherein the second website was provided from a second content provider; and determining the user is blocked from receiving notifications in association with the second website based on the block list or the subscription data store (Rieschick, para 46-47, discloses policies are set per subscriber (made available to a particular subscriber) and decide whether that subscriber's IP flows are blocked or allowed, with that subscriber's status stored in user repository 122 and reinforced by block lists (update a spam engine list in memory, generate rules to block or limit data) (para 68); para 37-39, because the gateway applies those rules to all of that subscriber's flows across the internet/external networks, Rieschick implies the same user will be blocked at a second website based on the block list.)

Regarding Claim 6, Rieschick teaches: The notification system of claim 1, wherein the instructions are further for storing blocking data at the user device, wherein the blocking data is adapted to prevent an edge agent on the user device from requesting notifications from the notification system (Rieschick, para 30, discloses push client and its instructions are data stored on the device.
Those instructions can disable data or change an application's behavior and prevent the client application (that requests from server) from making those requests (para 77); Rieschick, para 39-41 discloses the notification server sending messages to mobile device and client application running on mobile device and also receiving response back.)

Regarding Claim 7, Rieschick teaches: The notification system of claim 1, wherein the instructions are further for: in response to determining that the user is to be blocked, redirecting, by the notification system, the user to a blocked web page of the web site provided by the first content provider (Rieschick, para 46, discloses after the system decides to block; para 40, Rieschick discloses that notification server can send message to mobile devices in band which redirects the user to a new web page (blocked web page) implied by para 46 after system decides to block.)

Regarding Claim 8, Rieschick teaches: A method for fraud detection in a notification system, comprising: (Rieschick, para 39, discloses notification server 128; para 29, Rieschick discloses detection of both known and yet-to-be identified security threats (para 8, internet bot) by performing pattern matching and observed behavior detection methods); maintaining a subscription data store, the subscription data store comprising a set of subscription entries for notifications from the notification system, each subscription entry associated with a unique user identifier for a user and a corresponding web site where the user is subscribed to notifications from the notification system (Rieschick, para 47, discloses the user repository 112 storing subscriber information (e.g., customer IDs, preferences, subscription levels, etc.)
and user-specific information, such as details of a user's subscription to malware prevention services and historical details of a user's exposure to specific malware threats; para 39, Rieschick discloses the notification server 128 may be configured to receive requests from the policy management system 116 (e.g., PCRF) and send messages to mobile devices 102 using various technologies/mechanisms, such as email, short message service (SMS) texts, instant messaging (IM) systems, interactive voice response (IVR) systems; para 29, Rieschick discloses the system also maps specific malware threats identified to users of the network using IP addresses. This implies the user repository maintains subscription-related data for each user, which can be linked to a source of malware/threat (e.g., website) that is being monitored.); maintaining a block list of unique user identifiers at the notification system (Rieschick, para 61, discloses comparing IP address information with a list of known offender IP addresses which implies that the system is maintaining a list of blacklisted addresses); receiving one or more events based on user interaction with a first website at a user device of a first user, wherein the first website was provided from a first content provider (Rieschick, para 57, discloses the system may receive parameters identifying certain key characteristics of the IP data flows and/or network traffic from the gateway/PCEF (para 55) reflecting user's web activity (para 67).); generating the unique user identifier for the first user (Rieschick, para 29, discloses mapping specific malware threats identified to users of the network. Malware threats may be identified using any suitable mechanism, including the use of IP addresses. 
The system maps specific malware threats to users of the network and IP address serves as the unique identifier.); based on the one or more events, determining that the first user is to be blocked from receiving notifications from the notification system because the first user is a bot (Rieschick, para 57, discloses the system extracts binary file characteristic information URL, IP addresses from the parameters; para 58, Rieschick discloses the system checks if a data flow has binary-file characteristics of known malware types (e.g., bots, viruses, worms, Trojan Horses, etc.), if a match is found, then it generates restrictive policy rules which include blocking.); placing the unique user identifier on the block list (Rieschick, para 68, discloses after the identification of bot, the system registers the spam engine i.e., update a spam engine list in memory; para 66, Rieschick discloses registering the destination as being under attack which is the act of placing an identifier/IP address on a block list.); and evaluating each subscription entry associated with the first unique user identifier and, for each subscription entry associated with the first unique user identifier, updating that subscription entry with a blocked user flag (Rieschick, para 47, discloses the system maintains a database with user specific details like customer ID about the malware threats; para 29, Rieschick discloses that the user specific malware detection information which is stored is later used by the system to take action in the future, like making new policy decisions and detecting and preventing future malware events. Rieschick discloses mapping specific malware threats identified to users of the network. Malware threats may be identified using any suitable mechanism, including the use of IP addresses.).
Regarding Claim 9, Rieschick teaches: The method of claim 8, wherein determining that the first user is to be blocked from receiving notifications comprises applying a heuristic model to the one or more events associated with the first user (Rieschick, para 57, discloses the system may receive parameters identifying certain key characteristics of the IP data flows and/or network traffic from the gateway/PCEF (para 55) reflecting user's web activity (para 67); para 60, Rieschick discloses spam heuristics (e.g., incorrect SMTP headers, origin IP address blacklisting, email not relayed, message body contains blacklisted keywords, message attachment scanning using Optical Character Recognition (OCR) techniques, Artificial Intelligence (AI) based methods such as Bayesian filtering, or any combination of these) to determine if the emails originate from a spam-sending bot. Based on the determination, the system generates a block policy for the data flow.)

Regarding Claim 10, Rieschick teaches: The method of claim 9, wherein the application of the heuristic model to the one or more events is done asynchronously to user interaction with the first website (Rieschick, Fig 4A, Steps 406-410 discloses that the gateway/PCEF sends the generated parameter to a policy management system for analysis, then receives one or more policies, and uses the received policy rules to enforce. Para 40, Rieschick also discloses the notification server may be configured to send messages 'in-band' or 'out-of-band' and in-band notification may appear as a new web page, which implies heuristic evaluation is asynchronous to the website session.)

Regarding Claim 11, Rieschick teaches: The method of claim 10, wherein the heuristic model is a machine learning model (Rieschick, para 29, discloses that the system may include machine learning mechanisms to enable the malware protection system to learn about malware-generated transmissions and recognize malware patterns as they develop.)
Regarding Claim 12, Rieschick teaches: The method of claim 8, further comprising: receiving one or more events based on user interaction with a second website at the user device of the first user, wherein the second website was provided from a second content provider; and determining the user is blocked from receiving notifications in association with the second website based on the block list or the subscription data store (Rieschick, para 46-47, discloses policies are set per subscriber (made available to a particular subscriber) and decide whether that subscriber's IP flows are blocked or allowed, with that subscriber's status stored in user repository 122 and reinforced by block lists (update a spam engine list in memory, generate rules to block or limit data) (para 68); para 37-39, because the gateway applies those rules to all of that subscriber's flows across the internet/external networks, Rieschick implies the same user will be blocked at a second website based on the block list.)

Regarding Claim 13, Rieschick teaches: The method of claim 8, further comprising: storing blocking data at the user device, wherein the blocking data is adapted to prevent an edge agent on the user device from requesting notifications from the notification system (Rieschick, para 30, discloses push client and its instructions are data stored on the device. Those instructions can disable data or change an application's behavior and prevent the client application (that requests from server) from making those requests (para 77); Rieschick, para 39-41 discloses the notification server sending messages to mobile device and client application running on mobile device and also receiving response back.)
Regarding Claim 14, Rieschick teaches: The method of claim 8, further comprising: in response to determining that the user is to be blocked, redirecting, by the notification system, the user to a blocked web page of the web site provided by the first content provider (Rieschick, para 46, discloses after the system decides to block; para 40, Rieschick discloses that notification server can send message to mobile devices in band which redirects the user to a new web page (blocked web page) implied by para 46 after system decides to block.)

Regarding Claim 15, Rieschick teaches: A non-transitory computer readable medium, comprising instructions for (Rieschick, para 39, discloses notification server 128; para 29, Rieschick discloses detection of both known and yet-to-be identified security threats (para 8, internet bot) by performing pattern matching and observed behavior detection methods); maintaining a subscription data store, the subscription data store comprising a set of subscription entries for notifications from the notification system, each subscription entry associated with a unique user identifier for a user and a corresponding web site where the user is subscribed to notifications from the notification system (Rieschick, para 47, discloses the user repository 112 storing subscriber information (e.g., customer IDs, preferences, subscription levels, etc.)
and user-specific information, such as details of a user's subscription to malware prevention services and historical details of a user's exposure to specific malware threats; para 39, Rieschick discloses the notification server 128 may be configured to receive requests from the policy management system 116 (e.g., PCRF) and send messages to mobile devices 102 using various technologies/mechanisms, such as email, short message service (SMS) texts, instant messaging (IM) systems, interactive voice response (IVR) systems; para 29, Rieschick discloses the system also maps specific malware threats identified to users of the network using IP addresses. This implies the user repository maintains subscription-related data for each user, which can be linked to a source of malware/threat (e.g., website) that is being monitored.); maintaining a block list of unique user identifiers at the notification system (Rieschick, para 61, discloses comparing IP address information with a list of known offender IP addresses which implies that the system is maintaining a list of blacklisted addresses); receiving one or more events based on user interaction with a first website at a user device of a first user, wherein the first website was provided from a first content provider (Rieschick, para 57, discloses the system may receive parameters identifying certain key characteristics of the IP data flows and/or network traffic from the gateway/PCEF (para 55) reflecting user's web activity (para 67).); generating the unique user identifier for the first user (Rieschick, para 29, discloses mapping specific malware threats identified to users of the network. Malware threats may be identified using any suitable mechanism, including the use of IP addresses. 
The system maps specific malware threats to users of the network and IP address serves as the unique identifier.); based on the one or more events, determining that the first user is to be blocked from receiving notifications from the notification system because the first user is a bot (Rieschick, para 57, discloses the system extracts binary file characteristic information URL, IP addresses from the parameters; para 58, Rieschick discloses the system checks if a data flow has binary-file characteristics of known malware types (e.g., bots, viruses, worms, Trojan Horses, etc.), if a match is found, then it generates restrictive policy rules which include blocking.); placing the unique user identifier on the block list (Rieschick, para 68, discloses after the identification of bot, the system registers the spam engine i.e., update a spam engine list in memory; para 66, Rieschick discloses registering the destination as being under attack which is the act of placing an identifier/IP address on a block list.); and evaluating each subscription entry associated with the first unique user identifier and, for each subscription entry associated with the first unique user identifier, updating that subscription entry with a blocked user flag (Rieschick, para 47, discloses the system maintains a database with user specific details like customer ID about the malware threats; para 29, Rieschick discloses that the user specific malware detection information which is stored is later used by the system to take action in the future, like making new policy decisions and detecting and preventing future malware events. Rieschick discloses mapping specific malware threats identified to users of the network. Malware threats may be identified using any suitable mechanism, including the use of IP addresses.).
Regarding Claim 16, Rieschick teaches: The non-transitory computer readable medium of claim 15, wherein determining that the first user is to be blocked from receiving notifications comprises applying a heuristic model to the one or more events associated with the first user (Rieschick, para 57, discloses the system may receive parameters identifying certain key characteristics of the IP data flows and/or network traffic from the gateway/PCEF (para 55) reflecting user's web activity (para 67); para 60, Rieschick discloses spam heuristics (e.g., incorrect SMTP headers, origin IP address blacklisting, email not relayed, message body contains blacklisted keywords, message attachment scanning using Optical Character Recognition (OCR) techniques, Artificial Intelligence (AI) based methods such as Bayesian filtering, or any combination of these) to determine if the emails originate from a spam-sending bot. Based on the determination, the system generates a block policy for the data flow.)

Regarding Claim 17, Rieschick teaches: The non-transitory computer readable medium of claim 16, wherein the application of the heuristic model to the one or more events is done asynchronously to user interaction with the first website (Rieschick, Fig 4A, Steps 406-410 discloses that the gateway/PCEF sends the generated parameter to a policy management system for analysis, then receives one or more policies, and uses the received policy rules to enforce. Para 40, Rieschick also discloses the notification server may be configured to send messages 'in-band' or 'out-of-band' and in-band notification may appear as a new web page, which implies heuristic evaluation is asynchronous to the website session.)
Regarding Claim 18, Rieschick teaches: The non-transitory computer readable medium of claim 17, wherein the heuristic model is a machine learning model (Rieschick, para 29, discloses that the system may include machine learning mechanisms to enable the malware protection system to learn about malware-generated transmissions and recognize malware patterns as they develop.)

Regarding Claim 19, Rieschick teaches: The non-transitory computer readable medium of claim 15, wherein the instructions are further for: receiving one or more events based on user interaction with a second website at the user device of the first user, wherein the second website was provided from a second content provider; and determining the user is blocked from receiving notifications in association with the second website based on the block list or the subscription data store (Rieschick, para 46-47, discloses policies are set per subscriber (made available to a particular subscriber) and decide whether that subscriber's IP flows are blocked or allowed, with that subscriber's status stored in user repository 122 and reinforced by block lists (update a spam engine list in memory, generate rules to block or limit data) (para 68); para 37-39, because the gateway applies those rules to all of that subscriber's flows across the internet/external networks, Rieschick implies the same user will be blocked at a second website based on the block list.)

Regarding Claim 20, Rieschick teaches: The non-transitory computer readable medium of claim 15, wherein the instructions are further for storing blocking data at the user device, wherein the blocking data is adapted to prevent an edge agent on the user device from requesting notifications from the notification system (Rieschick, para 30, discloses push client and its instructions are data stored on the device.
Those instructions can disable data or change an application's behavior and prevent the client application (that requests from server) from making those requests (para 77); Rieschick, para 39-41 discloses the notification server sending messages to mobile device and client application running on mobile device and also receiving response back.)

Regarding Claim 21, Rieschick teaches: The non-transitory computer readable medium of claim 15, wherein the instructions are further for: in response to determining that the user is to be blocked, redirecting, by the notification system, the user to a blocked web page of the web site provided by the first content provider (Rieschick, para 46, discloses after the system decides to block; para 40, Rieschick discloses that notification server can send message to mobile devices in band which redirects the user to a new web page (blocked web page) implied by para 46 after system decides to block.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIT KHADKA whose telephone number is (703)756-1440. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey L. Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIT KHADKA/
Examiner, Art Unit 2432

/Jeffrey Nickerson/
Supervisory Patent Examiner, Art Unit 2432

Prosecution Timeline

May 20, 2024 — Application Filed
Sep 16, 2025 — Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567042 — NONFUNGIBLE TOKEN PATH SYNTHESIS WITH SOCIAL SHARING
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 20%
With Interview: 0% (-20.0%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
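The headline projections are simple derivations from the examiner's career data: 1 grant out of 5 resolved cases gives the 20% allow rate, and applying the observed -20.0% interview lift yields the 0% with-interview figure. Assuming the dashboard uses plain division and addition (the actual model is not disclosed), the arithmetic checks out:

```python
# Reproduce the dashboard's headline numbers from the stated career data.
# The plain-division/addition model is an assumption, not documented.
granted, resolved = 1, 5
allow_rate = granted / resolved                # "20% Grant Probability"
interview_lift = -0.20                         # stated interview lift
with_interview = allow_rate + interview_lift   # "0% With Interview"
```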
