Prosecution Insights
Last updated: April 19, 2026
Application No. 18/752,090

SYSTEMS AND METHODS FOR AUTOMATICALLY DIVERTING DATA TRANSMISSION FROM COMPROMISED PROCESSING COMPONENTS

Final Rejection §103

Filed: Jun 24, 2024
Examiner: TOLENTINO, RODERICK
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: BANK OF AMERICA CORPORATION
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (545 granted / 705 resolved; +19.3% vs TC avg)
Interview Lift: +35.4% — strong, among resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 25 applications currently pending
Career History: 730 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Deltas are vs a Tech Center average estimate • Based on career data from 705 resolved cases
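The headline 77% figure is simply granted cases over resolved cases. As a quick sanity check of the numbers quoted above (a minimal sketch; the tool's actual methodology is an assumption, not disclosed in this report):

```python
# Sanity check of the dashboard's headline rate. Assumption: the tool divides
# granted cases by resolved cases; its real methodology is not disclosed here.

def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Figures quoted above: 545 granted out of 705 resolved.
print(f"{allow_rate(545, 705):.1f}%")  # 77.3%, displayed as 77%
```

The statute-specific percentages presumably come from the same career data restricted to cases that received each rejection type.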

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

This Office Action is in response to the reply filed on 12/8/2025. Claims 6 and 15 have been cancelled. Claim 21 was added as new. Claims 1-5, 7-14 and 16-21 are pending. This Office Action is Final.

Response to Arguments

Applicant’s arguments with respect to claims 1, 10 and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant’s arguments and amendments regarding 35 USC 101 (abstract idea) have been considered and deemed persuasive. As a result, these rejections have been withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-5, 9-14 and 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Divakaran et al. (US 2024/0323208) in view of Wong et al. (US 2022/0214677), Bingham et al. (US 11,038,906) and Chiu et al. (US 2022/0337602).

As per claim 1, Divakaran teaches a system for automatically diverting data transmissions from compromised processing components, the system comprising: a processing device; a non-transitory storage device containing instructions when executed by the processing device (Divakaran, Paragraph 0070 “As shown, the computer system 20 includes a central processing unit (CPU) 21, a system memory 22, and a system bus 23 connecting the various system components, including the memory associated with the central processing unit 21.”), causes the processing device to perform the steps of: identify a processing component associated with a data transmission (Divakaran, Paragraph 0061 recites “At 504, device monitoring component 106 identifies, from the first plurality of packets, a subset of packets corresponding to a first device of network 101. For example, the first device may be device 102a.
For example, device monitoring component 106 may include packets transmitted and/or received by device 102a during a given period of time (e.g., 10 minutes) in the subset of packets.”); determine whether to transmit the data transmission based on the determination whether the anomaly is present in the processing component or the determination whether the data transmission matches at least one pre-identified compromised data transmission pattern (Divakaran, Paragraphs 0065-0066 recites “At 518, device monitoring component 106 calculates a risk score associated with device 102a based on the deviation, the first probability, and the second probability. For example, the risk score may be a function of these three values. In some aspects, the risk score may be a normal average, a weighted average (where each value is weighted differently), or a median of the three values. For example, the values are 50%, 75%, and 85%, as described, the average (i.e., the risk score) is 70. At 520, device monitoring component 106 determines if the risk score is greater than a preset threshold risk score (e.g., 65). In response to determining that the risk score is not greater than the threshold risk score, method 500 returns to 502, where device monitoring component 106 intercepts a new set of packets. 
In response to determining that the risk score is greater than the threshold risk score, method 500 advances to 522, where device monitoring component 106 determines, using an attack classification model, an attack type of the first device, and executes a remediation action based on the attack type to resolve anomalous behavior in device 102a.”); and generate an alert interface component based on the determination whether the anomaly is present in the processing component or the determination whether the data transmission matches at least one pre-identified compromised data transmission pattern (Divakaran, Paragraph 0067 recites “In some aspects, the remediation action comprises one or more of: factory resetting device 102a, rebooting device 102a, transmitting an alert about the anomalous behavior of device 102a to a network administrator of network 101, blocking of the traffic of a device, re-routing of the traffic of a device to a security middlebox, and removing device 102a from network 101.”). But fails to teach determine, by an artificial intelligence (AI) engine, whether an anomaly is present in the processing component based on identifying central processing unit utilization being overburdened. However, in an analogous art Wong teaches determine, by an artificial intelligence (AI) engine, whether an anomaly is present in the processing component based on identifying central processing unit utilization being overburdened (Wong, Paragraph 0020 recites “In particular embodiments, the microcontroller 213 may determine that an anomalous event has occurred on the electronic device 100 by processing the one or more real-time sensor data with a machine-learning model running on the microcontroller 213. In particular embodiments, the microcontroller 213 may process the one or more real-time sensor data with the machine-learning model 215 at a regular interval. 
In particular embodiments, the machine-learning model 215 may take a snapshot of the one or more real-time sensor data as input. In particular embodiments, the machine-learning model 215 may be a recursive neural network that considers a trend of the one or more real-time sensor data to produce an output. In particular embodiments, the machine-learning model 215 may be a binary classifier that may determine whether an anomalous event has occurred on the electronic device 100. In particular embodiments, the machine-learning model 215 may be a multiclass classifier that may determine what type of anomalous event has occurred. As an example and not by way of limitation, continuing with a prior example, a TinyML-based classifier 215 may be installed on the sensor hub 213. TinyML may be used for embedded machine-learning applications. TinyML may be optimized for resource constrained environments. For example, a TinyML-based machine-learning model may run on a microcontroller with a few kilobytes memory and the processing power in a few megahertz. The sensor hub 213 may process the sensor data from the one or more sensors 211 with the classifier 215 at a predefined interval. The classifier 215 may be a Long Short-Term Memory (LSTM) model that may take a sequence of the sensor data to determine whether an anomalous event has occurred on the mobile phone 100. In particular embodiments, the classifier 215 may produce a binary output indicating whether an anomalous event has occurred on the mobile phone 100 or not. In particular embodiments, the classifier 215 may produce one of a plural values. 
Each of the plural values may indicate a type of anomalous event including, but not limited to, no anomalous event, an anomalous event associated with overloaded processors, an anomalous event associated with high memory utilization, an anomalous event associated with lost network connectivity, an anomalous event associated with high network utilization, an anomalous event associated with high device temperature, or an anomalous event associated with any suitable condition for an anomalous event. Although this disclosure describes determining that an anomalous event has occurred on an electronic device in a particular manner, this disclosure contemplates determining that an anomalous event has occurred on an electronic device in any suitable manner.” And Paragraph 0042 recites “FIG. 8 illustrates a diagram 800 of an example artificial intelligence (AI) architecture 802 that may be utilized to perform determining whether an anomalous event has occurred on an electronic device, in accordance with the presently disclosed embodiments.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Wong’s Detecting anomalous events using a microcontroller with Divakaran’s systems and methods for detecting anomalous behavior in internet-of-things (IOT) devices because it offers the advantage of using AI to help determine anomalous events. But fails to teach determine, by a machine learning model, whether the data transmission matches at least one pre-identified compromised data transmission pattern. However, in an analogous art Bingham teaches determine, by a machine learning model, whether the data transmission matches at least one pre-identified compromised data transmission pattern (Bingham, Col. 5 Lines 24-27 recites “To do so, network traffic data is analyzed using a machine-learning mechanism that identifies potentially compromised or malicious computing devices based on patterns within the network traffic data. 
Once identified, the candidate computing devices are interrogated by the system by transmitting data and/or messages to the candidate computing devices.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Bingham’s Network Threat Validation And Monitoring with Divakaran’s systems and methods for detecting anomalous behavior in internet-of-things (IOT) devices because it offers the advantage of having another layer of security when monitoring network traffic. And fails to teach apply, if the data transmission is determined to match at least one pre-identified compromised data transmission, the data transmission and the determination that the data transmission matches at least one pre-identified compromised data transmission pattern to a smart contract; and trigger, by the smart contract, an automatic block of the data transmission. However, in an analogous art Chiu teaches apply, if the data transmission is determined to match at least one pre-identified compromised data transmission, the data transmission and the determination that the data transmission matches at least one pre-identified compromised data transmission pattern to a smart contract; and trigger, by the smart contract, an automatic block of the data transmission (Chiu, Paragraph 0032 recites “Some embodiments are based on a recognition that the smart contract is configured to detect malicious behavior of a node in the permissioned blockchain network based on analysis of one or more audit logs generated by the networked computer in response to execution of the smart contract and the corresponding transactions of the permissioned blockchain network.” And Paragraph 0115 recites “Once, the audit data logs are submitted in this manner, at step 706, any malicious behavior on part of the node 150 may be detected, and if required, permissions of the node 150 may be revoked.”).
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Chiu’s Blockchain-Based Accountable Distributed Computing System with Divakaran’s systems and methods for detecting anomalous behavior in internet-of-things (IOT) devices because it offers the advantage of automating security measures when malicious behavior is detected.

As per claim 2, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 1, Divakaran further teaches wherein determining whether an anomaly is present in the processing component is further based on identifying data transmission data comprises at least one of a current data transmission data or historical data transmission data (Divakaran, Paragraph 0056 recites “Device monitoring component 106 may then generate a respective deterministic profile based on the extracted deterministic features of the respective device. In some aspects, the respective deterministic profile is a hash table comprising hash values representing the extracted deterministic features. For example, each deterministic feature identified above may be inputted into a hashing algorithm and the output may be stored in the deterministic profile. Subsequently, when the respective device is evaluated at a later time (e.g., during method 500), the values in the deterministic profile may be compared against the deterministic features extracted at that time. In addition to generating a deterministic profile, device monitoring component 106 may generate, for each respective device of the plurality of devices 102, a device-specific training dataset (included in training datasets 118) comprising a plurality of feature vectors labelled by anomalous or non-anomalous classes using the plurality of packets. For example, a given training dataset may include device-specific information about device 102a.
The feature vectors of that training dataset may include one or more features such as a size of each packet associated with the device, time intervals between transmitted/received packets, direction of packets, semantic information of IP addresses and port numbers, sizes of connections (e.g., using 5-tuple information of source/destination IP addresses, source/destination ports, protocol, etc.). The feature vector may further be labelled as anomalous/non-anomalous, although this is not necessary for the proposed unsupervised approach.”).

As per claim 3, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 1, Bingham further teaches wherein the data transmission comprises a plurality of data transmissions, and wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: segment, by the machine learning model, the plurality of data transmissions and generate the data transmission based on the segmented plurality of data transmissions; and automatically block, based on the determination that the data transmission matches at least one pre-identified compromised data transmission, the generated data transmission (Bingham, Col. 11 Lines 12-33 recites “The server computing device 102 may also initiate various remedial measures in order to address the identified threat. In certain implementations, for example, the server computing device 102 may be configured to generate a report, an alert, an alarm, a tag, or any other notification that may be transmitted to another computing device, such as the client computing device 110. In other implementations, the server computing device 102 may automatically deploy executable code or issue commands to reconfigure or otherwise modify equipment within the communications network 101.
For example, the server computing device 102 may send commands to a switch, a firewall, or a similar network component that causes the network component to be reconfigured to filter out or otherwise block network traffic associated with the validated central server component. The server computing device 102 may also deploy or otherwise initiate deployment of executable code, such as patches, to update computing devices within the communication network 101. Such executable code may, for example, cause the computing devices to initiate malware/virus removal operations or close security vulnerabilities of certain software or firmware of the computing devices.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Bingham’s Network Threat Validation And Monitoring with Divakaran’s systems and methods for detecting anomalous behavior in internet-of-things (IOT) devices because it offers the advantage of having another layer of security when monitoring network traffic. 
As per claim 4, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 1, Divakaran further teaches wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: trigger an automatic transmission of the alert interface component to a user device, wherein the user device is associated with at least one of a user account associated with the data transmission or associated with at least one entity account associated with the processing component (Divakaran, Paragraph 0067 recites “In some aspects, the remediation action comprises one or more of: factory resetting device 102a, rebooting device 102a, transmitting an alert about the anomalous behavior of device 102a to a network administrator of network 101, blocking of the traffic of a device, re-routing of the traffic of a device to a security middlebox, and removing device 102a from network 101.” A network administrator could be a user of the device.).

As per claim 5, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 4, Divakaran further teaches wherein the alert interface component comprises an alert indicating the data transmission was transmitted or automatically blocked (Divakaran, Paragraph 0067 recites “In some aspects, the remediation action comprises one or more of: factory resetting device 102a, rebooting device 102a, transmitting an alert about the anomalous behavior of device 102a to a network administrator of network 101, blocking of the traffic of a device, re-routing of the traffic of a device to a security middlebox, and removing device 102a from network 101.”).
As per claim 9, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 4, Divakaran further teaches wherein the determination of whether to transmit the data transmission further comprises an analysis, by an application engine, of a rule database and a comparison of the data transmission to the rule database (Divakaran, Paragraph 0042 recites “In some aspects, the network characteristics used for profiling IoT devices may be expanded over time. Similarly, the attributes used to define an attack type can be augmented using information available from various sources (e.g., a threat intelligence database). The profiles created and revised may also be shared with a central server in a privacy-preserving manner to improve accuracy of anomaly detection.”).

Regarding claims 10 and 16, claims 10 and 16 are directed to a computer program product and a method associated with the system of claim 1. Claims 10 and 16 are of similar scope to claim 1, and are therefore rejected under similar rationale.

Regarding claims 11 and 17, claims 11 and 17 are directed to a computer program product and a method associated with the system of claim 2. Claims 11 and 17 are of similar scope to claim 2, and are therefore rejected under similar rationale.

Regarding claims 12 and 18, claims 12 and 18 are directed to a computer program product and a method associated with the system of claim 3. Claims 12 and 18 are of similar scope to claim 3, and are therefore rejected under similar rationale.

Regarding claims 13 and 19, claims 13 and 19 are directed to a computer program product and a method associated with the system of claim 4. Claims 13 and 19 are of similar scope to claim 4, and are therefore rejected under similar rationale.

Regarding claims 14 and 20, claims 14 and 20 are directed to a computer program product and a method associated with the system of claim 5. Claims 14 and 20 are of similar scope to claim 5, and are therefore rejected under similar rationale.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Divakaran et al. (US 2024/0323208), Wong et al. (US 2022/0214677), Bingham et al. (US 11,038,906) and Chiu et al. (US 2022/0337602) and in further view of Kanso et al. (US 2022/0131888).

As per claim 7, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 1, but fails to teach wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: identify a plurality of processing components and associated current processing component resource capacity for each processing component; and transmit each data transmission to the processing component of the plurality of processing components based on the associated current processing component resource capacity. However, in an analogous art Kanso teaches wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: identify a plurality of processing components and associated current processing component resource capacity for each processing component; and transmit each data transmission to the processing component of the plurality of processing components based on the associated current processing component resource capacity (Kanso, Paragraph 0035 recites “In another example, as described in detail below, vulnerability risk assessment system 102 can further facilitate (e.g., via processor 106): checking specification and privileges associated with the primary-infected pods and the secondary-infected pods, and generating a list of suspect machines, primary-infected machines, and secondary-infected machines; determining total resource capacity that respective infected containers associated with the primary-infected pods and the secondary-infected pods have ability to consume, and generating a total-capacity-at-risk measure; determining permissions
associated with the respective infected containers; generating a contextual risk score and an absolute risk score associated with the primary-infected pods and the secondary-infected pods; assessing bounded capacity of at least one of: processor, memory, or disk; and/or generating a contextual risk score and an absolute risk score associated with the primary-infected pods and the secondary-infected pods, and generating a second contextual risk score and a second absolute risk score associated with the primary-infected pods and the secondary-infected pods based on one or more changes.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Kanso’s context based risk assessment of a computing resource vulnerability with Divakaran’s systems and methods for detecting anomalous behavior in internet-of-things (IOT) devices because it offers the advantage of having another metric to help determine if a resource is compromised or potentially malicious.

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Divakaran et al. (US 2024/0323208), Wong et al. (US 2022/0214677), Bingham et al. (US 11,038,906) and Chiu et al. (US 2022/0337602) and in further view of Cui et al. (US 12,028,362).

As per claim 8, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 1, but fails to teach wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: transmit the data transmission to a recipient account based on the determination to transmit the data transmission in an instance where the anomaly is not present in the processing component and in an instance where the data transmission does not match any of the at least one pre-identified compromised data transmission pattern.
However, in an analogous art Cui teaches wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: transmit the data transmission to a recipient account based on the determination to transmit the data transmission in an instance where the anomaly is not present in the processing component and in an instance where the data transmission does not match any of the at least one pre-identified compromised data transmission pattern (Cui, Col. 9 Lines 33-40 recites “In some embodiments, a decoder 116 can output a normalcy score as the highest probability value in the probability distribution generated by the normalizing function, and the normalcy score can be used by a request processor 112 or threat detection service 106 to determine whether to allow or disallow execution of requests, to generate alerts of non-anomalous events, and the like.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Cui’s Detecting Anomalous Storage Service Events Using Autoencoders with Divakaran’s systems and methods for detecting anomalous behavior in internet-of-things (IOT) devices because it offers the advantage of informing parties that there are no anomalous issues.

Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Divakaran et al. (US 2024/0323208), Wong et al. (US 2022/0214677), Bingham et al. (US 11,038,906) and Chiu et al. (US 2022/0337602) and in further view of Chivu et al. (US 12,238,119).
As per claim 21, Divakaran in view of Wong, Bingham and Chiu teaches the system of claim 1, but fails to teach wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: assign, based on determining the anomaly is present in the processing component or determining the data transmission matches at least one pre-identified compromised data transmission pattern, numeric values to data associated with the data transmission to generate numeric training data; apply the numeric training data to the AI engine; and generate, by the AI engine, symbolic AI data to represent the data of the AI engine. However, in an analogous art Chivu teaches wherein the non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: assign, based on determining the anomaly is present in the processing component or determining the data transmission matches at least one pre-identified compromised data transmission pattern, numeric values to data associated with the data transmission to generate numeric training data; apply the numeric training data to the AI engine; and generate, by the AI engine, symbolic AI data to represent the data of the AI engine (Chivu, Col. 3 Line 61 – Col. 4 Line 15 recites “Upon receiving the event dataset 112, the computer system 130 can use AI models 132 to determine an event cluster baseline 114 for the event dataset 112. A first AI model of the AI models 132 can be a variational auto-encoder trained to determine an anomaly score for each event in the event dataset 112. When available, threat intelligence data 134 of known threat event data and associated anomaly scores may be used during training so that the variational auto-encoder learns how to determine anomaly scores for the event dataset 112. 
During training, the anomaly scores may be used to filter portions of a training event dataset that are to be used for cluster training. For the training, the computer system 130 may compare each anomaly score to a threshold, and events having anomaly scores greater than the threshold may be included in a final training dataset of clustering. During inference, the event dataset 112 may, but need not, be filtered. The events of the event dataset 112 can be input into a second AI model of the AI models 132. The second AI model can be a self-organizing map trained to cluster events. The second AI model can output the event cluster baseline 114. The event cluster baseline 114 can be a set of clusters that group the events.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Chivu’s Determining Threats From Anomalous Events Based On Artificial Intelligence Models with Divakaran’s systems and methods for detecting anomalous behavior in internet-of-things (IOT) devices because it offers the advantage of having up to date training data to help prevent further anomalies.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RODERICK TOLENTINO whose telephone number is (571) 272-2661. The examiner can normally be reached Mon-Fri 8am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached at 571-270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RODERICK TOLENTINO/
Primary Examiner, Art Unit 2439
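For readers tracing the cited art, the risk-scoring gate the examiner quotes from Divakaran (average a deviation score and two probabilities, then trigger remediation only above a preset threshold) reduces to a few lines. The function names below are illustrative, since the reference describes the logic only in prose:

```python
# Sketch of the risk-score gate quoted from Divakaran, Paragraphs 0065-0066.
# Names are illustrative; the reference also permits a weighted average or a
# median in place of the plain average used here.

def risk_score(deviation: float, p1: float, p2: float) -> float:
    """Combine the deviation and the two probabilities into one score."""
    return (deviation + p1 + p2) / 3

def should_remediate(score: float, threshold: float = 65.0) -> bool:
    """Only scores above the preset threshold advance to attack
    classification and a remediation action; otherwise monitoring
    returns to intercepting a new set of packets."""
    return score > threshold

# Worked example from the reference: values of 50%, 75% and 85% average
# to 70, which exceeds the threshold of 65.
score = risk_score(50, 75, 85)
print(score, should_remediate(score))  # 70.0 True
```

This is the distinction the rejection leans on: Divakaran gates remediation on an aggregate score, while the claims gate transmission on an AI anomaly determination or a pattern match, which is why Wong, Bingham and Chiu are brought in.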

Prosecution Timeline

Jun 24, 2024
Application Filed
Sep 05, 2025
Non-Final Rejection — §103
Dec 08, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603907: SERVER AND METHOD FOR PROVIDING ONLINE THREAT DATA BASED ON USER-CUSTOMIZED KEYWORDS FOR PRIVATE CHANNEL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592915: INFERENCE-BASED SELECTIVE FLOW INSPECTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12580946: SYSTEMS AND METHODS FOR TRIGGERING TOKEN ALERTS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580948: CYBERSECURITY OPERATIONS MITIGATION MANAGEMENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572632: SYSTEMS AND METHODS FOR DATA SECURITY MODEL MODIFICATION AND ANOMALY DETECTION (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 99% (+35.4%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 705 resolved cases by this examiner. Grant probability derived from career allow rate.
