Prosecution Insights
Last updated: April 19, 2026
Application No. 17/494,156

Systems and Methods for Adaptive Network Security Based on Unsupervised Behavioral Modeling

Status: Non-Final OA (§103)
Filed: Oct 05, 2021
Examiner: HAILU, TESHOME
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: DRNC Holdings, Inc.
OA Round: 7 (Non-Final)

Predictions
Grant probability: 78% (Favorable)
Expected OA rounds: 7-8
Expected time to grant: 3y 3m
Grant probability with interview: 99%
Examiner Intelligence

Career allowance rate: 78% — above average (543 granted / 698 resolved; +19.8% vs Tech Center average)
Interview lift: +23.7% among resolved cases with an interview
Average prosecution length: 3y 3m
Currently pending: 23 applications
Career total: 721 applications across all art units

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)

Based on career data from 698 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

This Office action is in reply to the applicant communication filed on January 30, 2026.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114.

In Applicant's submission filed on January 30, 2026, claims 5, 7 and 17-19 have been cancelled and claims 1-4, 6, 8-16 and 20-25 have been presented. Claims 1-4, 6, 8-16 and 20-25 are pending.

Response to Arguments

Applicant's arguments filed on January 30, 2026 with respect to the 35 U.S.C. rejections have been fully considered but are moot in view of the new ground(s) of rejection. Applicant argues that the prior art of record fails to teach the limitation, "wherein the rate limiting rule dynamically changes based on at least one of modeled or expected network traffic levels determined from the trained model." However, upon further consideration, a new ground(s) of rejection is made using the newly found prior art to Assarpour (US Pub. No. 2013/0250763).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 6, 8-14, 16, 20-22 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Bernstein (US Pub. No. 2016/0142435) in view of Bono (US Pub. No. 2010/0180333), further in view of Byron (US Pub. No. 2022/0053024), and further in view of Assarpour (US Pub. No. 2013/0250763).

As per claim 1, Bernstein discloses:

A method comprising: evaluating network data to determine a threat risk demonstrated by the network data (paragraph 74 of Bernstein, the systems and/or methods described herein improve functionality of a computer network (and optionally clients and/or servers of the network) by improving security to prevent shut-downs, tying up of network resources, and/or degradation of network performance.
The diversity values may be analyzed to detect security threats which cause the shut-down, use of network resources, and/or degradation of performance).

Wherein evaluating the network data comprises analyzing a set of network parameters associated with the network data using at least one trained model to identify deviation between the set of network parameters and expected network parameters (paragraph 53 of Bernstein, the diversity value is analyzed to determine whether the new network activity is normal or abnormal. Optionally, an abnormality score is calculated based on the diversity value, the analysis being performed based on the abnormality score), (paragraph 192 of Bernstein, a set of rules is applied to the maximum and minimum scores, to classify the received network activity as normal or anomalous), (paragraph 216 of Bernstein, an example of the score function to calculate the maximum and minimum abnormality scores is represented by the relationship Score_min = 1/b^(D̂_max) and Score_max = 1/b^(D̂_min), where b is a parameter, b > 1), and (paragraph 251 of Bernstein, the diversity values are arranged as a diversity time series based on chronological order of the related time slices. The diversity time series may be included within the trained model).

The threat risk being indicative of an extent of deviation between the set of network parameters and the expected network parameters (paragraph 179 of Bernstein, an abnormality score is calculated for the activity word. The abnormality score represents the extent to which the new network activity deviates from normal allowed behavior. The abnormality score is calculated based on the method described with reference to FIGS. 5A and/or 5B, which is a computer implemented method for calculation of the abnormality score, in accordance with some embodiments of the present invention.
The abnormality score is calculated based on the learned network behavior model representing normal network activity).

The at least one trained model having been trained with data indicative of the expected network parameters (paragraph 42 of Bernstein, an aspect of some embodiments of the present invention relates to systems and methods for learning normal network behavior based on an analysis of data access events between network entities, optionally in real time. Examples of network entity types include source machine (i.e., the machine from which network traffic originated), target machine (i.e., the machine to which network traffic is destined), source user (i.e., the connected user on the source machine), and target user (i.e., the user with which actions on target machine are performed)).

In response to the threat risk satisfying at least one criterion, selecting a particular action from a plurality of actions to mitigate the threat risk (paragraph 232 of Bernstein, when the activity is identified as being related to anomalous behavior, an alarm and/or other event is sent to the originating client, a management server, and/or other controller which may take further action to investigate and/or prevent further malicious activity).

Bernstein teaches the method of having a plurality of actions to mitigate the threat risk (see paragraph 232 of Bernstein) but fails to clearly disclose: selecting a particular action from a plurality of actions, wherein the selected particular action is the implementation of a rate limiting rule that prevents one or more computing devices from sending more than a threshold number of network communications in a given interval.

However, in the same field of endeavor, Bono teaches this limitation (paragraph 59 of Bono, the communication is allowed upon successful completion of the challenge (block 416). For example, upon successful completion of a HIP challenge the communication may be communicated to the recipients.
In contrast, the communication is blocked when the challenge is not successfully completed (block 418). In additional examples, the challenge may be a limitation imposed on the sender and/or on the communication. Examples of limitations may include scanning the content forming the communication for malware, limiting the number of recipients that may be targeted, delaying the communication, and so on. Other limitations may include limiting a number of communications that may be sent within a time period, limiting a number of recipients that may be targeted, and so on. In other instances, the communication may be blocked when the check indicates that the reputation level for the communication does not meet the specified reputation level. A variety of other examples are also contemplated without departing from the spirit and scope thereof).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Bernstein and include the above limitation using the teaching of Bono in order to secure the computing system by preventing communication abuse using the implemented rule of the network (see abstract and paragraph 59 of Bono).

The combination of Bernstein and Bono teaches the method of analyzing a set of network parameters associated with the network data using at least one trained model (see paragraphs 53, 192 and 216 of Bernstein) but fails to clearly disclose: inputting a set of network parameters associated with network data to at least one trained model, and receiving from the at least one trained model information indicative of the detection of a threat.
However, in the same field of endeavor, Byron teaches this limitation (paragraph 6 of Byron, the training module may be adapted to receive transmitted log information from a plurality of edge nodes, apply a rule-based algorithm to the transmitted log information to categorize a first batch of data as included in a security analysis, a second batch of data as excluded from the security analysis, and a third batch of data as actually reviewed in the security analysis based on a user selection, train a classifier based on outcomes of the rule-based algorithm, convert the classifier to run as a trained model executable on the plurality of nodes, and transmit the trained model executable to the plurality of edge nodes. The agent may be adapted to receive the trained model executable; assign a priority score to a plurality of records using the trained model executable, receive a first pulse from a collector, select a first set of records for transmission based at least in part on the priority score and on the first pulse from the collector, and transmit the first set of records to the collector).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Bernstein and Bono to include the above limitation using the teaching of Byron in order to secure the distributed computing system by limiting transmission of logging information in the distributed system and reducing congestion (see paragraph 18 of Byron).

The combination of Bernstein, Bono and Byron teaches the method of analyzing a set of network parameters associated with the network data using at least one trained model (see paragraphs 53, 192 and 216 of Bernstein) but fails to clearly disclose: wherein the rate limiting rule dynamically changes based on at least one of modeled or expected network traffic levels determined from the trained model.
However, in the same field of endeavor, Assarpour teaches this limitation (paragraph 14 of Assarpour, the meters are adjusted as ports are put into service or removed from service, and as services are applied to ports. In one embodiment the meters are implemented both on a per protocol and per port basis. As additional ports are activated and as a protocol is enabled on additional ports, the meters associated with the protocol are increased to accommodate increased amounts of control traffic at the network element. As ports are deactivated, the filters are likewise dynamically adjusted downwards to account for lower expected control volume from the fewer number of ports) and (paragraph 34 of Assarpour, a user may set policy to be applied in connection with adjusting the meters. The policy, in this context, implies the ability of a user such as a network manager to specify the granularity of the protocol meters and expected traffic metrics, such as an expected number of control packets per active protocol instance per port in a given time period. Given the specified policy, however, the dynamic nature of the meter creation and adjustment enables the network element to apply the specified policy to adjust the meters as the configuration of the network element changes without requiring specific intervention by the network manager in connection with reconfiguration of the network element).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Bernstein, Bono and Byron to include the above limitation using the teaching of Assarpour in order to dynamically adjust the meters of the network to prevent overload of the communication of the network system (see paragraph 35 of Assarpour).
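For illustration, the disputed limitation — a rate limiting rule whose per-interval threshold changes dynamically with modeled or expected traffic levels — might be sketched as follows. This is a hypothetical sketch of the claimed behavior (and of Assarpour-style meter adjustment), not code from the application or any cited reference; the model interface, the headroom factor, and all names are assumptions.

```python
# Hypothetical sketch: a rate-limiting rule whose message threshold tracks
# the traffic level expected by a trained model, per the claim limitation
# at issue. All names and the headroom factor are illustrative assumptions.
from collections import defaultdict

class DynamicRateLimiter:
    def __init__(self, expected_model, headroom=1.5):
        self.expected_model = expected_model  # callable: interval -> expected message count
        self.headroom = headroom              # allowed multiple of the expected traffic
        self.counts = defaultdict(int)        # (device, interval) -> messages seen so far

    def threshold(self, interval):
        # The rule changes dynamically as the model's expected traffic level changes.
        return self.expected_model(interval) * self.headroom

    def allow(self, device, interval):
        """Return True if the device may send another message in this interval."""
        if self.counts[(device, interval)] >= self.threshold(interval):
            return False  # over the modeled limit: block the communication
        self.counts[(device, interval)] += 1
        return True
```

For example, with a model that expects 2 messages per interval and a headroom of 1.5, the limiter admits three messages from a device in an interval and blocks the fourth.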
Claims 16 and 20 are rejected for the same reasons set forth in the rejection of claim 1.

As per claim 2, Bernstein in view of Bono and further in view of Byron discloses: The method of claim 1, wherein selecting the particular action from the plurality of actions further comprises: classifying a type of attack associated with the deviation from the expected network parameters based on one or more parameters of the set of network parameters; and selecting […]. (Paragraph 21 of Bernstein, wherein each respective diversity value represents a certain relationship between at least one network entity and at least one network entity type; classify the at least one network activity as anomalous or normal based on a calculated abnormality score; and generating an alert when the at least one network activity is classified as anomalous).

As per claim 3, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, wherein: the data indicative of the expected network parameters, that was used to train the at least one trained model, is associated with a first plurality of computing devices, the network data is associated with a second plurality of computing devices different than the first plurality of computing devices, and the method further comprises performing the particular action with respect to communications issued by one or more devices of the second plurality of computing devices.
(paragraph 21 of Bernstein, an anomaly detecting server in communication with the network, the server configured to: receive data representing at least one network activity within the network, each network activity representing a certain data access event occurring between certain network entities in the network; calculate an abnormality score for the received at least one network activity based on a retrieved at least one relevant diversity value, the at least one relevant diversity value obtained by extracting from the data representing each respective network activity, the certain network entities involved in the respective network activity, and retrieving the at least one relevant diversity value from a network behavior model based on the extracted certain network entities, wherein the network behavior model includes at least one diversity value, wherein each respective diversity value represents a certain relationship between at least one network entity and at least one network entity type; classify the at least one network activity as anomalous or normal based on a calculated abnormality score; and generating an alert when the at least one network activity is classified as anomalous).

As per claim 6, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, further comprising: training the at least one trained model with the data indicative of the expected network parameters and with second data indicative of at least one type of network attack.
(Paragraph 7 of Bernstein, calculating the abnormality score and classifying the at least one network activity comprises: calculating a first abnormality score using a first combination of relevant diversity values; calculating a second abnormality score using a second combination of relevant diversity values; designating a lower of the first and the second abnormality scores as a minimum score, and designating a higher of the first and the second abnormality scores as a maximum score; and at least one member of the group consisting of: classifying the at least one received network activity as normal when the maximum score is below a predefined threshold, classifying the at least one received network activity as anomalous when the minimum score is above the predefined threshold, classifying the at least one received network activity as normal when the average of the minimum and the maximum score is below the threshold, and classifying the at least one received network activity as anomalous when the average of the minimum score and the maximum score is above the predefined threshold).

As per claim 8, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, further comprising performing the particular action, wherein performing the particular action comprises: defining a network security rule based on the expected network parameters for the set of network parameters; and configuring a computing device at a destination of at least some of the network data with the network security rule. (Paragraph 192 of Bernstein, a set of rules is applied to the maximum and minimum scores, to classify the received network activity as normal or anomalous).
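The min/max abnormality scoring that the examiner repeatedly maps from Bernstein (paragraphs 7, 192 and 216) can be illustrated with a short sketch. This is an illustrative reconstruction, not Bernstein's implementation: the score function Score = 1/b^D̂ with b > 1 follows paragraph 216, the classification rules follow paragraph 7, and the function names, threshold, and base are assumptions.

```python
# Illustrative sketch of Bernstein's min/max abnormality scoring
# (paras. 7, 192, 216). Names, threshold, and base b are assumptions.

def abnormality_score(diversity: float, b: float = 2.0) -> float:
    """Score = 1 / b**D (para. 216): lower diversity -> higher abnormality."""
    return 1.0 / (b ** diversity)

def classify(diversity_first, diversity_second, threshold=0.5, b=2.0):
    """Apply the para. 7 rule set to scores from two combinations of diversity values."""
    first = abnormality_score(diversity_first, b)
    second = abnormality_score(diversity_second, b)
    score_min, score_max = min(first, second), max(first, second)
    if score_max < threshold:
        return "normal"      # even the maximum score looks normal
    if score_min > threshold:
        return "anomalous"   # even the minimum score looks anomalous
    # Otherwise, classify by the average of the minimum and maximum scores (para. 7).
    return "normal" if (score_min + score_max) / 2 < threshold else "anomalous"
```

With b = 2, for instance, a high-diversity activity (D = 4 gives a score of 0.0625) falls well under a 0.5 threshold, while two low-diversity values push both scores above it and yield an anomalous classification.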
As per claim 9, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 8, wherein the network security rule comprises one or more of: a rate limiting rule that prevents one or more computing devices from sending more than a threshold number of network communications in a given interval, an interarrival rule that prevents one or more computing devices from repeating a pattern of network communication, or a cardinality rule that prevents one or more computing devices from sending network communications with one or more values for one or more parameters of the set of network parameters. (Paragraph 74 of Bernstein, the diversity values may be analyzed to detect security threats which cause the shut-down, use of network resources, and/or degradation of performance. For example, abnormal activity that shuts down computers is reduced or prevented, abnormal activity that ties up network resources, such as malicious code accessing bandwidth to repeatedly attempt to break into a host, is reduced or prevented, and/or degradation of network performance due to malicious code infected computers is reduced or prevented).

As per claim 10, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, wherein selecting the particular action from the plurality of actions comprises: selecting a first action from the plurality of actions as the particular action in response to the threat risk being less than a threshold; and selecting a second action from the plurality of actions as the particular action in response to the threat risk being greater than the threshold.
(Paragraph 7 of Bernstein, calculating the abnormality score and classifying the at least one network activity comprises: calculating a first abnormality score using a first combination of relevant diversity values; calculating a second abnormality score using a second combination of relevant diversity values; designating a lower of the first and the second abnormality scores as a minimum score, and designating a higher of the first and the second abnormality scores as a maximum score; and at least one member of the group consisting of: classifying the at least one received network activity as normal when the maximum score is below a predefined threshold, classifying the at least one received network activity as anomalous when the minimum score is above the predefined threshold, classifying the at least one received network activity as normal when the average of the minimum and the maximum score is below the threshold, and classifying the at least one received network activity as anomalous when the average of the minimum score and the maximum score is above the predefined threshold).

As per claim 11, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 10, wherein: the first action comprises generating an alert to a user of the network data that deviates from the expected network parameters, and the second action comprises triggering blocking of network communications that deviate from the expected network parameters. (Paragraph 21 of Bernstein, generating an alert when the at least one network activity is classified as anomalous) and (paragraph 76 of Bernstein, improvements in network performance based on detection and blocking of the abnormal activity by analyzing diversity values include: improved memory usage by removal of the improper code, improvement in network throughput, latency and/or reliability by blocking of improper network connection access).
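Claims 10 and 11 recite a two-tier response: a first action (an alert that leaves traffic flow unchanged) when the threat risk is below a threshold, and a second action (blocking, which alters the flow) when it is above. A minimal sketch of that selection logic follows; the threshold value and action names are illustrative assumptions, not taken from the application or the cited art.

```python
# Minimal sketch of the two-tier action selection recited in claims 10-11:
# alert below the threat-risk threshold, block at or above it.
# The threshold value and action labels are illustrative assumptions.

def select_action(threat_risk: float, threshold: float = 0.7) -> str:
    if threat_risk < threshold:
        return "alert"  # first action: notify a user; traffic flow is unchanged
    return "block"      # second action: alter the flow of the offending traffic
```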
As per claim 12, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 10, wherein: the second action alters a flow of network data subject to the second action, and the first action does not alter the flow of network data subject to the first action. (Paragraph 21 of Bernstein, generating an alert when the at least one network activity is classified as anomalous) and (paragraph 76 of Bernstein, improvements in network performance based on detection and blocking of the abnormal activity by analyzing diversity values include: improved memory usage by removal of the improper code, improvement in network throughput, latency and/or reliability by blocking of improper network connection access).

As per claim 13, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, further comprising performing the particular action, wherein performing the particular action comprises: applying the particular action to subsequent network data originating from a […]. (Paragraph 7 of Bernstein, calculating the abnormality score and classifying the at least one network activity comprises: calculating a first abnormality score using a first combination of relevant diversity values; calculating a second abnormality score using a second combination of relevant diversity values; designating a lower of the first and the second abnormality scores as a minimum score, and designating a higher of the first and the second abnormality scores as a maximum score; and at least one member of the group consisting of: classifying the at least one received network activity as normal when the maximum score is below a predefined threshold, classifying the at least one received network activity as anomalous when the minimum score is above the predefined threshold, classifying the at least one received network activity as normal when the average of the minimum and the maximum score is below the threshold, and classifying the at least one
received network activity as anomalous when the average of the minimum score and the maximum score is above the predefined threshold).

As per claim 14, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, further comprising performing the particular action, wherein performing the particular action comprises: applying the particular action to network data comprising particular content. (Paragraph 7 of Bernstein, calculating the abnormality score and classifying the at least one network activity comprises: calculating a first abnormality score using a first combination of relevant diversity values; calculating a second abnormality score using a second combination of relevant diversity values; designating a lower of the first and the second abnormality scores as a minimum score, and designating a higher of the first and the second abnormality scores as a maximum score; and at least one member of the group consisting of: classifying the at least one received network activity as normal when the maximum score is below a predefined threshold, classifying the at least one received network activity as anomalous when the minimum score is above the predefined threshold, classifying the at least one received network activity as normal when the average of the minimum and the maximum score is below the threshold, and classifying the at least one received network activity as anomalous when the average of the minimum score and the maximum score is above the predefined threshold).

As per claim 21, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, further comprising: establishing a threat risk threshold based on an analysis of at least one log of compiled network data, wherein the threat risk threshold is associated with a particular threat risk level.
(Paragraph 177 of Bernstein, received network activities translated into words are analyzed to determine when the respective activity word already exists within the network behavior model. The analysis may be performed by an activity record analysis module 206C stored on and/or in communication with anomaly detecting server 208 and/or learning server 204, for example, by looking up the respective activity word in a dataset of existing words to determine whether the word is present in the dataset or not).

As per claim 22, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 21, further comprising: determining whether the threat risk satisfies the at least one criterion, wherein determining whether the threat risk satisfies the at least one criterion comprises comparing the threat risk associated with the network data to the threat risk threshold to identify an attack type. (Paragraph 224 of Bernstein, the abnormality score is compared against a predefined threshold to determine whether the activity is related to anomalous behavior or normal behavior).

As per claim 24, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, wherein selecting the particular action comprises: configuring the particular action based on an evaluation of values associated with the threat risk. (Paragraph 205 of Bernstein, the abnormality score is calculated from the unified diversity value based on a function that increases the abnormality score when the unified diversity value decreases, and/or decreases the abnormality score when the unified diversity value increases).

As per claim 25, Bernstein in view of Bono, further in view of Byron, and further in view of Assarpour discloses: The method of claim 1, wherein the particular action comprises: triggering at least one modification of the network data to mitigate the threat risk.
(Paragraph 92 of Bernstein, network activity from network 202 is received by learning server 204 to update model 206B with new normal behavior and/or changes in normal behavior. Alternatively or additionally, network activity from network 202 is received by anomaly detecting server 208 to identify anomalous behavior).

Claims 4, 15 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Bernstein (US Pub. No. 2016/0142435) in view of Bono (US Pub. No. 2010/0180333), further in view of Byron (US Pub. No. 2022/0053024), further in view of Assarpour (US Pub. No. 2013/0250763), and further in view of Burns (US Pub. No. 2016/0248805).

As per claim 4: The combination of Bernstein, Bono, Byron and Assarpour teaches the method of having a plurality of actions to mitigate the threat risk (see paragraph 232 of Bernstein) but fails to clearly disclose: The method of claim 1, wherein the particular action comprises: triggering a verification of a computing device that communicated one or more messages that are within the network data and have one or more parameters of the set of network parameters deviating from the expected network parameters.

However, in the same field of endeavor, Burns teaches this limitation (paragraph 27 of Burns, the host verification unit 112 is configured to determine whether a host such as a particular computer 104, 106, 108 in the network 102 is a verified computer. "Verified," in this context, means that the subject computer has undergone a series of testing and configuration steps that provide a level of confidence with respect to the security software that is installed on that subject computer. For example, a verified computer may be one that is known to have anti-virus software installed of a particular type and with a particular set of updates, and that has been recently scanned to confirm the absence of malware or viruses.
Verification also could include determining that required software patches for an operating system or applications have been installed).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Bernstein, Bono, Byron and Assarpour to include the above limitation using the teaching of Burns in order to secure the computing system by verifying the computing device using a series of testing and configuration steps (see paragraph 27 of Burns).

As per claim 15: The combination of Bernstein, Bono and Byron teaches the method of having a plurality of actions to mitigate the threat risk (see paragraph 232 of Bernstein) but fails to clearly disclose: The method of claim 14, wherein performing the particular action comprises: generating a user interface that presents one or more of the network data as anomalous behavior, and that presents a selectable element for activating a restriction against network data exhibiting the anomalous behavior; and activating the restriction in response to user selection of the selectable element.

However, in the same field of endeavor, Burns teaches this limitation (paragraph 54 of Burns, FIG. 3 illustrates an example graphical user interface that may be provided in an embodiment. In one embodiment, the threat scoring unit 120 may be configured to generate and display a screen display 302 in the form of an HTML document for display using a browser on display unit 105. In an embodiment, screen display 302 comprises a threat score indicator 304, actions table 306, user table 308, attack table 310, machine table 312, map 314, and recommendation region 316).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Bernstein, Bono and Byron to include the above limitation using the teaching of Burns in order to secure the computing system by displaying information to the device/user and receiving a response from the user/device (see paragraph 54 of Burns).

As per claim 23: The combination of Bernstein, Bono, Byron and Assarpour teaches the method of having a plurality of actions to mitigate the threat risk (see paragraph 232 of Bernstein) but fails to clearly disclose: The method of claim 1, further comprising: in response to determining that communications from a computing device match the expected network parameters, disabling a verification action for the computing device.

However, in the same field of endeavor, Burns teaches this limitation (paragraph 27 of Burns, the host verification unit 112 is configured to determine whether a host such as a particular computer 104, 106, 108 in the network 102 is a verified computer. "Verified," in this context, means that the subject computer has undergone a series of testing and configuration steps that provide a level of confidence with respect to the security software that is installed on that subject computer. For example, a verified computer may be one that is known to have anti-virus software installed of a particular type and with a particular set of updates, and that has been recently scanned to confirm the absence of malware or viruses. Verification also could include determining that required software patches for an operating system or applications have been installed. Data indicating that particular computers are verified may be stored in tables in database 140 and the host verification unit 112 may determine whether a host is verified by sending a query to the database.
In some cases, unverified hosts may be treated differently than verified hosts; for example, the automated remediation techniques described herein might be applied only to verified hosts). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Bernstein, Bono, Byron and Assarpour to include the above limitation using the teaching of Burns in order to secure the computing system by verifying the computing device using a series of testing and configuration steps (see paragraph 27 of Burns).

Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant’s disclosure, is Mitomo (US Pub. No. 2007/0011745). Mitomo discloses: A computer-readable recording medium recording a worm detection parameter setting program for setting an appropriate worm detection parameter for target environments. When a log reader loads a communication log created within a prescribed time period, a log classifier classifies the entries of the communication log into categories based on communication contents. A frequency distribution creator analyzes the entries of a category, counts the number of appearances of each worm detection parameter value for each object of a preset network unit, and creates frequency distribution information. A threshold derivation unit analyzes the frequency distribution information and derives a threshold value that is used for determining whether a worm is propagating. An output unit outputs to an output device the threshold value for the worm detection parameter for the category, together with the frequency distribution information created by the frequency distribution creator, thereby providing a user with the information.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TESHOME HAILU whose telephone number is (571)270-3159. The examiner can normally be reached M-F 8 a.m.
- 5 p.m. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ali Shayanfar, can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TESHOME HAILU/
Primary Examiner, Art Unit 2434
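The Mitomo reference's threshold-derivation flow (classify log entries into categories, build a per-category frequency distribution of a worm-detection parameter, then derive a per-category threshold) can be sketched roughly as follows. This is an illustrative simplification, not Mitomo's actual method: the entry format, the choice of parameter (e.g., distinct destinations contacted per host in the window), and the mean-plus-k-sigma threshold rule are all assumptions made here for the sketch.

```python
from collections import defaultdict
import statistics

# Hypothetical communication-log entries: (source_host, category, parameter_value),
# where parameter_value stands in for a worm-detection parameter such as the
# number of distinct destinations a host contacted in the observation window.
log = [
    ("10.0.0.1", "smtp", 3), ("10.0.0.2", "smtp", 4),
    ("10.0.0.3", "smtp", 2), ("10.0.0.4", "smtp", 5),
    ("10.0.0.5", "http", 12), ("10.0.0.6", "http", 9),
]

def derive_thresholds(entries, k=3.0):
    """Group entries by category, treat the collected parameter values as the
    frequency distribution for that category, and derive a threshold as
    mean + k * population stdev (one simple, common choice -- an assumption,
    not a rule mandated by the reference)."""
    by_category = defaultdict(list)
    for _, category, value in entries:
        by_category[category].append(value)
    thresholds = {}
    for category, values in by_category.items():
        mu = statistics.mean(values)
        sigma = statistics.pstdev(values)
        thresholds[category] = mu + k * sigma
    return thresholds

# A host whose parameter value exceeds its category threshold would be
# flagged as possibly propagating a worm.
print(derive_thresholds(log))
```

The per-category grouping mirrors the log classifier, and the returned mapping corresponds to the per-category threshold the output unit would present to the user alongside the distribution.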

Prosecution Timeline

Oct 05, 2021: Application Filed
Dec 15, 2023: Non-Final Rejection — §103
Mar 26, 2024: Response Filed
Jun 20, 2024: Final Rejection — §103
Oct 28, 2024: Response after Non-Final Action
Nov 07, 2024: Request for Continued Examination
Nov 12, 2024: Response after Non-Final Action
Nov 25, 2024: Non-Final Rejection — §103
Feb 28, 2025: Response Filed
Mar 13, 2025: Final Rejection — §103
Jun 18, 2025: Request for Continued Examination
Jun 22, 2025: Response after Non-Final Action
Jun 26, 2025: Non-Final Rejection — §103
Sep 29, 2025: Response Filed
Sep 30, 2025: Final Rejection — §103
Dec 02, 2025: Response after Non-Final Action
Jan 30, 2026: Request for Continued Examination
Feb 05, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602464
PERIPHERAL DEVICE SANDBOX
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598214
PROCESSING AUTHENTICATION REQUESTS FOR UNIFIED ACCESS MANAGEMENT SYSTEMS AND APPLICATIONS USING FREQUENTLY INVOKED POLICIES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598217
Analyzing Cloud-Based Services for Compliance with Multiple Regulations
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12587372
SINGLE REQUEST ARCHITECTURE FOR INCREASING EFFICIENCY OF SECURE MULTI-PARTY COMPUTATIONS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12580947
BROWSER SECURITY VIA DOCUMENT OBJECT MODEL MANIPULATION
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+23.7%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 698 resolved cases by this examiner. Grant probability derived from career allow rate.
