Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,392

BEHAVIOR DETECTION WITH DETECTION REFINEMENT FOR DETERMINATION OF EMERGING THREATS

Non-Final Office Action (§102, §103)
Filed: Jun 19, 2023
Examiner: DAY, JASMINE MOCHEN
Art Unit: 2499
Tech Center: 2400 (Computer Networks)
Assignee: Arm Limited
OA Round: 3 (Non-Final)
Grant Probability: 92% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (11 granted / 12 resolved), +33.7% vs Tech Center average (above average)
Interview Lift: +33.3% on resolved cases with an interview
Average Prosecution: 2y 11m (18 applications currently pending)
Career History: 30 total applications across all art units

Statute-Specific Performance

§101: 1.3% (-38.7% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 35.3% (-4.7% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 12 resolved cases.

Office Action

Rejections under 35 U.S.C. §§ 102 and 103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

The following is a non-final Office Action in response to communications received February 12, 2026. Claims 1-3, 7, 14-16, and 22 are amended. Claims 4 and 5 are canceled. Claims 1-3 and 6-22 are pending and addressed below.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office Action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/12/2026 has been entered.

Response to Amendment

Applicant's amendment to claim 22 is sufficient to overcome the 35 U.S.C. 112(b) rejection set forth in the previous Office Action. The rejection under 35 U.S.C. 112(b) is accordingly withdrawn.

Response to Arguments

Applicant's arguments filed 02/12/2026 have been fully considered but are not persuasive, for the following reasons. Applicant's arguments with respect to the rejections of amended claims 1 and 15 under 35 U.S.C. 102(a)(1) are moot because the new ground of rejection does not rely on any citation applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A new ground of rejection under 35 U.S.C. 102(a)(1) is made in view of Carlson et al. (US PG-PUB No. 20220335127 A1); see the rejection details below. Claims 1 and 15 are therefore rejected under 35 U.S.C. 102(a)(1).
Because claims 2-3 and 6-14 depend directly or indirectly from claim 1, and claims 16-22 depend directly or indirectly from claim 15, applicant's arguments with respect to the rejections of claims 2-3, 6-14, and 16-22 are likewise moot.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office Action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 6-8, 12-13, and 15-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Carlson et al. (US PG-PUB No. 20220335127 A1).
Regarding claims 1 and 15, Carlson teaches a method and a detector, comprising a refinement detection processor coupled to receive precursor alerts from a precursor detector, the refinement detection processor having instructions for: receiving at least two precursor alerts from a precursor detector that detects events from a processing unit, wherein a first alert of the precursor alerts is for a first event of the events, wherein a second alert of the precursor alerts is for a second event of the events, wherein the events represent software behavior of a circuit comprising the processing unit, wherein the precursor detector is part of the circuit, wherein each precursor alert comprises information of a respective precursor event including a score indicating a confidence level in a classification assigned to a respective precursor event; detecting the first event indicating undesirable behavior associated with the processing unit and including a first score that is above a first value; responsive to detecting the first event: setting a timer for a period of time; detecting the second event indicating undesirable behavior associated with the processing unit and including a second score that is above a second value; accumulating, by an accumulator that is part of the circuit, a score update based on the first score and the second score; upon the score update reaching or exceeding a threshold value within the period of time: generating a refined alert indicating the processing unit has been attacked based on the software behavior by correlating at least two events of undesirable behavior occurring within the period of time (Paragraph [0026]: "FIG. 1 is a block diagram of a system 100 (a circuit) that includes illustrative central processing unit (CPU) 110 (processing unit) that includes performance monitoring circuitry 120 (coupling with the machine learning model in the processing unit working as the precursor detector to monitor, track, detect events, and count the security alerts), control circuitry 130 (coupling with the machine learning model in the processing unit working as the refinement detection processor - accumulator), and machine learning circuitry 140"; Paragraph [0057]: "The performance monitoring circuitry 120 gathers information on one or more system parameters at the kernel (ring 0) level (receiving precursor alerts from a precursor detector that detects events from a processing unit - see below for the at least two precursor alerts). The performance monitoring circuitry 120 includes a first counter to track CPU cache misses (a first accumulator that tracks and detects the CPU cache misses as the first event indicating undesirable software behavior as the first precursor alert) and a second counter to track DTLB load misses (a second accumulator that tracks and detects the DTLB load misses as the second event indicating undesirable software behavior as the second precursor alert). Using pattern recognition and/or trend analysis, at times enhanced by machine learning, the control circuitry 130 analyzes the CPU cache miss to DTLB load miss ratio 136, recognizes a sharp increase in the ratio indicative of a side-channel exploit attack and generates an alert to the system user or administrator (each precursor alert comprises information of a respective precursor event including a score indicating a confidence level in a classification assigned to a respective precursor event)."; Paragraph [0020]: "A central processing unit (CPU) is provided. The CPU may include: performance monitoring circuitry that includes: first counter circuitry to provide a first value that corresponds to a number of CPU cache misses that occur over each of a plurality of time intervals (detecting the first event indicating undesirable behavior - CPU cache misses - associated with the processing unit and including a first score that is above a first value; responsive to detecting the first event: setting a timer for a period of time); and second counter circuitry to provide a second value that corresponds to a number of data translation lookaside buffer (DTLB) load misses that occur over each of the plurality of time intervals (detecting the second event indicating undesirable behavior - DTLB load misses - associated with the processing unit and including a second score that is above a second value). The CPU may further include control circuitry (refinement detection processor) to: receive from the performance monitoring circuitry data representative of the first value and data representative of the second value; calculate a CPU cache miss/DTLB load miss ratio based on the first value divided by the sum of the first value and the second value (accumulating and calculating a score update based on the first score and the second score); identify a trend based on the CPU cache miss/DTLB load miss ratio over the plurality of time intervals (upon the score update reaching or exceeding a threshold value - identify a trend - within the period of time); and generate an output indicative of a side channel exploit execution responsive to an identification of a deviation in the trend based on the CPU cache miss/DTLB load miss ratio (generating a refined alert - output - indicating the processing unit has been attacked based on the software behavior by correlating the two events of undesirable behavior occurring within the period of time)."; see paragraph [0021] for the method details; Paragraph [0040]: "the performance monitoring circuitry 120 may generate the interrupt when either or both the CPU cache miss count and/or the DTLB load miss count exceeds one or more user or system configurable count thresholds (upon the score update reaching or exceeding a threshold value within the period of time). The control circuitry 130 determines the CPU cache miss to DTLB load miss ratio 136.sub.1-136.sub.n for each of a plurality of time intervals 138.sub.1-138.sub.n. The control circuitry 130, upon detecting the elevated CPU cache miss to DTLB load miss ratio, may generate an alert indicative of a potential side-channel exploit attack (generating a refined alert indicating the processing unit has been attacked).").

Regarding claims 2 and 16, Carlson teaches all of the features with respect to claims 1 and 15, as outlined above. Carlson further teaches the method and the detector further comprising: upon the score update reaching or exceeding a first threshold value within the period of time, referred to as a first period of time: providing the score update to a second accumulator; setting a second timer for a second period of time; detecting a second event in the precursor alerts indicating undesirable behavior and including a second score that is above a second value; accumulating, by the second accumulator, the score update with scores of detected second events during the second period of time; upon the score update reaching or exceeding a second threshold value within the second period of time: generating the refined alert (Paragraph [0066]: "At 504, the performance monitoring circuitry 120 generates an interrupt responsive to detecting an overflow condition (upon the score update reaching or exceeding a first threshold value within the period of time) in either (or both) the first counter circuitry 122 providing the CPU cache miss counter and/or the second counter circuitry 124 providing the DTLB load miss counter."; Paragraph [0067]: "At 506, the performance monitoring circuitry 120 maps the
interrupt to a process identifier (PID) (referred to as a first period of time)."; Paragraph [0068]: "At 508, the performance monitoring circuitry 120 transfers the data representative of the CPU cache miss count 132 and the DTLB load miss count 134 to control circuitry executing at the user (i.e., ring 3) level."; Paragraph [0069]: "At 510, the control circuitry 130 may store the received information and/or in temporal buckets or similar data stores and/or data structures that correspond to each of the intervals included in the plurality of temporal intervals"; Paragraph [0070]: "At 512, the control circuitry 130 calculates or otherwise determines a CPU cache miss to DTLB load miss ratio for each of at least some of the plurality of temporal intervals."; Paragraph [0072]: "At 514, the control circuitry 130 determines whether a deviation in the detected pattern or determined trend of the CPU cache miss to DTLB load miss ratio 136 indicates a side-channel exploit attack. If the control circuitry 130 determines no evidence of a side-channel exploit attack, the method 500 returns to 506, and the control circuitry 130 receives additional CPU cache miss count 132 and DTLB load miss count 134 information from the performance monitoring circuitry 120 (providing the score update to a second accumulator; setting a second timer for a second period of time; detecting a second event in the precursor alerts indicating undesirable behavior and including a second score that is above a second value; accumulating, by the second accumulator, the score update with scores of detected second events during the second period of time). If the control circuitry 130 determines that the deviation in the CPU cache miss to DTLB load miss ratio 136 provides evidence of a side-channel exploit attack, the method 500 continues to 516."; Paragraph [0073]: "At 516, the control circuitry 130, in response to detecting a deviation indicative of a side-channel exploit attack at 514, generates an output to alert a system user and/or system administrator of the potential side-channel exploit attack (upon the score update reaching or exceeding a second threshold value within the second period of time: generating the refined alert).").

Regarding claim 3, Carlson teaches all of the features with respect to claim 1, as outlined above. Carlson further teaches the method further comprising, after detecting the first event in the precursor alerts indicating undesirable behavior and including a first score above the first value, storing the information of the first event (Paragraph [0027]: "The control circuitry 130 receives the first count data (i.e., the CPU cache miss count 132) from the first counter circuitry 122 and the second count data (i.e., the DTLB load miss count 134) from the second counter circuitry 124. In embodiments, the control circuitry 130 may store all or a portion of the received CPU cache miss count 132 and/or the DTLB load miss count 134 in a memory location and/or storage device (storing the information of the first event).").

Regarding claims 6 and 17, Carlson teaches all of the features with respect to claims 1 and 15, as outlined above. Carlson further teaches wherein the precursor detector is a classifier performing real time classification (Paragraph [0019]: "Side-channel exploit attack detection may be enhanced by training, via machine learning (classifier, performing real time classification), the control circuitry using the ratio data and employing one or more models to infer exploit execution in real time").
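Read as an algorithm, the two-stage accumulate-and-threshold scheme recited in claims 1 and 2 above can be sketched in a few lines. This is an illustrative reconstruction, not code from the application or from Carlson; the class name, the single `min_score` confidence floor (standing in for the claimed first and second values), and the shared window length are all assumptions made for the sketch.

```python
import time

class RefinementDetector:
    """Illustrative two-stage refinement detector (names are assumptions)."""

    def __init__(self, first_threshold, second_threshold, window_seconds,
                 min_score=0.5, clock=time.monotonic):
        self.first_threshold = first_threshold    # stage-1 score-update threshold
        self.second_threshold = second_threshold  # stage-2 score-update threshold
        self.window = window_seconds              # timer period for each stage
        self.min_score = min_score                # per-event confidence floor
        self.clock = clock
        self.stage = 0              # 0 = idle, 1 = first window, 2 = second window
        self.score_update = 0.0
        self.deadline = None

    def on_precursor_alert(self, score):
        """Feed one scored precursor alert; True means a refined alert fires."""
        now = self.clock()
        if self.deadline is not None and now > self.deadline:
            # timer expired before the threshold was reached: reset
            self.stage, self.score_update, self.deadline = 0, 0.0, None
        if score <= self.min_score:
            return False  # below the per-event confidence floor
        if self.stage == 0:
            # first qualifying event: set the timer and start accumulating
            self.stage, self.score_update = 1, score
            self.deadline = now + self.window
            return False
        self.score_update += score  # accumulate subsequent event scores
        if self.stage == 1 and self.score_update >= self.first_threshold:
            # stage-1 threshold met in time: hand the running score update to
            # a second accumulator and set a second timer (per claim 2)
            self.stage = 2
            self.deadline = now + self.window
            return False
        if self.stage == 2 and self.score_update >= self.second_threshold:
            self.stage, self.score_update, self.deadline = 0, 0.0, None
            return True  # refined alert: correlated events within the windows
        return False
```

A real implementation would presumably track separate score floors and window lengths per stage; the shared values here only keep the sketch short.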
Regarding claim 7, Carlson teaches all of the features with respect to claim 6, as outlined above. Carlson further teaches wherein the precursor detector includes a software weakness exploit model that receives the events from the processing unit to classify a subset of the events as software weakness exploits, and wherein the precursor detector further includes a generic behavioral model that receives the events from the processing unit to classify a subset of the events as suspicious, unknown, or malicious behaviors (Paragraph [0019]: "Side-channel exploit attack detection may be enhanced by training, via machine learning, the control circuitry using the ratio data and employing one or more models to infer exploit execution in real time."; Paragraph [0057]: "Control circuitry 130 monitors the CPU cache miss to DTLB load miss ratio 136. Using pattern recognition and/or trend analysis, at times enhanced by machine learning, the control circuitry 130 analyzes the CPU cache miss to DTLB load miss ratio 136, recognizes a sharp increase in the ratio indicative of a side-channel exploit attack and generates an alert to the system user or administrator (the precursor detector includes one or more models - a software weakness exploit model and a generic behavioral model - that receive the events from the processing unit to classify events as software weakness exploits or malicious behaviors; in this case, for example, CPU cache misses or DTLB load misses)."; Paragraph [0036]: "the control circuitry 130 may use at least some of the model representative of a pattern or trend (generic behavioral model) in the CPU cache miss to DTLB load miss ratio to identify, in real-time or near real-time, deviations in the pattern or trend formed by the CPU cache miss to DTLB load miss ratio 136.").

Regarding claim 8, Carlson teaches all of the features with respect to claim 7, as outlined above. Carlson further teaches wherein the software weakness exploit model and the generic behavioral model are each generated using a machine learning (ML) model implemented with neural networks or deep learning techniques (Paragraph [0062]: "In some instances, machine learning circuitry 140 may train the control circuitry 130 in pattern recognition and/or trend analysis using any currently available machine learning technique applicable to pattern recognition and/or trend analysis (the software weakness exploit model and the generic behavioral model are each generated using a machine learning (ML) model). For example, the machine learning circuitry 140 may train the control circuitry 130 in pattern recognition methods using one or more of the following: parametric classification algorithms (linear discriminant analysis, quadratic discriminant analysis, etc.); non-parametric classification algorithms (decision trees, naïve Bayes classifier, neural networks (implemented with neural networks techniques), etc.)").

Regarding claim 12, Carlson teaches all of the features with respect to claim 1, as outlined above.
Carlson further teaches wherein the precursor detector is an anomaly detector (Paragraph [0034]: "The control circuitry 130 may include any number and/or combination of currently available and/or future developed electrical components, semiconductor devices, and/or logic elements capable of receiving data representative of a count of CPU cache misses and data representative of a count of DTLB load misses from the performance monitoring circuitry 120, calculating one or more CPU cache miss to DTLB load miss ratios 136.sub.1-136.sub.n for each respective one of at least some of a plurality of temporal intervals 138.sub.1-138.sub.n, detecting deviations or abnormalities in the pattern or trend (the precursor detector is an anomaly detector) of DTLB load miss ratios 136, and generating one or more signals indicative of a potential side-channel exploit attack responsive to detecting a deviation and/or abnormality in the pattern or trend of CPU cache miss to DTLB load miss ratios 136.").

Regarding claim 18, Carlson teaches all of the features with respect to claim 15, as outlined above. Carlson further teaches wherein the refinement detection processor is communicatively coupled to multiple precursor detectors, each precursor detector coupled to a corresponding processing unit (Paragraph [0026]: "FIG. 1 is a block diagram of a system 100 that includes illustrative central processing unit (CPU) 110 that includes performance monitoring circuitry 120 (including multiple precursor detectors - first counter circuitry 122 and second counter circuitry 124, as stated below - each precursor detector coupled to CPU 110), control circuitry 130 (refinement detection processor, communicatively coupled to the performance monitoring circuitry 120), and machine learning circuitry 140. The performance monitoring circuitry 120 includes first counter circuitry 122 used to provide the CPU cache miss counter and second counter circuitry 124 used to provide the data translation lookaside buffer (DTLB) load miss counter.").

Regarding claims 13 and 19, Carlson teaches all of the features with respect to claims 8 and 18, as outlined above. Carlson further teaches the method further comprising multiple precursor detectors, each precursor detector including the respective software weakness exploit model and the generic behavioral model that receives events from a respective processing unit (Paragraph [0019]: "Side-channel exploit attack detection may be enhanced by training, via machine learning, the control circuitry using the ratio data and employing one or more models to infer exploit execution in real time."; Paragraph [0057]: "Control circuitry 130 monitors the CPU cache miss to DTLB load miss ratio 136. Using pattern recognition and/or trend analysis, at times enhanced by machine learning, the control circuitry 130 analyzes the CPU cache miss to DTLB load miss ratio 136, recognizes a sharp increase in the ratio indicative of a side-channel exploit attack and generates an alert to the system user or administrator (the precursor detector includes one or more models - a software weakness exploit model and a generic behavioral model - that receive the events from the processing unit to classify events as software weakness exploits or malicious behaviors; in this case, for example, CPU cache misses or DTLB load misses)."; Paragraph [0036]: "the control circuitry 130 may use at least some of the model representative of a pattern or trend (generic behavioral model) in the CPU cache miss to DTLB load miss ratio to identify, in real-time or near real-time, deviations in the pattern or trend formed by the CPU cache miss to DTLB load miss ratio 136.").

Regarding claim 20, Carlson teaches all of the features with respect to claim 15, as outlined above.
Carlson further teaches wherein the refinement detection processor comprises a state machine, the state machine being used to determine emerging threats (Paragraph [0076]: "Circuitry may comprise … state machine circuitry."; Paragraph [0078]: "The control circuitry (refinement detection processor, which comprises state machine circuitry) identifies, determines, and/or detects a pattern or trend in the CPU cache miss to DTLB load miss ratio (determine emerging threats). Upon detecting a deviation from the identified CPU cache miss to DTLB load miss ratio pattern or trend indicative of a potential side-channel exploit attack, the control circuitry generates an output to alert a system user or system administrator.").

Regarding claim 21, Carlson teaches all of the features with respect to claim 1, as outlined above. Carlson further teaches wherein the precursor detector is configured to detect processor events from a processing unit selected from a CPU core, cache, memory controller, or bus monitor (Paragraph [0030]: "The performance monitoring circuitry 120 (precursor detector) may have any number and/or combination of event counters (detect events). In embodiments, the performance monitoring circuitry 120 may include first counter circuitry 122 to monitor, track, and/or count CPU cache misses (the precursor detector is configured to detect processor events from a processing unit selected from a CPU core, cache, memory controller, or bus monitor - in Fig. 3, CPU 110, bus monitor 316, memory controller 360) and second counter circuitry 124 to monitor, track, and/or count DTLB load misses."; Paragraph [0031]: "In other implementations, the performance monitoring circuitry 120 may be provided in whole or in part via one or more processors (from a CPU core), controllers (including a memory controller), digital signal processors (DSPs), reduced instruction set computers (RISCs), systems-on-a-chip (SOCs), application specific integrated circuits (ASICs) capable of providing all or a portion of the host CPU 110.").

Regarding claim 22, Carlson teaches all of the features with respect to claim 1, as outlined above. Carlson further teaches wherein a refinement detection processor operates as a second stage, distinct from a first stage performed by the precursor detector, the refinement detection processor accumulating scores from the precursor alerts over a timed window before generating the refined alert (Paragraph [0013]: "Control circuitry (the refinement detection processor - distinct from the performance monitoring circuitry 120) within a system CPU receives data representative of the CPU cache miss count and the DTLB load miss count. The control circuitry determines a value representative of a ratio of the CPU cache miss counter to the data translation lookaside buffer (DTLB) load miss counter for each of a plurality of time intervals (over a timed window). The control circuitry detects a pattern or determines a trend in the CPU cache miss to DTLB load miss ratio. Deviations from the detected pattern or determined trend cause the control circuitry to generate output indicative of a potential side-channel exploit attack (the refinement detection processor accumulating scores from the precursor alerts over a timed window before generating the refined alert), such as Spectre or Meltdown."; Paragraph [0038] further teaches: "The control circuitry 130, upon detecting the elevated CPU cache miss to DTLB load miss ratio, may generate an alert indicative of a potential side-channel exploit attack (generating the refined alert)."; Paragraph [0027] further provides the details of the method.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Carlson et al. (US PG-PUB No. 20220335127 A1) in view of McLean (US Patent No. 11522877 B2).

Regarding claim 9, Carlson teaches all of the features with respect to claim 8, as outlined above.
Carlson fails to explicitly teach, but McLean teaches, the method further comprising, after generating the refined alert, providing the suspicious behavior to a data set for training utilizing the machine learning model to produce an updated data set of known generic behaviors (col. 7, line 59: "To train, update, and/or improve the accuracy/efficacy of the machine learning model 20, e.g., initially or if the model 20 is not meeting a prescribed accuracy/efficacy, analysts can develop training sets of labeled data (e.g., one or more features 24 labeled as being malicious or safe) (known generic behaviors data set) and provide the training sets of labeled data to the machine learning model 26. The analysts further can develop testing sets of data (e.g., non-labeled sets (suspicious behavior data set) of indicators that the analysts found to be malicious or safe) and apply the machine learning model 20 to the testing sets of data for determining an accuracy, efficacy, etc., of the machine learning model 20 (providing the suspicious behavior to a data set for training utilizing the machine learning model to produce an updated data set of known generic behaviors)").

Carlson and McLean are both considered to be analogous to the claimed invention because they both teach monitoring and detecting malicious activities or software vulnerabilities. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Carlson to provide the anomalous behaviors to a data set for training utilizing the machine learning model to produce an updated data set of known generic behaviors, as disclosed by McLean. One of ordinary skill in the art would have been motivated to make this modification in order to improve the accuracy/efficacy of the machine learning model, as suggested by McLean at col. 7, line 59.
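The train-and-update loop the rejection attributes to McLean can be illustrated generically: behaviors flagged by the detector are labeled and folded back into the training set, and the model is refit. The nearest-centroid "model" below is a deliberately simple stand-in (the cited passage does not fix a model type), and every function name and data layout here is invented for the sketch.

```python
# Hedged sketch of a retrain-on-labeled-data loop in the style attributed
# to McLean. All names are illustrative, not taken from the reference.

def fit_centroids(training_set):
    """training_set: list of (feature_vector, label) pairs -> label-to-centroid map."""
    sums, counts = {}, {}
    for features, label in training_set:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

def incorporate_alert(training_set, features, analyst_label):
    """After a refined alert, fold the analyst-labeled behavior back in and refit."""
    training_set.append((features, analyst_label))
    return fit_centroids(training_set)
```

The point of the sketch is only the loop structure: alert, analyst label, append to the training set, refit.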
Regarding claim 10, Carlson teaches all of the features with respect to claim 7, as outlined above. Carlson fails to explicitly teach, but McLean teaches, the method further comprising updating the generic behavioral model with the updated training data set of known generic behaviors (col. 2, line 17: "In aspects, the attacker learning system can generate and provide performance information for an assessment or evaluation of the machine learning model, and the machine learning model can be trained or updated based on information or data related to the assessment or evaluation thereof."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Carlson to update the generic behavioral model with the updated training data set of known generic behaviors, as disclosed by McLean. One of ordinary skill in the art would have been motivated to make this modification in order to improve the accuracy/efficacy of the machine learning model, as suggested by McLean at col. 7, line 59.

Regarding claim 11, Carlson teaches all of the features with respect to claim 7, as outlined above. Carlson fails to explicitly teach, but McLean teaches, the method further comprising, after generating the refined alert, providing the unknown behavior to a data set for training utilizing the machine learning model to produce an updated data set of known generic behaviors (col. 7, line 59: "To train, update, and/or improve the accuracy/efficacy of the machine learning model 20, e.g., initially or if the model 20 is not meeting a prescribed accuracy/efficacy, analysts can develop training sets of labeled data (e.g., one or more features 24 labeled as being malicious or safe) (known generic behaviors data set) and provide the training sets of labeled data to the machine learning model 26. The analysts further can develop testing sets of data (e.g., non-labeled sets (unknown behavior data set) of indicators that the analysts found to be malicious or safe) and apply the machine learning model 20 to the testing sets of data for determining an accuracy, efficacy, etc., of the machine learning model 20 (providing the unknown behavior to a data set for training utilizing the machine learning model to produce an updated data set of known generic behaviors)"). One of ordinary skill in the art would have been motivated to make this modification in order to improve the accuracy/efficacy of the machine learning model, as suggested by McLean at col. 7, line 59.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Carlson et al. (US PG-PUB No. 20220335127 A1) in view of Kirti (US PG-PUB No. 20230126571 A1).

Regarding claim 14, Carlson teaches all of the features with respect to claim 1, as outlined above. Carlson teaches the accumulator, but fails to explicitly teach a category weight and a linear combination. However, Kirti teaches wherein the accumulating, by the accumulator, is accomplished by a linear combination of the second score and a corresponding category weight of the second event and adding the linear combination to the score update (Paragraph [0014]: "In various examples, risk scores (score) for users categorized as privileged are computed with (linear combination) greater weights (category weight) than are risk scores for non-privileged users."; Paragraph [0163]: "In various examples, the threat detection engine 302 can perform regression analysis on each indicator used to compute a risk score, and/or on the risk score. Regression analysis may include building and updating a linear regression model. The coefficients computed by the regression model could be new or modified weights that would replace the initial weights for computing the risk score (linear combination of the score and a category weight).").
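The Kirti-style accumulation mapped onto claim 14, a linear combination of an event score with a per-category weight added to the running score update, reduces to a one-line computation. The categories and weight values below are hypothetical, chosen only to illustrate the mapping, not taken from Kirti.

```python
# Illustrative weighted accumulation: the running score update grows by
# (category weight) x (event score). Categories and weights are hypothetical.
CATEGORY_WEIGHTS = {"privileged_user": 2.0, "non_privileged_user": 1.0}

def accumulate(score_update, event_score, category):
    """Linear combination: score_update + weight(category) * event_score."""
    return score_update + CATEGORY_WEIGHTS[category] * event_score

total = 0.0
total = accumulate(total, 0.4, "privileged_user")      # adds 2.0 * 0.4
total = accumulate(total, 0.3, "non_privileged_user")  # adds 1.0 * 0.3
```

Swapping the weight table for regression-learned coefficients, as Kirti's paragraph [0163] describes, leaves the accumulation step unchanged.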
Carlson and Kirti are both considered to be analogous to the claimed invention because they both teach monitoring and detecting anomalous activities in a computer environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Carlson by adding a linear combination of the score and a category weight as disclosed by Kirti. One of ordinary skill in the art would have been motivated to make this modification in order to provide greater accuracy, as suggested by Kirti in paragraph [0163].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see form PTO-892 for details).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASMINE DAY, whose telephone number is (571) 272-0204. The examiner can normally be reached Monday through Friday, 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Philip Chea, can be reached at (571) 272-3951. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.D./
Examiner, Art Unit 2499

/PHILIP J CHEA/
Supervisory Patent Examiner, Art Unit 2499
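The McLean passage cited against claims 10 and 11 describes a familiar retraining loop: train on labeled "known" behaviors, classify unlabeled "unknown" behaviors, and fold the newly classified samples back into the training set. The sketch below illustrates that loop only; the nearest-centroid model and all names are illustrative assumptions, not the applicant's claimed method or McLean's implementation.

```python
# Minimal retraining loop: classify unknown behaviors with a model trained
# on known (labeled) behaviors, then fold the new labels back into the
# training set so the behavioral model can be retrained on the result.

def centroid(samples):
    """Mean feature vector of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(xs) / n for xs in zip(*samples)]

def train(labeled):
    """labeled: list of (features, label) -> {label: centroid}."""
    by_label = {}
    for features, label in labeled:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(group) for label, group in by_label.items()}

def classify(model, features):
    """Label = nearest centroid by squared Euclidean distance."""
    return min(model, key=lambda label: sum(
        (f - c) ** 2 for f, c in zip(features, model[label])))

def refine(known, unknown):
    """Classify unknown behaviors; return the updated training data set."""
    model = train(known)
    newly_labeled = [(features, classify(model, features)) for features in unknown]
    return known + newly_labeled  # updated data set of known generic behaviors

known = [([0.1, 0.2], "safe"), ([0.9, 0.8], "malicious")]
unknown = [[0.85, 0.9], [0.0, 0.1]]
updated = refine(known, unknown)
```

Retraining on the returned set (`train(updated)`) then yields the "updated" behavioral model of claim 10.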
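The claim 14 accumulator mapped onto Kirti reduces to a one-line update rule: add the linear combination of each event's score and its category weight to a running score update. A minimal sketch, with hypothetical category names and weight values (not drawn from Kirti):

```python
# Accumulate a risk score update as a linear combination of event scores
# and per-category weights: score_update += category_weight * event_score.

CATEGORY_WEIGHTS = {
    "privileged": 2.0,      # illustrative: privileged users weighted higher
    "non_privileged": 1.0,
}

def accumulate(events, weights=CATEGORY_WEIGHTS):
    """events: iterable of (score, category) pairs -> accumulated score update."""
    score_update = 0.0
    for score, category in events:
        score_update += weights[category] * score  # linear combination
    return score_update

total = accumulate([(0.4, "privileged"), (0.3, "non_privileged")])
# total = 2.0*0.4 + 1.0*0.3
```

Kirti's paragraph [0163] would then periodically replace the entries of `CATEGORY_WEIGHTS` with coefficients fit by regression.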

Prosecution Timeline

Jun 19, 2023
Application Filed
Jul 02, 2025
Non-Final Rejection — §102, §103
Oct 09, 2025
Response Filed
Dec 09, 2025
Final Rejection — §102, §103
Feb 12, 2026
Request for Continued Examination
Feb 24, 2026
Response after Non-Final Action
Mar 16, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585816
SYSTEMS AND METHODS FOR SELECTIVE ENCRYPTION OF SENSITIVE IMAGE DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12572741
DETERMINING LINKED SPAM CONTENT
2y 5m to grant Granted Mar 10, 2026
Patent 12554839
APPLICATION DISCOVERY ENGINE IN A SECURITY MANAGEMENT SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12541599
VALIDATION AND RECOVERY OF OPERATING SYSTEM BOOT FILES DURING OS UPGRADE OPERATIONS FOR UEFI SECURE BOOT SYSTEMS
2y 5m to grant Granted Feb 03, 2026
Patent 12524574
DEFENSE AGAINST XAI ADVERSARIAL ATTACKS BY DETECTION OF COMPUTATIONAL RESOURCE FOOTPRINTS
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
92%
Grant Probability
99%
With Interview (+33.3%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
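The projection figures above appear to follow from simple arithmetic on the examiner's career data: 11 grants out of 12 resolved cases gives the 92% grant probability, and applying the +33.3% interview lift, capped at 99%, gives the with-interview figure. A sketch of that assumed derivation (the cap and formula are inferences from the displayed numbers, not a documented methodology):

```python
# Reproduce the dashboard's projections under assumed formulas: grant
# probability = career allow rate; the with-interview figure applies the
# interview lift multiplicatively and caps the result at 99%.

def grant_probability(granted, resolved):
    """Career allow rate as a fraction."""
    return granted / resolved

def with_interview(base, lift=0.333, cap=0.99):
    """Apply the interview lift to the base probability, capped."""
    return min(base * (1 + lift), cap)

base = grant_probability(11, 12)   # ~0.917, displayed as 92%
boosted = with_interview(base)     # hits the cap, displayed as 99%
```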
