Prosecution Insights
Last updated: April 19, 2026
Application No. 18/529,714

Method for Protecting an Embedded Machine Learning Model

Status: Final Rejection — §102, §103
Filed: Dec 05, 2023
Examiner: HERZOG, MADHURI R
Art Unit: 2438
Tech Center: 2400 — Computer Networks
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 78% (516 granted / 662 resolved; +19.9% vs TC avg) — above average
Interview Lift: +11.9% for resolved cases with an interview (moderate, ~+12%)
Typical Timeline: 3y 1m average prosecution
Currently Pending: 35
Career History: 697 total applications across all art units
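As a sanity check, the headline figures above follow from the career counts. This is a sketch: the +11.9 point interview lift is taken as reported, and adding it directly to the base rate is an assumption about how the 90% figure is derived.

```python
# Reproduce the dashboard's headline figures from the career counts above.
granted, resolved = 516, 662

career_allow_rate = granted / resolved      # 0.7794... -> displayed as 78%
with_interview = career_allow_rate + 0.119  # 0.8984... -> displayed as 90%

print(f"{career_allow_rate:.1%}")  # 77.9%
print(f"{with_interview:.1%}")     # 89.8%
```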

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Based on career data from 662 resolved cases. Tech Center averages are estimates.

Office Action

Final Rejection — §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a Final Office action in response to communications received on 10/20/2025.

Response to Amendment

Claims 3 and 4 have been cancelled. Claims 1, 2, and 5-19 have been examined. Claims 1, 5-10, 13-15, and 18 have been amended. The rejection of claim 10 under 35 U.S.C. 101 is withdrawn in light of the applicant’s amendments to the claim. The rejection of claim 15 under 35 U.S.C. 112 is withdrawn in light of the applicant’s amendments to the claim.

Applicant's arguments filed 10/20/2025 have been fully considered but they are not persuasive. As per the applicant’s arguments that prior art of record Schorn does not teach “wherein the feature activation is determined by a dimensional reduction of the at least one output”, the examiner respectfully disagrees. According to the published specification of the instant application: “[0018]: the feature activation preferably being determined by a dimensional reduction, preferably summation, of the at least one output. [0047]: perform a dimensional reduction 120 illustrated in FIG. 1 thereon (e.g., summation). [0043]: An overview of the method 100 according to exemplary embodiments of the disclosure is shown in FIG. 1. Based on the intermediate results 210 of the monitored machine learning model 200, preferably DNNs, a feature activation fact, which can be a vector or a multi-dimensional tensor, can be calculated. These tensors can be generated from an input example. In the case of 2D filters (convolution), which are usually used in DNNs for image classification, the output from an intermediate layer consists of a plurality of 2D feature maps that correspond to the various filter kernels of the layer. The term “feature card” is also referred to as a feature map in the context of the disclosure.
For each feature map, a single value can be appended to fact by adding up all the values of the feature map”, i.e., according to the specification of the instant application, the dimensional reduction comprises, for each feature map, adding up all the values of the feature map and appending a single value to fact.

Prior art of record Schorn teaches: Page 3, left column: A. Computing Feature Activations: As shown in Fig. 1, the input to FACER is a vector fact in which the neuron outputs of all layers of the DNN are concatenated. These outputs are generated from a single input sample given to the DNN. In the case of 2D convolutional layers, which are commonly used in image classification DNNs, the layer output consists of multiple 2D feature maps corresponding to the different filter kernels of the layer. For each feature map we append a single value to fact by summation over all values of the feature map. The benefit of accumulation over feature maps is twofold. Secondly, accumulation results in a comparatively low-dimensional feature activation representation, i.e., Schorn teaches the same method of dimensional reduction as recited in the specification of the instant application: for each feature map, adding all the values of the feature map and appending a single value to fact.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 5, 9-11, 13-15, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by prior art of record FACER: A Universal Framework for Detecting Anomalous Operation of Deep Neural Networks by Schorn et al (hereinafter Schorn).
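The dimensional reduction at issue above, which both the instant specification and Schorn describe identically, can be sketched as follows. Layer counts and feature-map shapes are illustrative assumptions, not taken from the record.

```python
import numpy as np

# For each 2D feature map in an intermediate layer's output, sum all of
# its values and append the single scalar to the feature activation
# vector f_act; then concatenate across all monitored layers.
def reduce_layer(layer_output):
    """layer_output: array of shape (num_feature_maps, H, W)."""
    return layer_output.sum(axis=(1, 2))  # one scalar per feature map

def feature_activation(intermediate_outputs):
    """Concatenate the reduced outputs of all monitored layers into f_act."""
    return np.concatenate([reduce_layer(o) for o in intermediate_outputs])

# Two conv layers with 16 and 32 feature maps -> f_act has 48 entries.
outs = [np.ones((16, 8, 8)), np.ones((32, 4, 4))]
fact = feature_activation(outs)
print(fact.shape)  # (48,)
print(fact[0])     # 64.0 (sum over an 8x8 map of ones)
```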
As per claim 1, Schorn teaches: A method for protecting an embedded machine learning model from at least one physical attack, comprising: ascertaining a monitoring input, wherein the monitoring input is based on at least one intermediate result from the embedded machine learning model (Schorn: Abstract: The detection of anomalies during the operation of deep neural networks (DNNs) is of essential importance in safety-critical applications, such as autonomous vehicles (embedded machine learning model). Page 2, right column: last paragraph: An overview of FACER’s basic principle is depicted in Fig. 1. Based on the supervised DNN’s intermediate outputs, a feature activation vector fact is computed. Page 3, left column: A. Computing Feature Activations: As shown in Fig. 1, the input to FACER is a vector fact in which the neuron outputs of all layers of the DNN are concatenated); evaluating the ascertained monitoring input by way of a monitoring system; and detecting the at least one physical attack on the basis of the evaluation (Schorn: page 2, right column: last paragraph and page 3, left column: first paragraph: FACER then performs a binary classification into anomaly and non-anomaly based on fact and a set of learned weights for the respective anomaly type. FACER can be trained on detecting various types of anomalies, such as random bit-flips in the DNN computing hardware, as well as OOD samples with noise or unseen classes. Page 2, left column: Hardware Failure: For example, high energy particle strikes (such as an Electromagnetic pulse – a physical attack) can result in bit-flip errors. Adversarial Attacks. Other fields of anomalies can be associated with security threats. Attackers can corrupt data and computation. One example are adversarial attacks that fool a neural network by adding small distortions to the input data.
Physical attacks also play a role), wherein the monitoring system comprises a further machine learning model which is configured to perform the evaluation of the ascertained monitoring input, and wherein the further machine learning model comprises fewer neurons than the embedded machine learning model being protected (Schorn: Page 3, left column: B. Training the FACER Classifier: While FACER can in principle use any trainable binary classifier, we decided to utilize a small feedforward neural network that takes fact as input and classifies it into anomaly or non-anomaly. An architecture with two hidden layers, each with 64 neurons and rectified linear unit (ReLU) activation functions, and an output layer with a single sigmoid neuron has proven to work well for our purposes. Page 3, right column and page 4, left column: A. Preliminary Remarks: Throughout our experiments we use three image classification DNNs trained on the CIFAR-10 [20], CIFAR-100 [20], and SVHN [27] tasks respectively. The DNNs are based on the DenseNet [15] architecture with a depth of 40 and growth rate of 12. They have approximately 600k trainable parameters), wherein the embedded machine learning model is a neural network (Schorn: Fig. 1: Deep Neural Network (DNN)), wherein the at least one intermediate result comprises at least one output from an intermediate layer of the neural network, wherein ascertaining the monitoring input comprises determining a feature activation as an activation vector, wherein the feature activation is determined by a dimensional reduction of the at least one output, and wherein the feature activation is used as an input for the further machine learning model of the monitoring system (Schorn: Page 2, right column: last paragraph: An overview of FACER’s basic principle is depicted in Fig. 1. Based on the supervised DNN’s intermediate outputs, a feature activation vector fact is computed. Page 3, left column: A. Computing Feature Activations: As shown in Fig. 
1, the input to FACER is a vector fact in which the neuron outputs of all layers of the DNN are concatenated. In the case of 2D convolutional layers, which are commonly used in image classification DNNs, the layer output consists of multiple 2D feature maps corresponding to the different filter kernels of the layer. For each feature map we append a single value to fact by summation over all values of the feature map).

As per claim 2, Schorn teaches: The method according to claim 1, wherein the further machine learning model is designed as an embedded neural network (Schorn: Page 3, left column: B. Training the FACER Classifier: While FACER can in principle use any trainable binary classifier, we decided to utilize a small feedforward neural network that takes fact as input and classifies it into anomaly or non-anomaly).

As per claim 5, Schorn teaches: The method according to claim 1, wherein: the respective output from the intermediate layer comprises a plurality of feature cards, the dimensional reduction for the respective output comprises calculating a value for each of the feature cards which is specific to an entire feature card in question, and the feature activation comprises the calculated values (Schorn: Page 3, left column: A. Computing Feature Activations: As shown in Fig. 1, the input to FACER is a vector fact in which the neuron outputs of all layers of the DNN are concatenated. In the case of 2D convolutional layers, which are commonly used in image classification DNNs, the layer output consists of multiple 2D feature maps corresponding to the different filter kernels of the layer. For each feature map we append a single value to fact by summation over all values of the feature map).
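For reference, the small monitoring network Schorn describes for claim 1 (two hidden layers of 64 ReLU neurons and a single sigmoid output over fact) can be sketched as below. The weights are untrained random placeholders and the input dimension is an illustrative assumption; a deployed monitor would be trained on anomaly traces.

```python
import numpy as np

# FACER-style monitor: a small feedforward network mapping the feature
# activation vector f_act to an anomaly probability in [0, 1].
def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def monitor(fact, params):
    (W1, b1), (W2, b2), (W3, b3) = params
    h = relu(fact @ W1 + b1)   # hidden layer 1: 64 ReLU neurons
    h = relu(h @ W2 + b2)      # hidden layer 2: 64 ReLU neurons
    return sigmoid(h @ W3 + b3)  # single sigmoid neuron: P(anomaly)

rng = np.random.default_rng(0)
d = 48  # illustrative length of f_act
params = [
    (0.1 * rng.standard_normal((d, 64)), np.zeros(64)),
    (0.1 * rng.standard_normal((64, 64)), np.zeros(64)),
    (0.1 * rng.standard_normal((64, 1)), np.zeros(1)),
]
p_anomaly = monitor(rng.standard_normal(d), params)
```

Note the monitor is far smaller than the ~600k-parameter DenseNet it watches, matching the claim limitation that the further model comprises fewer neurons than the protected model.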
As per claim 9, Schorn teaches: The method according to claim 1, wherein: the at least one physical attack is detected as a physical intrusion on an embedded system, and the embedded machine learning model is executed on the embedded system (Schorn: Abstract: The detection of anomalies during the operation of deep neural networks (DNNs) is of essential importance in safety-critical applications, such as autonomous vehicles (embedded system). Page 2, left column: Hardware Failure: For example, high energy particle strikes (such as an Electromagnetic pulse – a physical attack) can result in bit-flip errors. Page 2, right column, last paragraph and page 3, left column, first 2 lines: FACER can be trained on detecting various types of anomalies, such as random bit-flips in the DNN computing hardware).

As per claim 10, Schorn teaches: The method according to claim 1, wherein a computer program comprises instructions which, when the computer program is executed by a computer, prompt the computer to perform the method (see claim 1).

As per claim 11, Schorn teaches: A device for data processing which is configured to perform the method according to claim 1 (see claim 1).

As per claim 13, Schorn teaches: The method according to claim 1, wherein the embedded machine learning model is configured as a deep neural network (Schorn: Fig. 1: Deep Neural Network (DNN)).

As per claim 14, Schorn teaches: The method according to claim 1, wherein the dimensional reduction is a summation (Schorn: Page 3, left column: A. Computing Feature Activations: In the case of 2D convolutional layers, which are commonly used in image classification DNNs, the layer output consists of multiple 2D feature maps corresponding to the different filter kernels of the layer. For each feature map we append a single value to fact by summation over all values of the feature map).
As per claim 15, Schorn teaches: The method according to claim 5, wherein the value is a total value (Schorn: Page 3, left column: A. Computing Feature Activations: In the case of 2D convolutional layers, which are commonly used in image classification DNNs, the layer output consists of multiple 2D feature maps corresponding to the different filter kernels of the layer. For each feature map we append a single value to fact by summation over all values (total value) of the feature map).

As per claim 19, Schorn teaches: The method according to claim 1, wherein: the at least one physical attack is detected as a physical intrusion on an embedded system, and the embedded machine learning model and the monitoring system are executed on the embedded system (Schorn: Abstract: The detection of anomalies during the operation of deep neural networks (DNNs) is of essential importance in safety-critical applications, such as autonomous vehicles (embedded system). Page 1: left column, last paragraph: Our approach for anomaly detection is based on a trainable feature activation consistency checker (FACER), receiving traces from the intermediate outputs of a main DNN as input. Fig. 1 depicts our approach. Anomalies in the input lead to inconsistencies in the feature representation which can be detected by FACER. As seen in fig. 1, FACER is also implemented on the same embedded system. Page 2, left column: Hardware Failure: For example, high energy particle strikes (such as an Electromagnetic pulse – a physical attack) can result in bit-flip errors. Page 2, right column, last paragraph and page 3, left column, first 2 lines: FACER can be trained on detecting various types of anomalies, such as random bit-flips in the DNN computing hardware).

Claims 6, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Schorn and prior art of record Efficient On-Line Error Detection and Mitigation for Deep Neural Network Accelerators by Schorn et al (hereinafter Schorn2).
As per claim 6, Schorn teaches: The method according to claim 1, further comprising: detecting a fault during an execution of the embedded machine learning model based on the evaluation; and detecting an abnormality in the execution of the embedded machine learning model based on the evaluation (Schorn: Page 2, left column: Hardware Failure: For example, high energy particle strikes (such as an Electromagnetic pulse – a physical attack) can result in bit-flip errors (fault). page 2, right column and page 3, left column: III. FACER: FEATURE ACTIVATION CONSISTENCY CHECKER: FACER can be trained on detecting various types of anomalies, such as random bit-flips (fault) in the DNN computing hardware [30], as well as OOD samples with noise or unseen classes. This makes it universally applicable as a consistency checker for monitoring safety-critical neural network applications. Page 6, left column: V. CONCLUSIONS: With FACER we propose a versatile and efficient framework for detecting multiple types of anomalous DNN operation modes (abnormal execution of the DNN)).

Schorn does not teach: providing a corrected output from the embedded machine learning model. However, Schorn2 teaches: providing a corrected output from the embedded machine learning model (Schorn2: page 211, last paragraph and page 212, first 3 lines: This is why we choose to employ a small feed-forward neural network for detecting critical errors in the feature activation traces of the CNN that performs the actual classification task. As shown in Fig. 4, the network is designed to predict both, if a critical error is present or not, as well as a corrected task result for the image classifier. The correction prediction output layer has as many output neurons as the image classifier CNN, with a softmax activation function to assign probabilities to each of the possible image classes. An argmax function is used to select the predicted class based on the output neuron that gives the highest probability score.
The correction output is only taken, if the detector indicates that a critical error was detected). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Schorn2 in the invention of Schorn to include the above limitations. The motivation to do so would be to recover the correct output (Schorn2: page 206: 2nd paragraph).

As per claim 16, Schorn in view of Schorn2 teaches: The method according to claim 6, wherein the fault is a bit error (Schorn: Page 2, right column, last paragraph and page 3, left column, first 2 lines: FACER can be trained on detecting various types of anomalies, such as random bit-flips (bit error) in the DNN computing hardware).

As per claim 17, Schorn does not teach the limitations of claim 17. However, Schorn2 teaches: wherein a countermeasure is initiated based on a result of the detection of the at least one physical attack (Schorn2: page 211, last paragraph and page 212, first 3 lines: This is why we choose to employ a small feed-forward neural network for detecting critical errors in the feature activation traces of the CNN that performs the actual classification task. As shown in Fig. 4, the network is designed to predict both, if a critical error is present or not, as well as a corrected task result for the image classifier. The detection part has only a single output neuron with a sigmoid activation function. This outputs a value between 0 and 1, indicating the probability for a critical error being present. Based on the comparison with a threshold τ, it is decided if a critical error is present or not. The correction prediction output layer has as many output neurons as the image classifier CNN, with a softmax activation function to assign probabilities to each of the possible image classes. An argmax function is used to select the predicted class based on the output neuron that gives the highest probability score.
The correction output (countermeasure) is only taken, if the detector indicates that a critical error was detected. Page 213: last 3 lines: All experiments are conducted with our own fault injection (physical attack) simulation environment for deep neural networks). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Schorn2 in the invention of Schorn to include the above limitations. The motivation to do so would be to recover the correct output (Schorn2: page 206: 2nd paragraph).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Schorn and prior art of record US 20190238568 to Goswami et al (hereinafter Goswami).

As per claim 7, Schorn does not teach the limitations of claim 7. However, Goswami teaches: wherein a termination of an operation of the embedded machine learning model and/or a blocking of inputs for the embedded machine learning model is initiated based on a result of the detection of the at least one physical attack (Goswami: [0053]: Thus, the trained SVM may be utilized to evaluate the distances between intermediate representations of an input image and the means at the various hidden or intermediate layers of the DNN and automatically determine, based on this evaluation, whether or not the input image is a distorted image, i.e. an adversarial attack on the facial recognition engine. [0055] Mitigation of adversarial attacks may be achieved by discarding or preprocessing, e.g., denoising, the affected regions of an input image, depending on the desired implementation. With regard to discarding the input images determined to be adversarial attacks, further processing of the input images may be discontinued in the event that the input image is classified as adversarial).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Goswami in the invention of Schorn to include the above limitations. The motivation to do so would be to maintain as high performance of the DNN based facial recognition engine as possible (Goswami: [0054]).

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Schorn and prior art of record Physical Side-Channel Attacks on Embedded Neural Networks: A Survey by Mendez Real et al (hereinafter Real).

As per claim 8, Schorn teaches: The method according to claim 1, wherein: the at least one physical attack is detected, both as a side-channel attack and as a fault injection attack, on an embedded system, and the embedded machine learning model is executed on the embedded system (Schorn: Abstract: The detection of anomalies during the operation of deep neural networks (DNNs) is of essential importance in safety-critical applications, such as autonomous vehicles (embedded system). Page 2, left column: Hardware Failure: Hardware is affected by reliability threats. For example, high energy particle strikes (such as an Electromagnetic pulse – fault injection attack) can result in bit-flip errors).

Schorn does not teach a side-channel attack on an embedded system. However, Real teaches: a side-channel attack on an embedded system (Real: page 1: Abstract: this paper surveys state-of-the-art physical SCA attacks relative to the implementation of embedded DNNs on micro-controllers and FPGAs. Page 6: 3. Threat Model and Attack Motivation: Attacker access assumptions: In the considered threat scenario, the attacker exploits physical measurements (limited to power/EM in this survey) via a physical and close by (or remote) access to the target device. The target device implements a pre-trained DNN model. The attacker passively observes and analyzes physical measurements during the inference operation).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Real in the invention of Schorn to include the above limitations. The motivation to do so would be to better understand and categorise literature attacks of today’s classical DNN implementations in order to propose adapted and efficient countermeasures (Real: page 2: paragraph 3).

As per claim 18, Schorn teaches: The method according to claim 1, wherein: the at least one physical attack is detected, both as a side-channel attack and as a fault injection attack, on an embedded system, and the embedded machine learning model and the monitoring system are executed on the embedded system (Schorn: Abstract: The detection of anomalies during the operation of deep neural networks (DNNs) is of essential importance in safety-critical applications, such as autonomous vehicles (embedded system). Page 1: left column, last paragraph: Our approach for anomaly detection is based on a trainable feature activation consistency checker (FACER), receiving traces from the intermediate outputs of a main DNN as input. Fig. 1 depicts our approach. Anomalies in the input lead to inconsistencies in the feature representation which can be detected by FACER. As seen in fig. 1, FACER is also implemented on the same embedded system. Page 2, left column: Hardware Failure: Hardware is affected by reliability threats. For example, high energy particle strikes (such as an Electromagnetic pulse – fault injection attack) can result in bit-flip errors).

Schorn does not teach a side-channel attack on an embedded system. However, Real teaches: a side-channel attack on an embedded system (Real: page 1: Abstract: this paper surveys state-of-the-art physical SCA attacks relative to the implementation of embedded DNNs on micro-controllers and FPGAs. Page 6: 3.
Threat Model and Attack Motivation: Attacker access assumptions: In the considered threat scenario, the attacker exploits physical measurements (limited to power/EM in this survey) via a physical and close by (or remote) access to the target device. The target device implements a pre-trained DNN model. The attacker passively observes and analyzes physical measurements during the inference operation).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Real in the invention of Schorn to include the above limitations. The motivation to do so would be to better understand and categorise literature attacks of today’s classical DNN implementations in order to propose adapted and efficient countermeasures (Real: page 2: paragraph 3).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Schorn and prior art of record US 20220156563 to Zhang et al (hereinafter Zhang).

As per claim 12, Schorn teaches: The method according to claim 1, wherein the further machine learning model is designed as an embedded neural network (Schorn: Abstract: The detection of anomalies during the operation of deep neural networks (DNNs) is of essential importance in safety-critical applications, such as autonomous vehicles (embedded system)). Schorn teaches a DNN but does not teach: comprises recurrent structures. However, Zhang teaches: comprises recurrent structures (Zhang: [0035]: other types of DNNs include recurrent neural networks, such as long short-term memory (LSTM), and these types of networks are effective in modeling sequential data. [0041]: As described above, the technique derives from an insight about the nature of adversarial attacks in general, namely, that such attacks typically only guarantee the final target label in the DNN, whereas the labels of intermediate representations are not guaranteed.
According to this disclosure, this inconsistency is then leveraged as an indicator that an adversary attack on the DNN is present). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Zhang in the invention of Schorn to include the above limitations. The motivation to do so would be to take a given action with respect to the deployed system upon detecting the adversary attack (Zhang: [0042]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20200247433 to Scharfenberger et al: The present invention relates to a computer-implemented method and a system for testing the output of a neural network (1) having a plurality of layers (11), which detects or classifies objects. The method comprises the step (S1) of reading at least one result from at least one first layer (11) and the confidence value thereof, which is generated in the first layer (11) of a neural network (1), and the step (S2) of checking a plausibility of the result by taking into consideration the confidence value thereof so as to conclude whether the object detection by the neural network (1) is correct or false. The step (S2) of checking comprises comparing the confidence value for the result with a predefined threshold value. In the event that it is concluded in the checking step (S2) that the object detection is false, output of the object falsely detected by the neural network is prevented.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADHURI R HERZOG whose telephone number is (571)270-3359. The examiner can normally be reached 8:30AM-4:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Taghi Arani, can be reached at (571)272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MADHURI R. HERZOG
Primary Examiner
Art Unit 2438

/MADHURI R HERZOG/
Primary Examiner, Art Unit 2438

Prosecution Timeline

Dec 05, 2023
Application Filed
Jul 17, 2025
Non-Final Rejection — §102, §103
Oct 20, 2025
Response Filed
Jan 27, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603766
QKD SWITCHING SYSTEM AND PROTOCOLS
2y 5m to grant • Granted Apr 14, 2026

Patent 12592925
METHOD AND SYSTEM FOR AUTHENTICATING A USER ON AN IDENTITY-AS-A-SERVICE SERVER WITH A TRUSTED THIRD PARTY
2y 5m to grant • Granted Mar 31, 2026

Patent 12592820
SYSTEMS AND METHODS FOR DIGITAL RETIREMENT OF INFORMATION HANDLING SYSTEMS
2y 5m to grant • Granted Mar 31, 2026

Patent 12587383
METHOD AND SYSTEM FOR OUT-OF-BAND USER IDENTIFICATION IN THE METAVERSE VIA BIOGRAPHICAL (BIO) ID
2y 5m to grant • Granted Mar 24, 2026

Patent 12556550
THREAT DETECTION PLATFORMS FOR DETECTING, CHARACTERIZING, AND REMEDIATING EMAIL-BASED THREATS IN REAL TIME
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 90% (+11.9%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 662 resolved cases by this examiner. Grant probability derived from career allow rate.
