Prosecution Insights
Last updated: April 19, 2026
Application No. 17/855,350

DETECTION OF RANSOMWARE ATTACK AT OBJECT STORE

Final Rejection — §103, §112
Filed
Jun 30, 2022
Examiner
GILLESPIE, KAMRYN JORDAN
Art Unit
2408
Tech Center
2400 — Computer Networks
Assignee
Seagate Technology LLC
OA Round
2 (Final)
73%
Grant Probability
Favorable
3-4
OA Rounds
2y 8m
To Grant
99%
With Interview

Examiner Intelligence

Grants 73% — above average
73%
Career Allow Rate
16 granted / 22 resolved
+14.7% vs TC avg
Strong +50% interview lift
Without
With
+50.0%
Interview Lift
resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
17 currently pending
Career history
39
Total Applications
across all art units

Statute-Specific Performance

§101
7.4%
-32.6% vs TC avg
§103
44.9%
+4.9% vs TC avg
§102
26.4%
-13.6% vs TC avg
§112
14.4%
-25.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 22 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

This communication is in response to applicant’s amendments filed on 10/02/2025. Claims 1-7, 8-14, and 15-20 are pending. Applicant's arguments filed on 08/18/2025 have been fully considered. Arguments directed to claims 1-7 and 8-14 are considered moot in view of the new ground of rejection set forth below; however, arguments directed to claims 15-20 are persuasive.

Allowable Subject Matter

Claims 15-20 are deemed allowable over the prior art of record. The following is an examiner’s statement of reasons for allowance: Applicant’s reply makes evident the reason for allowance, satisfying the record as a whole as required by 37 CFR 1.104(e). In this case, the substance of applicant’s remarks filed 10/02/2025 with respect to the amended claim limitations points out the reason the claims are patentable over the prior art of record. Thus, the reason for allowance is in all probability evident from the record and no statement of the examiner’s reasons for allowance is necessary (see MPEP 1302.14). Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Where applicant acts as his or her own lexicographer to specifically define a term of a claim, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999).

The limitation “removing one or more fields from each of the plurality of input/output (IO) requests to generate a plurality of condensed IO requests, wherein the one or more fields includes a sector field and a byte field” of claim 1 directly contradicts further limitations of claim 1, such as, for example, “transforming one or more fields of each of the plurality of condensed IO requests to generate transformed IO requests, wherein transforming the one or more fields comprises scaling each of the sector field and the byte field to be within a predetermined range of -1 to +1”. The limitation is indefinite because removed fields, as understood within the context of the claims and the current state of the art, cannot be transformed. Thus, the amendments to independent claim 1, and the dependent claims thereof, are not entered, as the deficiencies of independent claim 1 equally apply to claims 2-7. For the purposes of examination and the following application of prior art, the examiner interprets amended claim 1 in light of previously presented claim 1, as the current amendments are self-contradicting.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 3-7 and claims 8 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20230409714 A1) in view of Krasser (US 20190026466 A1).

Regarding claim 1, Chen teaches: A method ([0019] “These techniques can include, among other things, unique methods for feature engineering/extraction, ensemble methods that combine the predictions of multiple ML models, dynamic model re-training, and the leveraging of federated learning to train application-level and cross-application ML models.”), comprising: receiving a plurality of input/output (IO) requests at an object store ([0030] "each collection agent 202 can collect traces of API calls made by its corresponding microservice 102 during the runtime of application 100 and can send the API call traces to analytics platform 204.
Each API call trace can include metadata regarding an API call such as the name/endpoint of the API, the input parameter values, the input parameter types, the response data returned by the callee, the latency of the response, and so on.", [0040] “Yet further, although FIG. 2 depicts collection agents 202(1)-(N) as running alongside their respective microservices 102(1)-(N) on physical servers 104, in alternative embodiments these collection agents may run on remote machines separate from microservices 102(1)-(N). In these embodiments, API call trace data generated by the microservices may be sent to the remote collection agents, which may then collect the traces…”, [0109] "Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations." The collection agents acting as data stores are mapped to object stores in that the API call traces are stored (collected) within them in anticipation of further use.); removing one or more fields from each of the plurality of input/output (IO) requests to generate a plurality of condensed IO requests ([0045] "Collection agent 202 can then compress the transformed call traces at step 308. This compression operation can include applying standard data compression techniques to the API call traces, as well as removing certain data elements in each trace that would not be useful for anomaly detection."); transforming one or more fields of each of the plurality of condensed IO requests to generate transformed IO requests ([0044] "At step 306, collection agent 202 can transform the filtered and/or aggregated API call traces into a format understood by analytics platform 204.
In embodiments where the API call traces are already in an appropriate format, this step can be omitted."); and training an ML model using a plurality of the ML model input feature vectors ([0066] “Upon processing and extracting features from each block at step 706, individual API call feature extractor 602 can construct one or more feature vectors based on the extracted features (step 708). Finally, at step 710, individual API call feature extractor 602 can pass the feature vector(s) as input to base ML models 604”). Further regarding claim 1, Chen does not appear to explicitly teach, but in a related art Krasser teaches: combining a predetermined number of transformed IO requests to generate IO trace temporal sequences ([0038] "In some examples, the training module 228 can determine the CMs 114 based at least in part on...maximum tree depth, maximum number of trees, regularization parameters, dropout, class weighting, or convergence criteria.", [0237] "Each non-leaf node of a decision tree can specify a test of one or more feature values in a feature vector, e.g., comparison(s) of those value(s) to learned threshold(s)." The predetermined number of transformed IO requests (feature values) is mapped to maximum tree depth in that the transformed IO requests (feature values) are nodes and the IO trace temporal sequences are branches of said tree.); generating machine learning (ML) model input feature vectors by assigning each of the IO trace temporal sequences a ground truth value indicating whether the IO trace temporal sequence represents a ransomware attack ([0099] "In some examples of classification, each leaf can include respective weight values for one or more classes. The operation module 230 can sum the weights for each class over all the trees and pick the class with the highest total weight as classification 116 or another output 244.
In some examples, the operation module 230 can apply a logistic or other squashing function, e.g., to each weight before summing or to the sums of the weights.", [0103] "In some examples, CM 220 can be configured to provide a classification 116 for any type of event. In other examples, CM 220 can be configured to provide a classification 116 for events known to be of a particular type. For example, separate CMs 220 can be determined and operated for malware-related events and for targeted-attack events."). Both Chen and Krasser are from the same field of endeavor, as both are directed to classification of cybersecurity events, which is also the field of endeavor of the claimed invention. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine the teachings of Chen and Krasser by incorporating the teachings of Krasser into Chen for classification of cyber-attacks as claimed. The motivation to combine is to improve the quality of classifications (Chen, [AB]; Krasser, [AB]).

Regarding claim 3, Chen-Krasser teaches the method of claim 1; Chen-Krasser further teaches: wherein transforming one or more fields of each of the plurality of condensed IO requests further comprises transforming one or more fields of each of the plurality of IO requests using one-hot coding (KRASSER [0043] "In some examples, the feature vector can include any of the categorical or discrete-valued features in Table 1, encoded as one-hot or n-hot categorical data.").
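As a technical aside, the one-hot coding recited in claims 3 and 10 maps a categorical field of an IO request to a sparse binary vector, which is what the cited Krasser paragraph describes. A minimal Python sketch; the operation-type category names below are illustrative assumptions, not values taken from the application or the cited art:

```python
# One-hot encode a categorical IO-request field.
# The category names ("read", "write", "delete") are hypothetical
# examples for illustration only.

def one_hot(value, categories):
    """Return a one-hot vector: 1.0 at the index of `value`, 0.0 elsewhere."""
    vec = [0.0] * len(categories)
    vec[categories.index(value)] = 1.0
    return vec

OP_CATEGORIES = ["read", "write", "delete"]

encoded = one_hot("write", OP_CATEGORIES)
print(encoded)  # [0.0, 1.0, 0.0]
```

Exactly one position is set per encoded value, so concatenating such vectors for several categorical fields yields a fixed-width numeric feature block suitable as ML model input.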
Regarding claim 4, Chen-Krasser teaches the method of claim 1; Chen-Krasser further teaches: wherein transforming one or more fields of each of the plurality of IO requests using one-hot coding comprises transforming a sector field to a numeric field with values between -1 to +1 (KRASSER [0023] “Some examples herein permit analyzing a data stream including data stored in, e.g., a file, a disk boot sector or partition root sector, or a block of memory, or a portion thereof… Some examples determine or use a classification indicating, e.g., characteristics of a sample,”, [0176] “Each training value 1006 can be or include, e.g., a binary classification value such as a zero or one, a probability that the respective feature vector is associated with malware, or a value that represents both association with malware and confidence in the classification, e.g., a value between −1 and +1.” In the example that the sector of a sample is binarily classified, Krasser provides for transforming a sector field to a numeric field with values between -1 to +1).

Regarding claim 5, Chen-Krasser teaches the method of claim 1; Chen-Krasser further teaches: wherein transforming one or more fields of each of the plurality of IO trace requests using one-hot coding comprises transforming a byte size field to a numeric field represented by +1 or -1 (KRASSER [0043] “In some examples, the feature vector can include any of the categorical or discrete-valued features in Table 1, encoded as one-hot or n-hot categorical data...Contents of Σ, e.g., ten (or another number of) bytes at the entry point or the beginning of main( ) in an executable file 124 Output(s) of an autoencoder (as discussed below) when provided Σ as input, e.g., when provided bytes at the entry point Size of Σ (e.g., in bytes)”, [0251] “In some examples of regression trees, each first prediction value 1606 is a numerical value indicating a function value.
For example, function values of +1 can indicate that a feature vector is associated with malware, and function values of −1 can indicate that a feature vector is not associated with malware.”).

Regarding claim 6, Chen-Krasser teaches the method of claim 1; Chen-Krasser further teaches: wherein the plurality of input/output (IO) requests includes a number of known ransomware attack IO requests (KRASSER [0049] "However, if the prediction indicates that the API call is anomalous, prediction validator 216 can determine whether this anomaly is relevant to the operation of microservice-based application 100 in terms of security, performance, and/or other dimensions (step 508). An example of an anomaly that is security relevant is one that is indicative of a known attack.").

Regarding claim 7, Chen-Krasser teaches the method of claim 1; Chen-Krasser further teaches: wherein combining a predetermined number of transformed IO requests comprises combining 256 transformed IO requests (KRASSER [0042] “The broad model 114 is depicted as covering a feature space. The feature space is graphically represented (without limitation) by two orthogonal axes. Throughout this document, a “feature vector” is a collection of values associated with respective axes in the feature space...The feature space can have any number N of dimensions, N≥1. In some examples, features can be determined by a feature extractor, such as a previously-trained CM or a hand-coded feature extractor.” In the example that a feature extractor determines 256 features (256 transformed requests) to be combined within a feature space, Krasser provides for a predetermined number of transformed IO requests comprising combining 256 transformed IO requests.).

Regarding claim 8, Chen-Krasser teaches: In a computing environment, a method performed at least in part on at least one processor, (CHEN [0107] “Further, one or more embodiments can relate to a device or an apparatus for performing the foregoing operations.
The apparatus can be specially constructed for specific required purposes, or it can be a generic computer system comprising one or more general purpose processors (e.g., Intel or AMD x86 processors) selectively activated or configured by program code stored in the computer system.”), the method comprising: receiving a plurality of input/output (IO) requests at an object store (CHEN [0030] "each collection agent 202 can collect traces of API calls made by its corresponding microservice 102 during the runtime of application 100 and can send the API call traces to analytics platform 204. Each API call trace can include metadata regarding an API call such as the name/endpoint of the API, the input parameter values, the input parameter types, the response data returned by the callee, the latency of the response, and so on.", [0040] “Yet further, although FIG. 2 depicts collection agents 202(1)-(N) as running alongside their respective microservices 102(1)-(N) on physical servers 104, in alternative embodiments these collection agents may run on remote machines separate from microservices 102(1)-(N). In these embodiments, API call trace data generated by the microservices may be sent to the remote collection agents, which may then collect the traces…”, [0109] "Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations." The collection agents acting as data stores are mapped to object stores in that the API call traces are stored (collected) within them in anticipation of further use.); removing one or more fields from each of the plurality of input/output (IO) requests to generate a plurality of condensed IO requests (CHEN [0045] "Collection agent 202 can then compress the transformed call traces at step 308.
This compression operation can include applying standard data compression techniques to the API call traces, as well as removing certain data elements in each trace that would not be useful for anomaly detection."); transforming one or more fields of each of the plurality of condensed IO requests to generate transformed IO requests (CHEN [0044] "At step 306, collection agent 202 can transform the filtered and/or aggregated API call traces into a format understood by analytics platform 204. In embodiments where the API call traces are already in an appropriate format, this step can be omitted."); combining a predetermined number of transformed IO requests to generate IO trace temporal sequences (KRASSER [0038] "In some examples, the training module 228 can determine the CMs 114 based at least in part on...maximum tree depth, maximum number of trees, regularization parameters, dropout, class weighting, or convergence criteria.", [0237] "Each non-leaf node of a decision tree can specify a test of one or more feature values in a feature vector, e.g., comparison(s) of those value(s) to learned threshold(s)." The predetermined number of transformed IO requests (feature values) is mapped to maximum tree depth in that the transformed IO requests (feature values) are nodes and the IO trace temporal sequences are branches of said tree.); generating machine learning (ML) model input feature vectors by assigning each of the IO trace temporal sequences a ground truth value indicating whether the IO trace temporal sequence represents a ransomware attack (KRASSER [0099] "In some examples of classification, each leaf can include respective weight values for one or more classes. The operation module 230 can sum the weights for each class over all the trees and pick the class with the highest total weight as classification 116 or another output 244.
In some examples, the operation module 230 can apply a logistic or other squashing function, e.g., to each weight before summing or to the sums of the weights.", [0103] "In some examples, CM 220 can be configured to provide a classification 116 for any type of event. In other examples, CM 220 can be configured to provide a classification 116 for events known to be of a particular type. For example, separate CMs 220 can be determined and operated for malware-related events and for targeted-attack events."), wherein the predetermined number of transformed IO requests depends on a speed required to detect the ransomware attack (CHEN [0034] “the multiple API call analyzer can generate the prediction by determining the number of times the API call of the trace was made within a certain time window and evaluating that call count against the baseline. For instance, if an API call was made 1000 times within a time period of 10 minutes when the normal call volume for the API call is typically 100 per 10 minutes, that would be indicative of a volume-based attack and those API calls would be deemed anomalous.” One of ordinary skill in the art would appreciate that speed is merely a measure of frequency over a period of time, such as calls per minute or frames per second.); and training an ML model using a plurality of the ML model input feature vectors (CHEN [0066] “Upon processing and extracting features from each block at step 706, individual API call feature extractor 602 can construct one or more feature vectors based on the extracted features (step 708). Finally, at step 710, individual API call feature extractor 602 can pass the feature vector(s) as input to base ML models 604”).
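The sequence-building and ground-truth-labeling steps recited in claims 1 and 8 (combining a predetermined number of transformed requests into temporal sequences, then labeling each sequence) can be sketched as follows. The window size, toy request values, and ±1 label convention are illustrative assumptions, not disclosures of the application:

```python
def make_sequences(transformed_requests, window):
    """Group consecutive transformed IO requests into fixed-length
    temporal sequences; trailing requests that do not fill a full
    window are dropped."""
    return [
        transformed_requests[i : i + window]
        for i in range(0, len(transformed_requests) - window + 1, window)
    ]

def label_sequences(sequences, is_attack):
    """Pair each sequence with a ground-truth value: +1 for a trace
    known to be a ransomware attack, -1 for benign. The +1/-1 label
    convention is an assumption for illustration."""
    return [(seq, 1 if is_attack else -1) for seq in sequences]

# Toy example: 10 scalar "transformed requests", window of 4
# -> 2 full sequences ([0..3] and [4..7]); 8 and 9 are dropped.
seqs = make_sequences(list(range(10)), window=4)
labeled = label_sequences(seqs, is_attack=False)
```

A smaller window yields labeled sequences sooner after requests arrive, which is one plain reading of the claim 8 language tying the predetermined number to detection speed.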
Regarding claim 10, Chen-Krasser teaches the method of claim 8; Chen-Krasser further teaches: wherein transforming one or more fields of each of the plurality of condensed IO requests further comprises transforming one or more fields of each of the plurality of IO requests using one-hot coding (KRASSER [0043] "In some examples, the feature vector can include any of the categorical or discrete-valued features in Table 1, encoded as one-hot or n-hot categorical data.").

Regarding claim 11, Chen-Krasser teaches the method of claim 8; Chen-Krasser further teaches: wherein transforming one or more fields of each of the plurality of IO requests using one-hot coding comprises transforming a sector field to a numeric field with values between -1 to +1 (KRASSER [0023] “Some examples herein permit analyzing a data stream including data stored in, e.g., a file, a disk boot sector or partition root sector, or a block of memory, or a portion thereof… Some examples determine or use a classification indicating, e.g., characteristics of a sample,”, [0176] “Each training value 1006 can be or include, e.g., a binary classification value such as a zero or one, a probability that the respective feature vector is associated with malware, or a value that represents both association with malware and confidence in the classification, e.g., a value between −1 and +1.” In the example that the sector of a sample is binarily classified, Krasser provides for transforming a sector field to a numeric field with values between -1 to +1).
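The scaling recited in claims 4 and 11 (mapping a sector field into a numeric range of -1 to +1) is, on a plain reading, a linear min-max rescaling. A minimal sketch under the assumption of known field bounds; the sector bounds below are hypothetical, not disclosed by the application:

```python
def scale_to_unit_range(x, lo, hi):
    """Linearly map x from [lo, hi] onto [-1.0, +1.0]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

# Hypothetical bounds for a device's sector field (assumption for
# illustration; the application does not disclose concrete limits).
SECTOR_MIN, SECTOR_MAX = 0, 1_000_000

print(scale_to_unit_range(0, SECTOR_MIN, SECTOR_MAX))          # -1.0
print(scale_to_unit_range(500_000, SECTOR_MIN, SECTOR_MAX))    # 0.0
print(scale_to_unit_range(1_000_000, SECTOR_MIN, SECTOR_MAX))  # 1.0
```

The endpoints of the field's range land exactly at -1 and +1, which is the "predetermined range of -1 to +1" recited in the amended claim 1 language quoted in the §112 rejection above.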
Regarding claim 12, Chen-Krasser teaches the method of claim 8; Chen-Krasser further teaches: wherein transforming one or more fields of each of the plurality of IO trace requests using one-hot coding comprises transforming a byte size field to a numeric field represented by +1 or -1 (KRASSER [0043] “In some examples, the feature vector can include any of the categorical or discrete-valued features in Table 1, encoded as one-hot or n-hot categorical data...Contents of Σ, e.g., ten (or another number of) bytes at the entry point or the beginning of main( ) in an executable file 124 Output(s) of an autoencoder (as discussed below) when provided Σ as input, e.g., when provided bytes at the entry point Size of Σ (e.g., in bytes)”, [0251] “In some examples of regression trees, each first prediction value 1606 is a numerical value indicating a function value. For example, function values of +1 can indicate that a feature vector is associated with malware, and function values of −1 can indicate that a feature vector is not associated with malware.”).

Regarding claim 13, Chen-Krasser teaches the method of claim 8; Chen-Krasser further teaches: wherein the plurality of input/output (IO) requests includes a number of known ransomware attack IO requests (KRASSER [0049] "However, if the prediction indicates that the API call is anomalous, prediction validator 216 can determine whether this anomaly is relevant to the operation of microservice-based application 100 in terms of security, performance, and/or other dimensions (step 508). An example of an anomaly that is security relevant is one that is indicative of a known attack.").

Regarding claim 14, Chen-Krasser teaches the method of claim 8; Chen-Krasser further teaches: wherein combining a predetermined number of transformed IO requests comprises combining 256 transformed IO requests (KRASSER [0042] “The broad model 114 is depicted as covering a feature space.
The feature space is graphically represented (without limitation) by two orthogonal axes. Throughout this document, a “feature vector” is a collection of values associated with respective axes in the feature space...The feature space can have any number N of dimensions, N≥1. In some examples, features can be determined by a feature extractor, such as a previously-trained CM or a hand-coded feature extractor.” In the example that a feature extractor determines 256 features (256 transformed requests) to be combined within a feature space, Krasser provides for a predetermined number of transformed IO requests comprising combining 256 transformed IO requests.).

Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Krasser as applied to claim 1 above, and in further view of Tellez (US 11843622 B1).

Regarding claim 2, Chen in view of Krasser teaches the limitations previously demonstrated. Chen in view of Krasser does not appear to explicitly teach, but in a related art Tellez teaches: The method of claim 1, wherein combining a predetermined number of transformed IO requests further comprises generating a flat file using raw data from the predetermined number of transformed IO requests (col. 7, lines 13-17 “As described above, the system stores the events in a data store. The events stored in the data store are field-searchable, where field-searchable herein refers to the ability to search the machine data (e.g., the raw machine data)…”, col. 146, lines 20-24 “FIG. 11C illustrates an illustrative example of how machine data can be stored in a data store in accordance with various disclosed embodiments. In other embodiments, machine data can be stored in a flat file.”). Chen, Krasser, and Tellez are from the same field of endeavor, as all are directed to classification of cybersecurity events, which is also the field of endeavor of the claimed invention.
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine the teachings of Chen in view of Krasser and Tellez by incorporating the teachings of Tellez into Chen in view of Krasser for classification of cyber-attacks as claimed. The motivation to combine is to improve the quality of classifications (Chen, [AB]; Krasser, [AB]; Tellez, [AB]).

Claim 9 recites substantially similar limitations as claim 2 in the form of: a method performed at least in part on at least one processor in a computing system for implementing the corresponding method (as previously demonstrated by CHEN-KRASSER); therefore, it is rejected under a similar rationale. Chen teaches: In a computing environment, a method performed at least in part on at least one processor, ([0107] “Further, one or more embodiments can relate to a device or an apparatus for performing the foregoing operations. The apparatus can be specially constructed for specific required purposes, or it can be a generic computer system comprising one or more general purpose processors (e.g., Intel or AMD x86 processors) selectively activated or configured by program code stored in the computer system.”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kamryn Gillespie whose telephone number is 703-756-5498. The examiner can normally be reached on Monday through Thursday from 9am to 6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Linglan Edwards can be reached on (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pairdirect.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.J.G./Examiner, Art Unit 2408 /LINGLAN EDWARDS/Supervisory Patent Examiner, Art Unit 2408
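The claim 2 limitation addressed in the Tellez rejection above, generating a flat file using raw data from the predetermined number of transformed IO requests, can be illustrated with a short sketch. The comma-separated, line-oriented layout below is an assumed illustration of a "flat file" (a schema-free serial record format), not a format the application discloses:

```python
import io

def to_flat_file(transformed_requests):
    """Serialize transformed IO requests into a flat (line-oriented,
    schema-free) text form, one request per line. The CSV-like layout
    is an assumption for illustration only."""
    buf = io.StringIO()
    for req in transformed_requests:
        buf.write(",".join(str(v) for v in req) + "\n")
    return buf.getvalue()

# Two toy transformed requests, each with two already-scaled fields.
flat = to_flat_file([[-1.0, 0.5], [0.25, 1.0]])
print(flat)
```

In practice the same serialization could target a file on disk rather than an in-memory buffer; `io.StringIO` is used here only to keep the sketch self-contained.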

Prosecution Timeline

Jun 30, 2022
Application Filed
Apr 16, 2025
Non-Final Rejection — §103, §112
Sep 24, 2025
Interview Requested
Oct 01, 2025
Applicant Interview (Telephonic)
Oct 02, 2025
Response Filed
Oct 03, 2025
Examiner Interview Summary
Dec 08, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596795
DETECTING A CURRENT ATTACK BASED ON SIGNATURE GENERATION TECHNIQUE IN A COMPUTERIZED ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596796
Self-synchronous Side-Channel Attack Countermeasure
2y 5m to grant Granted Apr 07, 2026
Patent 12554859
GENERATING 3-DIMENSIONAL MODELS AND CONNECTIONS TO PROVIDE VULNERABILITY CONTEXT
2y 5m to grant Granted Feb 17, 2026
Patent 12518004
MITIGATING POINTER AUTHENTICATION CODE (PAC) ATTACKS IN PROCESSOR-BASED DEVICES
2y 5m to grant Granted Jan 06, 2026
Patent 12511376
METHOD, SYSTEM, AND TECHNIQUES FOR PREVENTING ANALOG DATA LOSS
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+50.0%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
