Prosecution Insights
Last updated: April 19, 2026
Application No. 18/767,817

CYBER THREAT DEFENSE SYSTEM AND METHOD

Non-Final Office Action: §101, §103, §112, Double Patenting
Filed: Jul 09, 2024
Examiner: AHMED, MAHABUB S
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: Darktrace Holdings Limited
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 86% (above average; 247 granted / 289 resolved; +27.5% vs TC avg)
Interview Lift: +7.8% (moderate lift among resolved cases with interview)
Typical Timeline: 2y 7m average prosecution; 17 applications currently pending
Career History: 306 total applications across all art units

Statute-Specific Performance

§101: 17.3% (-22.7% vs TC avg)
§103: 35.4% (-4.6% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 18.4% (-21.6% vs TC avg)

Based on career data from 289 resolved cases.

Office Action

Rejections: §101, §103, §112, Double Patenting
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the communication (preliminary amendment) filed on 02/21/2025.

Status of claims in the instant application: Claims 21-35 are pending. Claims 1-20 have been canceled. Claims 21-35 have been newly added.

Election/Restrictions

No claim restrictions are warranted at the applicant's initial time of filing for patent.

Priority

This application is a CON of 17/187,383, filed on 02/26/2021, which claims benefit of 62/983,307, filed on 02/28/2020.

Information Disclosure Statement

The Information Disclosure Statements (IDS) filed on 07/09/2024 have been considered, and signed copies of the IDS forms have been attached to this Office action.

Drawings

The drawings filed on 07/09/2024 have been inspected and are in compliance with MPEP 608.02.

Specification

The specification filed on 07/09/2024 has been inspected and is in compliance with MPEP 608.01.

Claim Objections

Claim 23 is objected to for a typographical error. Claim 23 recites "… content of a-computer file". This should be "… content of [[a-computer]] a computer file". Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: Claims 21, 23, 24, 25, 26, 27 and 28 recite the limitations: "a network module configured to ingest network data …", "an analyzer module configured to cooperate with one or more machine learning models and the network module …", "a natural language processing module configured to analyse the network data metrics,", "an artificial intelligence classifier configured to receive outputs of", "an email module configured to ingest email data …"

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The Examiner has investigated the published disclosure (US 20240364728 A1 - specification, drawing …) and finds only the following descriptions, which relate to functionality:

network module: This nonce (placeholder) term is described only for its functionality, as can be found in FIG. 2, Para [0012, 0095, 0103, 0106, 0161, 0245]. No description, definition, structure, algorithm, or associated hardware is found in the disclosure.

analyzer module: This nonce (placeholder) term is described only for its functionality, as can be found in FIG. 2, Para [0012, 0018, 0020, 0101, 0102, 0104-0107, 0109-0110, 0112, 0116, 0162, 0245, 0261, 0266, 0268]. No description, definition, structure, algorithm, or associated hardware is found in the disclosure.

natural language processing module: This nonce (placeholder) term is described only for its functionality, as can be found in Para [0015-0017, 0253, 0256, 0259]. No description, definition, structure, algorithm, or associated hardware is found in the disclosure.

artificial intelligence classifier: This nonce (placeholder) term is described only for its functionality, as can be found in Para [0012, 0014, 0024, 0245, 0250, 0265, 0277]. No description, definition, structure, algorithm, or associated hardware is found in the disclosure.

email module: This nonce (placeholder) term is described only for its functionality, as can be found in FIG. 2, Para [0018, 0161, 0261]. No description, definition, structure, algorithm, or associated hardware is found in the disclosure.

However, none of the sections/paragraphs noted above that mention the placeholder (nonce) terms recited in the claims, as identified previously, provide any further description of the algorithms and structures (hardware) used to implement them, or positively link such structure to the recited/claimed functions.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21-28 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention.

The claim limitations identified previously for claims 21-28 invoke interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) amend the claims so that the claim limitations will no longer be interpreted as limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claims, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181. Appropriate correction is required.

Claims 23-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention.

Claim 23 recites the limitation, "The system of claim 22, wherein a second machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the file names or file extensions of the and/or content of a-computer file". The use of "and/or" in claim limitations makes the claims indefinite/ambiguous, as "and" requires all the terms in the claim limitation, but "or" makes each of the terms optional. Dependent claims 24-25 inherit the issue from claim 23 and are also similarly rejected. Appropriate correction is required.

*** Note: For examination purposes, the claim limitations are interpreted to recite only "or" instead of "and/or".

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 21-28 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The claim limitations identified previously for claims 21-28 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
Therefore, the claims are rejected as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-25, 27-33 and 35 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claim 21 recites: "A cyber threat defense system comprising: a network module configured to ingest network data associated with network structures, network devices or network users; an analyzer module configured to cooperate with one or more machine learning models and the network module, wherein a first machine learning model of the one or more machine learning models is configured to evaluate the network data, identify metrics associated with the network data, and cooperate with the network module to output a score indicative of whether anomalous network data metrics are caused by a cyber threat; and a natural language processing module configured to analyse the network data metrics, wherein the network data metrics are associated information pertaining to data that has left the network or has been altered."

As recited in claim 21, the determination of a cyber threat based on a score can be considered a comparison of a certain score/metric against a threshold, indicating a threat if the score is above the threshold. This limitation of claim 21 can be considered a mental step/process, as it can reasonably be performed in the human mind (with the aid of pencil and paper). The natural language processing of certain network activity (i.e., exfiltration of data) can also be considered a mental step, as it can reasonably be performed by analyzing a network activity log to determine the commands that caused data to be sent out of the system. The claim does not contain any other limitation that can be considered to incorporate the above-mentioned mental process into a practical application. Furthermore, the nominal recitation of machine learning does not add anything significantly more to the claim. Therefore, claim 21 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Dependent claims 22-25 and 27-28 are also rejected for reasons similar to those for claim 21. The additional elements recited in these dependent claims recite more of the same mental process and/or receiving/transmitting data. The receiving/transmitting/storing of data is just an insignificant extra-solution activity performed as an ordinary step of data communication. Therefore, the dependent claims are also rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 29-33 and 35 are also rejected for reasons similar to those for claims 21-25 and 27-28. Appropriate corrections are required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 21-35 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. US 12069073 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are just a broader version of the claims of issued patent US 12069073 B2, which makes the claims of the instant application obvious.

Instant Application vs. Reference Patent (12069073):

Instant claim 21 (New): "A cyber threat defense system comprising: a network module configured to ingest network data associated with network structures, network devices or network users; an analyzer module configured to cooperate with one or more machine learning models and the network module, wherein a first machine learning model of the one or more machine learning models is configured to evaluate the network data, identify metrics associated with the network data, and cooperate with the network module to output a score indicative of whether anomalous network data metrics are caused by a cyber threat; and a natural language processing module configured to analyse the network data metrics, wherein the network data metrics are associated information pertaining to data that has left the network or has been altered."

Instant claim 22 (New): "The system of claim 21, wherein the data corresponds to a computer file and the information pertaining to the data comprises at least file names and file extensions of at least one computer file that has left the network or has been altered."

Reference claim 1: "A cyber threat defense system comprising: a network module configured to ingest network data associated with network structures, network devices and network users; an analyzer module configured to cooperate with one or more machine learning models and the network module, wherein a first machine learning model of the one or more machine learning models is configured to evaluate the network data, identify metrics associated with the network data and then cooperate with the network module to output a score indicative of whether anomalous network data metrics are caused by a cyber threat; an artificial intelligence classifier configured to receive outputs of each of the one or more machine learning models, determine a probability that a cybersecurity breach has occurred, and transmit a message to an autonomous response module based on the determined probability of a cybersecurity breach; and a natural language processing module configured to analyse the network data metrics, wherein the network data metrics are associated with at least file names and file extensions of at least one computer file that has left the network or has been altered."

Instant claim 23 (New): "The system of claim 22, wherein a second machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the file names or file extensions of the and/or content of a-computer file."

Reference claim 4: "The system of claim 1, wherein a second machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the file names or file extensions of the computer file."

Instant claim 24 (New): "The system of claim 23, wherein the network data metrics are further associated with file extensions and mime types of computer files that have been altered, and wherein a third machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the file extensions and mime types."

Reference claim 5: "The system of claim 1, wherein the network data metrics are further associated with file extensions and mime types of computer files that have been altered, and wherein a third machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the file extensions and mime types."

Instant claim 25 (New): "The system of claim 24, wherein the network data metrics are further associated with network communication protocols, and wherein a fourth machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the network communication protocols."

Reference claim 6: "The system of claim 1, wherein the network data metrics are further associated with network communication protocols, and wherein a fourth machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the network communication protocols."

Instant claim 26 (New): "The system of claim 21 further comprising: an artificial intelligence classifier configured to receive outputs of each of the one or more machine learning models, determine a probability that a cybersecurity breach has occurred, and transmit a message identifying the probability of the cybersecurity breach."

Reference claim 3: "The system of claim 2, wherein the first machine learning model is configured to map the network data metrics and the probability distribution to a continuous shape and wherein the artificial intelligence classifier is configured to identify anomalous network data metrics based on a comparison between the probability distribution and the network data metrics."

Instant claim 27 (New): "The system of claim 21 further comprising: an email module configured to ingest email data, wherein the analyzer module is further configured to receive the email data, wherein a second machine learning model of the one or more machine learning models is configured to identify metrics associated with unencrypted email protocols or unencrypted email header information, and wherein the second machine learning model is further configured to determine a score based on predetermined text strings identified in the unencrypted email protocols or unencrypted email header information."

Reference claim 7: "The system of claim 1, further comprising: an email module configured to ingest email data and a natural language processing module, wherein the analyzer module is further configured to receive the email data, wherein a fifth of the one or more machine learning models is configured to identify metrics associated with unencrypted email protocols or unencrypted email header information, and wherein the fifth machine learning model is further configured to determine a score based on predetermined text strings identified in the unencrypted email protocols or unencrypted email header information, wherein the fifth machine learning model is further configured to utilize one or more unsupervised machine learning algorithms."

Instant claim 28 (New): "The system of claim 26, wherein the analyzer module is further configured to form a hypothesis relating to whether a cybersecurity breach has occurred and provide outputs of the one or more machine learning models to the artificial intelligence classifier when the hypothesis is resolved to thereby continually train the artificial intelligence classifier during its operational life or deployment to identify cybersecurity breaches."

Reference claim 9: "The system of claim 1, wherein the analyzer module is further configured to form a hypothesis relating to whether a cybersecurity breach has occurred and provide outputs of the one or more machine learning models to the artificial intelligence classifier when the hypothesis is resolved to thereby continually train the artificial intelligence classifier during its operational life or deployment to identify cybersecurity breaches."

Note: Claims 29-35 are also rejected for reasons similar to the ones highlighted in the comparison above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. Claims 21-26 and 29-34 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No.: US 20180248895 A1 to Watson et al. (hereinafter “Watson”) in view of Pub. No.: US 20190108355 A1 to Carson (hereinafter “Carson”). Regarding Claim 21. (New) Watson discloses A cyber threat defense system (Watson, Para [0005], FIG. 3: … FIG. 3 illustrates an example system that can be used to detect anomalous behavior with respect to customer documents that can be utilized in accordance with various embodiments …) comprising: a network module configured to ingest network data associated with network structures, network devices or network users (Watson, Para [0029], FIG. 3: … As with the environment of FIG. 1, a customer can utilize a client device, here operating a customer console 102, to access resources of a resource provider environment 106 across at least one network 104. As mentioned, this can be used to store customer data, such as to at least one data repository 214, and documents to at least one document store 212, among other such options. The customer in some embodiments can utilize the customer console 102 to specify security settings that can be utilized by an access manager 208, or other such system or service, to control access to various data and documents stored for the customer …); an analyzer module configured to cooperate with one or more machine learning models and the network module (Watson, Para [0033], FIG. 3: … FIG. 3 illustrates an example system 300 that can be used to determine anomalous behavior for classified documents in accordance with various embodiments. As with the previous example, this system can include an activity monitor 204 that can monitor the accessing of various documents, data, and other objects by users of the system. The information can be stored to a location such as a log data store 206 or other such location. 
Information for each activity can be fed to a classifier service 302, or pulled from an activity queue by the classifier service, among other such options. In some embodiments the activity data can be processed individually while in other embodiments the data may be processed in batches, such as in batches of thirty-two activity entries, in order to prevent the training of the neural network from consuming excessive resources, etc. In some embodiments aggregations or summaries of user activity over a period of time can be processed, which can be more cost and resource efficient than using the raw data. For example, all service interactions for a user can be summarized using a technology such as Apache Spark, an open source cluster computing framework …), wherein a first machine learning model of the one or more machine learning models is configured to evaluate the network data, identify metrics associated with the network data (Watson, Para [0034], FIG. 3: … In this example, the activity data is received to the classifier service and processed by a recurrent neural network (RNN). It should be understood that other types of neural networks (i.e., convolutional or generative adversarial networks) can be used as well within the scope of the various embodiments. The activity data can be processed by the RNN of the classifier service initially to predict the activity of various users over a determined future period of time. This information can be determined by the trained RNN and based upon information such as past behavior of the user and behavior of the user's peers, among other such options. The predicted or expected behavioral data can be stored to a behavior data store 310 or other such location …), and cooperate with the network module to output a score indicative of whether anomalous network data metrics are caused by a cyber threat (Watson, Para [0035, 0024], FIG. 
3: … When subsequent activity is detected, that information can be fed to the classifier service 302 for analysis. The activity data can be processed with the RNN to identify whether any of the activity falls outside the expected behavior in such a way as to be labeled suspicious. In some embodiments there may be various thresholds, values, or ranges for which activity is determined to be suspicious when deviating from expected behavior, while in other embodiments the RNN can be trained to expect variations and only flag these types of activities as suspicious, among other such options. It might be the case, however, that usage over a specific period may appear to be suspicious … The service can also determine a risk level or score for each document. The service can identify and surface anomalous behavior or engagement with data, alerting the customer of a potential breach or attack. For example, if a customer service representative attempts to access confidential personnel files from a foreign IP address, the service might identify this activity as anomalous and generate an alert. In at least some embodiments the service can also assign a risk score or security level to each such document, whereby the determination as to whether to generate an alert can be based at least in part to the risk score(s) of the document(s) accessed …); and a natural language processing module configured to analyse the network data metrics (Watson, Para [0024, 0041], Claim 16: … In some embodiments such a service can utilize artificial intelligence (AI) or machine learning to locate and track customer intellectual property, such as email, documents, and spreadsheets, as well as associated data and other objects. In some embodiments this can include documents stored across a customer network as well. 
Such a service can leverage natural language processing (NLP) to understand information such as the topic, business relevance, and value of each document or piece of data … In some embodiments natural language understanding (NLU) can be used to determine topics and concepts, or words relevant to those concepts, which can be vectorized into a vector space to assemble the topics and determine their distance in vector space. The vectors and space may be generated using, for example, linear discriminant analysis (LDA) or principal component analysis (PCA), among other such options …), [wherein the network data metrics are associated information pertaining to data that has left the network or has been altered]. However, Watson does not explicitly teach, but Carson, from the same or similar field of endeavor, teaches: “wherein the network data metrics are associated information pertaining to data that has left the network or has been altered (Carson, Abstract, Paras [0073]: … Provided herein are systems and methods for preventing or controlling data movement. A learning engine may detect capabilities of a computing environment for allowing data access or transfer, and activities relating to data access or transfer from the computing environment. Data assets of the computing environment that are protected may be identified, according to metadata of the data assets. The learning engine may, according to the identified data assets and at least one of the detected capabilities or activities, determine a situation within the computing environment that represents potential or actual exfiltration of one of the identified data assets … The learning engine 240 may compare the recognized text on each detected GUI element 215a-n (and/or any available meta-data) to a predefined list of meta-data, words, or phrases indicative of data egress from the computing environment 205.
Examples of meta-data, words, or phrases indicative of data egress may include “Send”, “Forward”, “Transfer”, “Print”, “Attach”, “Upload”, and “Copy Over”, among others. In some embodiments, the learning engine 240 may apply natural language processing techniques (e.g., lexical semantics, parsing, semantic-search knowledge database) to the recognized text on each GUI element 215a-n to determine whether the text includes meta-data, words, or phrases indicative of data egress from the computing environment 205. Based on the recognized text on each detected GUI element 215a-n, the learning engine 240 may identify the capabilities of the application 210 in accessing the data asset storage 225 or transferring the data assets 230 …)” Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carson into the teachings of Watson, because it discloses that, “Described herein are systems and methods for preventing or controlling misuse of data or file (e.g., exfiltration, opening, storing, downloading, uploading, movement). To protect against unauthorized exfiltration of data assets, the present systems and methods may execute preventive or counter measures based on identifying potentially compromising situations on a computing environment using machine learning techniques (Carson, Para [0003])”. Regarding Claim 22. (New) The combination of Watson-Carson discloses the system of claim 21, Carson further discloses, “wherein the data corresponds to a computer file and the information pertaining to the data comprises at least file names and file extensions of at least one computer file that has left the network or has been altered (Carson, Para [0063, 0067, 0074, 0079]: … In some embodiments, an exfiltration controller executing in the computing environment may identify data assets through the use of associated metadata. 
Such data assets may include document files, data strings (e.g., personal or security identifiers), images, audio, or any other file or data residing in the computing environment. The data assets that are to be protected may be identified through a combination of information context and content inspection. Information context may include an address, a file/data type, date and/or time of creation or update, location, source, and/or a file/data owner/author, among others, of the data asset … The data asset storage 225 may include one or more data assets 230. The data asset storage 225 may correspond to one or more directories maintaining, storing or otherwise including the data assets 230. Each data asset 230 may correspond to one or more files (e.g., document files, spreadsheet files, electronic emails, database files, image files, audio files, video files) stored within or otherwise accessible from the computing environment 205. Each data asset 230 may be stored on the storage 128, main memory 122, cache memory 140, I/O devices 130a-n, or any other computer readable storage medium connected to or within the computing environment 205. Each data asset 230 of the data asset storage 225 may have one or more attributes. Each data asset 230 may be associated with a residing location. The residing location may be a file pathname that may indicate a drive letter, volume, server name, root directory, sub-directory, file name, and/or extension among others …).” The motivation to further combine Carson remains the same as in claim 21. Regarding Claim 23.
(New) The combination of Watson-Carson discloses the system of claim 22, Watson further discloses, “wherein a second machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the file names or file extensions of the and/or content of a-computer file (Watson, Para [0030]: … In at least some embodiments there can be any arbitrary content stored to the data stores 214 and/or document stores 212 on behalf of a customer, such as an organization. In at least some embodiments it can be desirable to analyze this content to provide visibility into the types of data, documents, and other objects stored on behalf of the customer. In this example a crawler 210 can be used to locate and analyze the various documents (and other data, etc.) stored on behalf of a customer. The crawler can include various data crawling algorithms for purposes of locating, parsing, and evaluating contents of the data, such as to analyze words, numbers, strings, or patterns contained therein. The crawler 210 can also include, or work with, a classifier algorithm that can classify the various documents, such as to assign one or more topics to each document. The crawler 210 can also include, or work with, one or more risk assessment algorithms that can determine a risk score for each (or at least a subset of) the documents. In at least some embodiments the risk score can be a composite of various metrics for regular expressions, which can be based at least in part upon the presence of various topics and themes in the documents …).” Regarding Claim 24. (New) The combination of Watson-Carson discloses the system of claim 23, Carson further discloses, “wherein the network data metrics are further associated with file extensions (Carson, Para [0069]: … Each data asset 230 of the data asset storage 225 may have one or more attributes. Each data asset 230 may be associated with a residing location. 
The residing location may be a file pathname that may indicate a drive letter, volume, server name, root directory, sub-directory, file name, and/or extension among others …) and Watson further discloses, “mime types of computer files that have been altered (Watson, Para [0031]: … Various embodiments can also isolate features of the documents that indicate risk for a business, such as by recognizing pharmaceutical documents as being something that is accessible to only a small quantity of people within a business. This can help to learn types of data and associated topics for use in generating risk scores and access patterns, etc. …), and wherein a third machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the file extensions and mime types (Watson, Para [0030]: … In at least some embodiments there can be any arbitrary content stored to the data stores 214 and/or document stores 212 on behalf of a customer, such as an organization. In at least some embodiments it can be desirable to analyze this content to provide visibility into the types of data, documents, and other objects stored on behalf of the customer. In this example a crawler 210 can be used to locate and analyze the various documents (and other data, etc.) stored on behalf of a customer. The crawler can include various data crawling algorithms for purposes of locating, parsing, and evaluating contents of the data, such as to analyze words, numbers, strings, or patterns contained therein. The crawler 210 can also include, or work with, a classifier algorithm that can classify the various documents, such as to assign one or more topics to each document. The crawler 210 can also include, or work with, one or more risk assessment algorithms that can determine a risk score for each (or at least a subset of) the documents. 
In at least some embodiments the risk score can be a composite of various metrics for regular expressions, which can be based at least in part upon the presence of various topics and themes in the documents …).” The motivation to further combine Carson remains the same as in claim 21. Regarding Claim 25. (New) The combination of Watson-Carson discloses the system of claim 24, Watson further discloses, “wherein the network data metrics are further associated with network communication protocols (Watson, Para [0055]: … Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, FTP, UPnP, NFS, and CIFS. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network and any combination thereof …), and wherein a fourth machine learning model of the one or more machine learning models is configured to determine a score based on predetermined text strings identified in the network communication protocols (Watson, Para [0030]: … In at least some embodiments there can be any arbitrary content stored to the data stores 214 and/or document stores 212 on behalf of a customer, such as an organization. In at least some embodiments it can be desirable to analyze this content to provide visibility into the types of data, documents, and other objects stored on behalf of the customer. In this example a crawler 210 can be used to locate and analyze the various documents (and other data, etc.) stored on behalf of a customer. The crawler can include various data crawling algorithms for purposes of locating, parsing, and evaluating contents of the data, such as to analyze words, numbers, strings, or patterns contained therein.
The crawler 210 can also include, or work with, a classifier algorithm that can classify the various documents, such as to assign one or more topics to each document. The crawler 210 can also include, or work with, one or more risk assessment algorithms that can determine a risk score for each (or at least a subset of) the documents. In at least some embodiments the risk score can be a composite of various metrics for regular expressions, which can be based at least in part upon the presence of various topics and themes in the documents. One advantage to such an approach is that a customer can utilize the customer console 102 or another such mechanism to gain visibility into the type of content that is stored for the customer, as well as the risk associated with that content. In at least some embodiments the customer can also have the ability to view the content, as well as the assigned topics and risk scores, and make adjustments that the customer deems appropriate. These adjustments can then be used to further train the neural network in order to improve future classifications and score determinations. In at least some embodiments, the customer can also view patterns or types of access to the various documents, lists of users or peer groups who access specific documents or topics, documents with specific risk scores, and the like …).” Regarding Claim 26. (New) The combination of Watson-Carson discloses the system of claim 21, Watson further discloses, “further comprising: an artificial intelligence classifier configured to receive outputs of each of the one or more machine learning models, determine a probability that a cybersecurity breach has occurred, and transmit a message identifying the probability of the cybersecurity breach (Watson, Para [0034-0036]: … the activity data is received to the classifier service and processed by a recurrent neural network (RNN). 
It should be understood that other types of neural networks (i.e., convolutional or generative adversarial networks) can be used as well within the scope of the various embodiments. The activity data can be processed by the RNN of the classifier service initially to predict the activity of various users over a determined future period of time. This information can be determined by the trained RNN and based upon information such as past behavior of the user and behavior of the user's peers, among other such options. The predicted or expected behavioral data can be stored to a behavior data store 310 or other such location … approaches in accordance with various embodiments can attempt to smooth the results of the RNN, such as by utilizing a Kalman filter or other such algorithm. A Kalman filter is used to generate a linear quadratic estimation based on a series of measurements observed over time that can contain noise and other inaccuracies to generate an estimate that is more accurate that might be based on a single time period alone. The Kalman filter can be used when predicting user behavior as well as determining whether specific activity is suspicious, or unacceptably outside the predicted behavior, among other such options. In one example, a Kalman filter takes a time series of activity for a given user, such as the number of documents downloaded or number of API calls, etc., over multiple periods of time. The results over the different time series can be smoothed with the Kalman filter to, with some minor training, generate more active predictions that would otherwise be generated with the RNN alone. In at least some embodiments the RNN and Kalman filter can be used concurrently, with the RNN generating the individual feature predictions that get smoothed out by the Kalman filter. 
The smoothed results can then be provided to a trained (high level) classifier algorithm, which can ultimately determine whether the activity is suspicious such that an alarm should be generated or other such action taken … In some embodiments the aggregate error score will be compared against the peers of the user, and error scores that deviate from the peer scores by more than a threshold amount may be reported as suspicious, in order to account for unexpected variations that are experienced by a group of peers, and thus less likely to be suspicious activity on behalf of any individual user of that peer group …).” Regarding Claim 29. This claim recites all the same or similar limitations as claim 21, and is hence similarly rejected as claim 21. Note: Watson also discloses “A non-transitory computer-readable medium including software that, when executed with one or more processors, detect a cyber threat (Watson: Para [0048, 0059], FIG. 8)”. Regarding Claim 30. This claim recites all the same or similar limitations as claim 22, and is hence similarly rejected as claim 22. Regarding Claim 31. This claim recites all the same or similar limitations as claim 23, and is hence similarly rejected as claim 23. Regarding Claim 32. This claim recites all the same or similar limitations as claim 24, and is hence similarly rejected as claim 24. Regarding Claim 33. This claim recites all the same or similar limitations as claim 25, and is hence similarly rejected as claim 25. Regarding Claim 34. This claim recites all the same or similar limitations as claim 26, and is hence similarly rejected as claim 26. Claims 27 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No.: US 20180248895 A1 to Watson et al. (hereinafter “Watson”) in view of Pub. No.: US 20190108355 A1 to Carson (hereinafter “Carson”), as applied to claim 21 above, and further in view of Pub. No.: US 20090265612 A1 to Cheney (hereinafter “Cheney”). Regarding Claim 27.
(New) The combination of Watson-Carson discloses the system of claim 21, Watson further discloses, “further comprising: an email module configured to ingest email data, wherein the analyzer module is further configured to receive the email data, wherein a second machine learning model of the one or more machine learning models is configured to identify metrics associated with [unencrypted] email protocols or [unencrypted] email header information (Watson, Para [0054-0055]: … The various embodiments can be further implemented in a wide variety of operating environments, which in some cases can include one or more user computers or computing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or notebook computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols … …)”, and Carson further discloses, “wherein the second machine learning model is further configured to determine a score based on predetermined text strings identified in the [unencrypted] email protocols or [unencrypted] email header information (Carson, Para [0009, 0069]: … In some embodiments, the learning engine may identify a first data asset that is protected, by monitoring or identifying at least one of: a residing location of the first data asset, an owner of the first data asset, a type of the first data asset, or whether some or all of the first data asset includes classified or sensitive data. In some embodiments, the computing environment may include at least one of a web browser, an application, a background system service, or an input/output (I/O) device. 
In some embodiments, the application may include a cloud-synchronization application, an electronic-mail application, a document processing or rendering application, a data transfer or copying application, or a facsimile or printing application … The data asset storage 225 may include one or more data assets 230. The data asset storage 225 may correspond to one or more directories maintaining, storing or otherwise including the data assets 230. Each data asset 230 may correspond to one or more files (e.g., document files, spreadsheet files, electronic emails, database files, image files, audio files, video files) stored within or otherwise accessible from the computing environment 205 …).” The motivation to further combine Carson remains the same as in claim 21. However, the combination of Watson-Carson does not explicitly teach, but Cheney, from the same or similar field of endeavor, teaches: “unencrypted email protocols (Cheney, Para [0068]: … Session authors are allowed to specify content for these elements to hide their email address. The benefit is to customize how other session authors can control direct traffic, such as reply and forward traffic, back to their email address. These fields offer an actual traffic benefit, but no security advantage. MML processors generally do not use, process, or regulate the <from> or <reply-to> tags for any intent related to security. Email transmission protocols operate in unencrypted plain-text by default, which is where any user can easily find any address or identity information. …) or unencrypted email header” Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cheney into the combined teachings of Watson-Carson, because it discloses that, “Email transmission protocols operate in unencrypted plain-text by default, which is where any user can easily find any address or identity information (Cheney, Para [0068])”.
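The claim 27 limitation at issue here, a score based on predetermined text strings identified in unencrypted email protocols or header information, can be reduced to a short sketch. This is an illustrative sketch only, not code from the application or any cited reference; the header-string list, the weights, and the clamping to [0, 1] are all hypothetical choices.

```python
# Rough sketch (hypothetical strings and weights, not from any cited
# reference): score plaintext email headers against a predetermined
# list of text strings, as in the claim 27 limitation discussed above.
from email import message_from_string

PREDETERMINED_STRINGS = {       # hypothetical strings and weights
    "x-forwarded-to": 0.4,
    "bcc": 0.2,
    "attachment": 0.3,
}

def header_risk_score(raw_message: str) -> float:
    """Sum the weights of predetermined strings found in the headers."""
    msg = message_from_string(raw_message)
    header_text = "\n".join(f"{k}: {v}" for k, v in msg.items()).lower()
    score = sum(w for s, w in PREDETERMINED_STRINGS.items() if s in header_text)
    return min(score, 1.0)      # clamp to [0, 1]

raw = ("From: a@example.com\n"
       "X-Forwarded-To: b@example.org\n"
       "Subject: attachment inside\n"
       "\n"
       "body")
print(header_risk_score(raw))
```

In the Watson passages quoted elsewhere in this action, per-feature scores of this kind feed a higher-level classifier rather than triggering alerts directly.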
Regarding Claim 35. This claim recites all the same or similar limitations as claim 27, and is hence similarly rejected as claim 27. Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No.: US 20180248895 A1 to Watson et al. (hereinafter “Watson”) in view of Pub. No.: US 20190108355 A1 to Carson (hereinafter “Carson”), as applied to claim 21 above, and further in view of Pub. No.: US 20200067861 A1 to Leddy et al. (hereinafter “Leddy”). Regarding Claim 28. (New) The combination of Watson-Carson discloses the system of claim 26; however, it does not explicitly teach, but Leddy, from the same or similar field of endeavor, teaches: “wherein the analyzer module is further configured to form a hypothesis relating to whether a cybersecurity breach has occurred and provide outputs of the one or more machine learning models to the artificial intelligence classifier when the hypothesis is resolved to thereby continually train the artificial intelligence classifier during its operational life or deployment to identify cybersecurity breaches (Leddy, Para [00136, 0139, 1659-1663]: … In some embodiments, the training module is configured to use machine learning techniques to perform training. For example, obtained messages/communications can be used as training/test data upon which authored rules are trained and refined. Various machine learning algorithms and techniques, such as support vector machines (SVMs), neural networks, etc. can be used to performing the training/updating … In some embodiments, machine learning techniques are used to perform the training. In some embodiments, the training progresses through a cycle of test (where the message that passed through the test process is determined to have training potential), enters a training phase, performs re-training, etc. … The TFIDF feature extraction, in one embodiment, obtains text files of a particular data format (referred to herein as the “ZapFraud” data format) and outputs TFIDF feature vectors.
It first divides the given data into train and test datasets, and then builds n-gram feature list of train and test sets using n-gram feature extractor described above. Then it builds TFIDF feature vectors from n-gram feature vectors of train set. For TF weighting, it uses “double normalization K” scheme with the K value of 0.4. … This example process implements machine learning classifiers, such as: Logistic regression, random forest, SVM and naïve Bayes, respectively. The processes receive as input TFIDF training and test sets along with their labels as inputs and outputs predicted labels upon the given test set. It first trains the classifier using the given test set and its label, and then predicts the labels of the test set. Once the classifiers have been tested, they are connected to classify the input messages pipelined to the applicable ML classifiers. The email classifier, in one embodiment, uses the NLTK library written in Python and made by NLTK project (http://www.nitk.org/) and scikit-learn library written in Python and made by scikit-learn developers (http://scikit-learn.org/) … The machine learning classifier, in one embodiment, implements the following machine learning classifiers: Logistic regression, random forest, SVM and naïve Bayes, respectively. Each method corresponds to a single library included in sklearn library. The programs gets TFIDF train and test sets along with their labels as inputs and outputs predicted labels upon the given test set. It first trains the classifier using the given test set and its label, and then predicts the labels of the test set. 
The classifier can be used to verify selections made by other filters, such as the storyline filter, or can be used in addition to and independently of such filters …).” Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Leddy into the combined teachings of Watson-Carson, because it discloses that, “The set of Scam Vectors is trained with a set of scam messages, and then pruned with a set of known good ‘Ham’ messages. Each Ham message is processed using a trained Aho-Corasick(A-C) graph, or a similar data structure used to identify phrases in a body of text. An ordered numeric vector, comprised of numeric scam rank values, is created for this Ham message, similar as to the training messages. This ordered numeric vector is then checked against the known Scam vectors for matching vector entries. If a good message matches a Scam vector, then the Scam vector is eliminated from the set of Scam Vectors. The accuracy of the system should improve as more scam messages are used to train the system and more Ham message are used to prune it (Leddy, Para [1465])”.

Pertinent Prior Art

The following prior art references, made of record and not relied upon, are considered pertinent to applicant's disclosure.

US 10740164 B1; Roy et al.: Roy discloses a system that obtains a security assessment requirement from a user for surveillance of a plurality of application programming interfaces. The system may create a data corpus from the assessment data associated with the query. The system may create a sequence classification model from the data corpus. The system may identify a plurality of risk parameters and a plurality of risk mapping levels associated with the query. The system may create a risk profile for the plurality of application programming interfaces based on mapping the plurality of risk parameters to the plurality of risk mapping levels and the sequence classification model.
The system may create a rectification corpus and a data rectification model comprising a plurality of remediations for automated healing of a risk identified by the risk profile. The system may generate a security assessment result for the resolution of the query.

US 20050060295 A1; Gould et al.: Gould discloses a network data classifier that statistically classifies received data at wire-speed by examining, in part, the payloads of packets in which such data are disposed and without having a priori knowledge of the classification of the data. The network data classifier includes a feature extractor that extracts features from the packets it receives. Such features include, for example, textual or binary patterns within the data or profiling of the network traffic. The network data classifier further includes a statistical classifier that classifies the received data into one or more pre-defined categories using the numerical values representing the features extracted by the feature extractor. The statistical classifier may generate a probability distribution function for each of a multitude of classes for the received data. The data so classified are subsequently processed by a policy engine. Depending on the policies, different categories may be treated differently.

US 10394951 B1; Adeeko, Jr.; Michael Olufemi: Adeeko discloses a server that receives a rule-based digital document and automatically parses the document into individual rules. A grammar analysis is applied to parse each individual rule into different elements that are categorized into different element categories, and a metric count is generated for the rule based on a total number of identified elements in the different categories. Information from an evidence database is processed to determine a count of the elements that are satisfied by the evidence information. A score is then automatically generated based on the number of satisfied metrics.
In one embodiment, the grammar analysis may further determine types of supporting evidence that may be used to show compliance with the rule and basis types indicating sources of the supporting evidence. For example, in one embodiment, the supporting evidence and basis types may be automatically inferred from natural language processing of the rule even if not expressly stated. In another embodiment, the supporting evidence and basis types may be manually entered by an administrator via a user interface.

US 20060095968 A1; Portolani et al.: Portolani discloses an intrusion detection system (IDS) that is capable of identifying the source of traffic, filtering the traffic to classify it as either safe or suspect, and then applying sophisticated detection techniques such as stateful pattern recognition, protocol parsing, heuristic detection or anomaly detection, either singularly or in combination, based on the traffic type. In a network environment, each traffic source is provided with at least one IDS sensor that is dedicated to monitoring a specific type of traffic such as RPC, HTTP, SMTP, DNS, or others. Traffic from each traffic source is filtered to remove known safe traffic to improve efficiency and increase accuracy by keeping each IDS sensor focused on a specific traffic type.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHABUB S AHMED whose telephone number is (571)272-0364. The examiner can normally be reached on 9AM-5PM EST M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ali Shayanfar, can be reached on 571-270-1050.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAHABUB S AHMED/
Examiner, Art Unit 2434

/NOURA ZOUBAIR/
Primary Examiner, Art Unit 2434

Prosecution Timeline

Jul 09, 2024
Application Filed
Jul 17, 2024
Response after Non-Final Action
Feb 21, 2025
Preliminary Amendment
Feb 02, 2026
Non-Final Rejection — §101, §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591864
METHODS AND SYSTEMS FOR THE EFFICIENT TRANSFER OF ENTITIES ON A BLOCKCHAIN
2y 5m to grant Granted Mar 31, 2026
Patent 12574393
CYBER SECURITY SYSTEM UTILIZING INTERACTIONS BETWEEN DETECTED AND HYPOTHESIZE CYBER-INCIDENTS
2y 5m to grant Granted Mar 10, 2026
Patent 12574370
VERIFYING PARTY IDENTITIES FOR SECURE TRANSACTIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12563053
METHODS AND SYSTEMS FOR FRAUD DETECTION USING RELATIVE MOVEMENT OF FACIAL FEATURES
2y 5m to grant Granted Feb 24, 2026
Patent 12542662
APPARATUS AND METHOD FOR FEDERATED LEARNING BASED ON GROUP KEY
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
93%
With Interview (+7.8%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 289 resolved cases by this examiner. Grant probability derived from career allow rate.
