Prosecution Insights
Last updated: April 19, 2026
Application No. 18/761,204

MACHINE-LEARNING BASED SECURITY EVENT DETECTION

Non-Final OA: §101, §103, §112
Filed: Jul 01, 2024
Examiner: LOZA, JANICE JOMARIE
Art Unit: 3698
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Stripe, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 12% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 62%

Examiner Intelligence

Career Allow Rate: 12% (1 granted / 8 resolved; -39.5% vs TC avg). This examiner grants only 12% of cases.
Interview Lift: +50.0% among resolved cases with interview (i.e., the 12% baseline rises to 62% when an interview is conducted).
Typical Timeline: 2y 6m average prosecution; 32 applications currently pending.
Career History: 40 total applications across all art units.

Statute-Specific Performance

§101: 35.7% (-4.3% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 19.1% (-20.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 8 resolved cases

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/12/2024 is being considered by the examiner.

Status of the Claims

This is a non-final rejection prepared in response to U.S. Patent Application 18/761,204, filed on July 01, 2024. Claims 1-20 are pending.

Claim Rejections - 35 USC § 112

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 10 and 17 recite the limitation "detecting a security event based at least in part on the generated encodings, wherein the detecting comprises: … determining the security event based at least in part on at least some of the classifications and in response to detecting the security event, adjusting an event processing rate." It is unclear whether the "determining the security event…" step constitutes the detecting step, whether "detecting" and "determining" are separate operations, or something else. This inconsistency creates ambiguity as to when the detection occurs and what triggers the adjusting. Further, the limitation "based at least in part on at least some of the classifications" fails to specify which classifications, how many classifications, or how the classifications are used to determine the security event.

Claims 3 and 12 recite the limitation "wherein generating the encoding…". Independent claims 1 and 10 do not recite "generating an encoding…" or any equivalent step. Therefore, there is insufficient antecedent basis for this limitation in the claims. It is unclear to one of ordinary skill in the art which step of claims 1 and 10 is being further limited by this limitation.

Claims 7, 15 and 19 recite the limitation "wherein training the first ML model comprises…". Independent claims 1, 10 and 17 do not recite "training the first ML model…" or any equivalent step. It is unclear whether the training step is part of the method recited in claim 1. Therefore, it would be unclear to one of ordinary skill in the art which step of claims 1, 10 and 17 is being further limited by this limitation.

Claims 8 and 20 recite the limitation "wherein training the second ML model comprises…". Independent claims 1 and 17 do not recite "training the second ML model…" or any equivalent step. It is unclear whether the training step is part of the method recited in claim 1. Therefore, it would be unclear to one of ordinary skill in the art which step of claims 1 and 17 is being further limited by this limitation.

Claims 2, 4-6, 9, 11, 13-14, 16 and 18 are also rejected as they depend from one of claims 1, 10 and 17.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-9 are directed to a computer-implemented method (i.e., a process).
Claims 10-16 are directed to a system (i.e., a machine and manufacture). Claims 17-20 are directed to computer-storage media (i.e., a manufacture). Therefore, these claims fall within the four statutory categories of invention, and thus must be further analyzed at Step 2A to determine whether the claims are directed to a judicial exception (see MPEP 2106.03, subsection II).

Step 2A Prong One: Claim 1 recites (i.e., sets forth or describes) an abstract idea. More specifically, the following bolded claim elements recite abstract ideas while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a):

A computer-implemented method comprising: obtaining a plurality of event data items corresponding to a plurality of events; processing each event data item of the plurality of event data items using a first machine learning (ML) model to generate an encoding for each corresponding event data item; detecting a security event based at least in part on the generated encodings, wherein the detecting comprises: processing each encoding for each event data item using a second ML model to generate a classification of whether the event is fraudulent; and determining the security event based at least in part on at least some of the classifications; and in response to detecting the security event, adjusting an event processing rate.

Claim 1 recites a method for adjusting an event processing rate in response to detecting a security event. The claim achieves this by obtaining different event data, processing the event data using mathematical models, and detecting a security event based on the results of the processing. Claims 10 and 17 are substantially similar to claim 1; as such, claims 10 and 17 also recite an abstract idea. Specifically, but for the additional elements, the claim under its broadest reasonable interpretation recites limitations grouped within the "certain methods of organizing human activity" grouping of abstract ideas (i.e., commercial or legal interactions), and the "mental processes" and "mathematical concepts" groupings.

Step 2A Prong Two: Because the claim recites abstract ideas, the analysis proceeds to determine whether the claim recites additional elements that integrate the abstract ideas into a practical application. Here, the additional elements of one or more processors, a memory, a non-transitory computer-readable storage medium, a first machine learning (ML) model and a second ML model merely serve as tools to perform the abstract idea (MPEP § 2106.05(f)). Therefore, the claim as a whole fails to recite a practical application of the abstract ideas.

Step 2B: This step determines whether the claim as a whole amounts to significantly more than the exception itself. Evaluating additional elements to determine whether they amount to an inventive concept requires considering them both individually and in combination to ensure that they amount to significantly more than the judicial exception itself. Here, the additional elements, taken individually and in combination, do not result in the claim as a whole amounting to significantly more than the judicial exception. As discussed previously with respect to Step 2A, the additional elements merely serve as a tool to perform an abstract idea. Thus, there is no inventive concept in the claim, the claim is not eligible, and a rejection for lack of subject matter eligibility is warranted, concluding the eligibility analysis.
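To make the claim 1 language concrete, the following minimal sketch walks the recited pipeline end to end. Everything in it is an illustrative assumption rather than anything disclosed in the application: the two "models" are trivial stand-ins, the 0.8 decision boundary is arbitrary, and the any-flag trigger is only one of the many readings that "at least some of the classifications" would cover, which is precisely the ambiguity the §112(b) rejection above identifies.

```python
# A minimal, runnable sketch of the pipeline recited in claim 1.
# The "models" below are trivial stand-ins, not the applicant's models.
def encode(event):                # first ML model stand-in
    return sum(event) / len(event)

def classify(encoding):           # second ML model stand-in
    return encoding > 0.8         # illustrative decision boundary

def process_batch(events, processing_rate, slowdown=0.5):
    encodings = [encode(e) for e in events]    # "generate an encoding"
    flags = [classify(z) for z in encodings]   # "classification of whether the event is fraudulent"
    if any(flags):                             # one reading of "determining the security event"
        processing_rate *= slowdown            # "adjusting an event processing rate"
    return processing_rate

print(process_batch([[0.9, 0.95], [0.1, 0.2]], processing_rate=100.0))  # 50.0
```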
Dependent Claims: Claims 2-9, 11-16 and 18-20 have also been analyzed for subject matter eligibility. However, claims 2-9, 11-16 and 18-20 also fail to recite patent-eligible subject matter, for the following reasons.

Claims 2 and 11 recite the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "each event data item comprises a first set of features associated with the corresponding event." The claim further recites an abstract idea; it recites limitations grouped within the "certain methods of organizing human activity" and "mental processes" groupings of abstract ideas.

Claims 3 and 12 recite the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "generating the encoding for each corresponding event data item comprises: processing the first set of features using the first ML model to generate a second set of predicted features, wherein the first set of features and the second set of predicted features are mutually exclusive; and processing the first set of features and the second set of predicted features using the first ML model to generate the encoding." The claim further recites an abstract idea; it recites limitations grouped within the "certain methods of organizing human activity" grouping of abstract ideas (i.e., commercial or legal interactions), and the "mental processes" and "mathematical concepts" groupings. The non-bolded additional element of a first ML model fails to recite a practical application or significantly more than the abstract idea because it merely serves as a tool to perform the abstract idea (MPEP § 2106.05(f)). Further, the additional elements, taken individually and in combination, do not result in the claim as a whole amounting to significantly more than the judicial exception. Thus, there is no inventive concept in the claim, the claim is not eligible, and a rejection for lack of subject matter eligibility is warranted.

Claims 4, 13 and 18 recite the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "each event is associated with a timestamp, and wherein the classification of a subsequent event is based on the encoding generated for a prior event of the plurality of events based on timestamps associated with the corresponding event." The claim further recites an abstract idea; it recites limitations grouped within the "certain methods of organizing human activity" grouping of abstract ideas (i.e., commercial or legal interactions), and the "mental processes" and "mathematical concepts" groupings.
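The timestamp limitation of claims 4, 13 and 18 can be pictured as a simple ordered pass over the events. The sketch below is a hypothetical reading; the stand-in models and the averaging of prior and current encodings are assumptions made only to show the data flow.

```python
# Hypothetical sketch of claims 4/13/18: events carry timestamps, and the
# classification of a subsequent event conditions on the encoding of the
# prior event in timestamp order. Models and mixing rule are stand-ins.
def encode(event):
    return sum(event) / len(event)

def classify(encoding):
    return encoding > 0.8

def classify_in_time_order(stamped_events):
    """stamped_events: iterable of (timestamp, event) pairs, in any order."""
    flags, prev_encoding = [], 0.0
    for _, event in sorted(stamped_events, key=lambda te: te[0]):
        z = encode(event)
        # the subsequent event's classification uses the prior event's encoding
        flags.append(classify((z + prev_encoding) / 2))
        prev_encoding = z
    return flags

print(classify_in_time_order([(2, [0.9, 0.95]), (1, [0.85, 0.9])]))  # [False, True]
```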
Claims 5 and 14 recite the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "determining the security event comprises determining a percentage of some of the events that were classified as fraudulent." The claim further recites an abstract idea; it recites limitations grouped within the "certain methods of organizing human activity" grouping of abstract ideas (i.e., commercial or legal interactions), and the "mental processes" and "mathematical concepts" groupings.

Claim 6 recites the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "determining the security event comprises determining whether the percentage of the events that were classified as fraudulent is more than a threshold limit." The claim further recites an abstract idea; it recites limitations grouped within the "certain methods of organizing human activity" grouping of abstract ideas (i.e., commercial or legal interactions), and the "mental processes" and "mathematical concepts" groupings.

Claims 7, 15 and 19 recite the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "the first ML model is a generative model and wherein training the first ML model comprises: preparing an event dataset comprising of a plurality of training samples, wherein each training sample comprises a first set of training features and a second set of training features, and wherein each training sample is associated with a training timestamp and a training label indicating whether the training sample corresponds to a fraudulent event; processing the first set of training features using the first ML model to generate a second set of predicted training features; processing the first set of training features and the second set of predicted training features using the first ML model to generate a first encoding; processing the first set of training features and the second set of training features using the first ML model to generate a second encoding; and adjusting a plurality of parameters of the first ML model based on the first encoding and the second encoding." The non-bolded additional element of a first ML model fails to recite a practical application or significantly more than the abstract idea because it merely serves as a tool to perform the abstract idea (MPEP § 2106.05(f)). Further, the additional elements, taken individually and in combination, do not result in the claim as a whole amounting to significantly more than the judicial exception. Thus, there is no inventive concept in the claim, the claim is not eligible, and a rejection for lack of subject matter eligibility is warranted.

Claims 8 and 20 recite the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "the second ML is a classification model and wherein training the second ML model comprises: processing the first set of training features and the second set of predicted training features using the first ML model to generate a third encoding; processing the third encoding using a second ML model to generate a predicted classification indicating whether the training sample corresponds to a fraudulent event; and adjusting a plurality of parameters of the second ML model based on the predicted classification and the training label associated with the training sample." The non-bolded additional element of a first ML model fails to recite a practical application or significantly more than the abstract idea because it merely serves as a tool to perform the abstract idea (MPEP § 2106.05(f)). Further, the additional elements, taken individually and in combination, do not result in the claim as a whole amounting to significantly more than the judicial exception. Thus, there is no inventive concept in the claim, the claim is not eligible, and a rejection for lack of subject matter eligibility is warranted, concluding the eligibility analysis.
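The training scheme of claims 7 and 8 describes a concrete data flow even though the claims leave the models unspecified. The toy example below uses linear "models", a squared-encoding-gap loss, and a logistic classifier purely as assumptions to make that flow runnable; the application may train very differently.

```python
# Hypothetical, runnable sketch of the claims 7-8 training flow with toy
# linear models. Shapes, losses, and the learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_first, n_second, n_latent, lr = 4, 4, 3, 0.05
W_pred = 0.1 * rng.normal(size=(n_first, n_second))            # first ML model: feature predictor
W_enc = 0.1 * rng.normal(size=(n_first + n_second, n_latent))  # first ML model: encoder
w_clf = 0.1 * rng.normal(size=n_latent)                        # second ML model: classifier

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_step(x, y, label):
    """x: first set of training features; y: second set; label: fraud label."""
    global W_pred, w_clf
    y_hat = x @ W_pred                                # "second set of predicted training features"
    enc1 = np.concatenate([x, y_hat]) @ W_enc         # "first encoding" (predicted features)
    enc2 = np.concatenate([x, y]) @ W_enc             # "second encoding" (true features)
    # Claim 7: adjust first-model parameters based on the two encodings --
    # here, one gradient step shrinking ||enc1 - enc2||^2 through W_pred.
    d = enc1 - enc2
    B = W_enc[n_first:]                               # encoder rows applied to the second features
    W_pred = W_pred - lr * np.outer(x, 2.0 * (B @ d))
    # Claim 8: classify a "third encoding" and adjust the classifier from the
    # predicted classification and the training label (logistic-loss step).
    enc3 = np.concatenate([x, x @ W_pred]) @ W_enc    # "third encoding"
    p = sigmoid(enc3 @ w_clf)                         # "predicted classification"
    w_clf = w_clf - lr * (p - label) * enc3
    return float(d @ d), float(p)

x, y = rng.normal(size=n_first), rng.normal(size=n_second)
for _ in range(200):
    gap, p = train_step(x, y, label=1.0)
print(round(gap, 4), round(p, 3))  # encoding gap shrinks; p drifts toward the label
```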
Claims 9 and 16 recite the following bolded claim elements as abstract ideas, while the non-bolded claim elements recite additional elements according to MPEP 2106.04(a): "detecting the security event comprises: processing each encoding for each event data item using a second ML model to generate a respective vector representation; selecting a latent vector for each vector representation based on a distance between the respective vector representations and the selected latent vector; determining an entropy for each of the selected latent vectors; and determining the security event based at least in part on the entropy determined for at least one of the selected latent vectors." The claim further recites an abstract idea; it recites limitations grouped within the "certain methods of organizing human activity" grouping of abstract ideas (i.e., commercial or legal interactions), and the "mental processes" and "mathematical concepts" groupings.
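The claims 9/16 language is ambiguous about what the entropy is taken over. One plausible reading, sketched below under stated assumptions (a fixed three-entry codebook and an entropy over how events distribute across their nearest codebook entries), resembles vector quantization followed by an entropy test; none of these specifics come from the application.

```python
# Hypothetical reading of claims 9/16: snap each vector representation to
# the nearest "latent vector" in a codebook, then compute an entropy over
# the resulting assignments. Codebook and data are illustrative only.
import math

CODEBOOK = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # assumed "latent vectors"

def nearest_latent(v):
    """Select the latent vector closest to representation v (distance-based)."""
    return min(range(len(CODEBOOK)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(v, CODEBOOK[k])))

def assignment_entropy(vectors):
    """Entropy of the distribution of events over selected latent vectors."""
    counts = [0] * len(CODEBOOK)
    for v in vectors:
        counts[nearest_latent(v)] += 1
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

reps = [(0.1, 0.0), (0.9, 0.1), (0.1, 0.9), (0.95, 0.05)]
print(assignment_entropy(reps))  # 1.5 bits: events spread across all three codes
```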
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 9-11, 13 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Khatri (US 20250378310 A1) in view of Murphy (US 20250350628 A1).

Regarding claims 1, 10 and 17, Khatri discloses:

"obtaining a plurality of event data items corresponding to a plurality of events;" (abstract, obtaining a dataset of input samples, one of said input samples comprising a number of input data features. ¶0102, According to one or more example embodiments, the apparatus 100 is configured to obtain a training dataset of input samples, one of said input samples comprising a number of input data features…)

"processing each event data item of the plurality of event data items using a first machine learning (ML) model to generate an encoding for each corresponding event data item;" (abstract, applying said input samples to a machine learning system comprising a first machine learning model, or encoder, configured to output encoded samples… ¶0102, …apply input samples from a training data set, said input samples comprising a number of input data features to get encoded samples from a first machine learning model or encoder as output…)

"processing each encoding for each event data item using a second ML model to generate a classification of whether the event is fraudulent;" (abstract, applying said output encoded samples to a second machine learning model, or decoder, of said machine learning system. ¶0102, …applying said output encoded samples to a second machine learning model, or decoder, of said machine learning system, configured to produce reconstructed input samples from said encoded samples…)

Khatri further discloses: "one or more processors; and memory storing one or more programs configured to be executed by the one or more processors to perform operations" (¶0044, According to a fourth aspect, an apparatus comprises: [0045] at least one processor; [0046] at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus to perform the method), and "A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system, the one or more programs including instructions" (¶0073, According to a ninth aspect, a non-transitory computer-readable medium comprises program instructions stored thereon for causing a computer to perform the method according to the first aspect or the method…)

Khatri does not disclose, however Murphy teaches: "detecting a security event based at least in part on the generated encodings, wherein the detecting comprises: … determining the security event based at least in part on at least some of the classifications; and in response to detecting the security event, adjusting an event processing rate." (¶0073, When a potential threat is detected, AI/ML process 56 may generate an alert for cybersecurity analysts to investigate further or, in more advanced setups, trigger automated responses. These could include isolating compromised devices, blocking suspicious IP addresses, or throttling data transfers to prevent data loss. ¶0074, As discussed above, threat mitigation process 10 may include AI/ML process 56 (e.g., an artificial intelligence/machine learning process) that may be configured to process information (e.g., information 58), wherein examples of information 58 may include but are not limited to platform information (e.g., structured or unstructured content) that may be scanned to detect security events (e.g., access auditing; anomalies; authentication; denial of services; exploitation; malware; phishing; spamming; reconnaissance; and/or web attack) within a monitored computing platform (e.g., computing platform 60). ¶0464, Threat mitigation process 10 may train 2016 the agent (e.g., agent 300) to proactively monitor activity within a computing platform (e.g., computing platform 60) and generate an initial notification (e.g., initial notification 298) if a security event is detected based, at least in part, upon best practices defined via artificial intelligence (e.g., AI/ML process 56).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Khatri's teaching with Murphy's teaching. One of ordinary skill in the art would have been motivated to do so in order to reduce processing of fraudulent events and protect the system.

Furthermore, the claimed limitation "to…" in "processing each event data item of the plurality of event data items using a first machine learning (ML) model to generate an encoding for each corresponding event data item" and in "processing each encoding for each event data item using a second ML model to generate a classification of whether the event is fraudulent" consists of language disclosing an intended use, so it is considered but given no patentable weight (see MPEP 2111.05, MPEP 2114 and authorities cited therein).
The reference is provided for the purpose of compact prosecution.

Regarding claims 2 and 11, Khatri further discloses: "each event data item comprises a first set of features associated with the corresponding event." (¶0075, obtaining encoded samples by applying input samples from a data set, said input samples comprising a number of input data features as input to a first machine learning model, or encoder, of a machine learning system…) Further, the claimed limitation "… comprises a first set of features associated with the corresponding event" is non-functional material that does not serve to distinguish over the prior art.

Regarding claims 4, 13 and 18, Khatri further discloses: "each event is associated with a timestamp, and wherein the classification of a subsequent event is based on the encoding generated for a prior event of the plurality of events based on timestamps associated with the corresponding event." (¶0207, An illustrative and non-limitative list of flow-based features is given below: [0208] IP Addresses. [0209] Ports. [0210] Protocol. [0211] Packet Count: The number of packets exchanged in the flow. [0212] Byte Count: The total number of bytes exchanged in the flow. [0213] Start Time: The timestamp indicating when the flow started. [0214] End Time: The timestamp indicating when the flow ended. [0215] Duration: The time duration of the flow (End Time - Start Time).) Further, the claimed limitation "…associated with a timestamp" is non-functional material that does not serve to distinguish over the prior art. Also, the "…wherein" clause is given no patentable weight, as it does not recite any further structure in the system claim or impact how the steps of "processing…, detecting…, processing… and determining…" are performed in the method claim.

Regarding claims 9 and 16, Khatri further discloses: "detecting the security event comprises: processing each encoding for each event data item using a second ML model to generate a respective vector representation; selecting a latent vector for each vector representation based on a distance between the respective vector representations and the selected latent vector; determining an entropy for each of the selected latent vectors; and determining the security event based at least in part on the entropy determined for at least one of the selected latent vectors." (abstract, applying said output encoded samples to a second machine learning model, or decoder, of said machine learning system. ¶0035, determining a reconstruction loss based on a difference between the input samples and the reconstructed samples. ¶0105, Back to FIG. 1, the computing device CD further comprises an apparatus 200 configured to use the trained machine learning system MLS, AE to output a latent representation of the input data and the clustering module CLS to produce a plurality of separate and homogeneous clusters as output. ¶0110, The decoder DEC is configured to take the latent representation Z provided by the encoder ENC for a given input sample X and to output reconstructed input data x. Similar to the encoder ENC, the decoder DEC typically consists of hidden layers that transform the latent representation back into the original data space. ¶0111, Common loss functions used for autoencoders include mean squared error (MSE) or binary cross-entropy, depending on the nature of the input data. ¶0133, In a step 34, a Clustering Loss CLL is determined. ¶0145, In Equation (3) and as defined above, ai is the mean intra-cluster distance between the latent features of the encoded sample zi corresponding to each data point xi of the input dataset belonging to the cluster kl and the latent features of the rest of the data points within the same cluster kl.) Further, the claimed limitation "to…" in "processing each encoding for each event data item using a second ML model to generate a respective vector representation" consists of language disclosing an intended use, so it is considered but given no patentable weight (see MPEP 2111.05, MPEP 2114 and authorities cited therein). The reference is provided for the purpose of compact prosecution.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Khatri and Murphy as applied to claims 2 and 11 above, and in further view of Baker (US 11,887,367 B1).

Regarding claims 3 and 12, the combination of Khatri and Murphy does not disclose, however Baker discloses: "generating the encoding for each corresponding event data item comprises: processing the first set of features using the first ML model to generate a second set of predicted features, wherein the first set of features and the second set of predicted features are mutually exclusive;" (Col 5 lines 47-48, Method 100 may include a step 110 of receiving unlabeled digital data. Col 6 lines 15-26, In some embodiments, method 100 may include a step 120 of generating pseudo-labels for the unlabeled digital video data. A pseudo-label, as used herein, may refer to a digital marking, metadata, a tag, or other data provided in electronic form that indicates an attribute of digital video data (e.g., one or more digital video frames). In some embodiments, generating pseudo-labels for the unlabeled digital video data may include receiving labeled digital video data, training a machine learning model using the labeled digital video data, and/or generating at least one pseudo-label for the unlabeled digital data, for example as described below with respect to FIG. 2.) and "processing the first set of features and the second set of predicted features using the first ML model to generate the encoding." (Col 6 lines 25-27, In some embodiments, method 100 may include a step 130 of adding the at least one pseudo-label to the unlabeled digital data, thereby converting the unlabeled digital data into pseudo-labeled digital data. Col 6 lines 62-65, In some embodiments, method 100 may include a step 140 of further training the machine learning model or another machine learning model using the pseudo-labeled digital video data.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Khatri and Murphy with Baker's teaching. One of ordinary skill in the art would have been motivated to do so in order to improve robustness and increase accuracy. Further, the claimed limitation "to…" in "processing the first set of features using the first ML model to generate a second set of predicted features…" and in "processing the first set of features and the second set of predicted features using the first ML model to generate the encoding" consists of language disclosing an intended use, so it is considered but given no patentable weight (see MPEP 2111.05, MPEP 2114 and authorities cited therein). The reference is provided for the purpose of compact prosecution.
Claims 5, 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Khatri and Murphy as applied to claims 1 and 10 above, and in further view of Magi (US 9842334 B1).

Regarding claims 5 and 14, the combination of Khatri and Murphy does not disclose, however Magi discloses: "determining the security event comprises determining a percentage of some of the events that were classified as fraudulent." (col 8 lines 12-15, Processor 36 can then determine the riskiness of the geographical area 50(5) by computing the amount/percentage of previous transactions that processor 36 assumed to be indicative of fraud or risk in the geographical area 50(5). Col 8 lines 64-67, Processor 36 can then determine the riskiness of the geographical area 50(5) by computing the percentage of previous transactions that are assumed to be indicative of fraud or risk in the geographical area 50(5).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Khatri and Murphy with Magi's teaching. One of ordinary skill in the art would have been motivated to do so in order to provide a metric that enables reliable detection of fraudulent events.

Regarding claim 6, the combination of Khatri, Murphy and Magi further discloses: "determining the security event comprises determining whether the percentage of the events that were classified as fraudulent is more than a threshold limit." (Magi col 9 lines 1-4, [t]he percentage of risky transactions in this example is sixty percent. It should be understood that a percentage in excess of a threshold such as ten percent can deem the area as a high risk area. Col 9 lines 34-38, In the event that the risk score is included in the field, the processor 36 can determine that the authentication result is equivalent to a failed authentication in response to the risk score in the field not exceeding a threshold.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Khatri, Murphy and Magi with Magi's additional teaching. One of ordinary skill in the art would have been motivated to do so in order to provide a reliable mechanism for triggering security responses.
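Magi's percentage-and-threshold logic maps to a one-line computation. In the sketch below, the 10% default threshold echoes Magi's example (60% risky transactions against a 10% threshold) and is not taken from the claims.

```python
# Hypothetical sketch of the claims 5-6 logic as mapped to Magi: compute the
# percentage of events classified as fraudulent and compare it to a threshold.
# The 10% default echoes Magi's example threshold; it is not from the claims.
def security_event(flags: list[bool], threshold: float = 0.10) -> bool:
    fraud_percentage = sum(flags) / len(flags)  # claim 5: "determining a percentage"
    return fraud_percentage > threshold         # claim 6: "more than a threshold limit"

# Magi's example figures: 60% risky transactions exceeds a 10% threshold.
print(security_event([True, True, True, False, False]))  # 0.6 > 0.1 -> True
```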
Claims 7-8, 15 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Khatri and Murphy as applied to claims 1, 10 and 17 above, in further view of Baker (US 11,887,367 B1), and in further view of Krishnan (US 20220131975 A1).

Regarding claims 7, 15 and 19, the combination of Khatri and Murphy further discloses: "the first ML model is a generative model and wherein…" (Khatri ¶0095, In the following, an unsupervised approach is proposed based on a neural network architecture called autoencoder, to solve a co-optimization problem which simultaneously involves minimizing the input data reconstruction loss while encouraging the creation of homogeneous and separate clusters. ¶0100, The computing device CD comprises a machine learning system MLS, for instance an auto-encoder AE. Indeed, an auto-encoder is a type of artificial neural network architecture that is well suited to unsupervised learning, particularly to dimensionality reduction, feature learning, and data compression. ¶0109, The encoder ENC implements a first machine learning model and the decoder DEC a second machine learning model, that are both neural networks.) and "adjusting a plurality of parameters of the first ML model based on the first encoding and the second encoding" (Khatri ¶0116, MLPs are trained using techniques like backpropagation, where the neural network adjusts the weights of the connections in order to minimize a difference between its predictions and a target output, here between the reconstructed data and the original input data. They are commonly used for tasks such as classification and regression in machine learning where the input features are typically represented as vectors.)

Murphy further teaches: "training the first ML model comprises: preparing an event dataset comprising of a plurality of training samples, wherein each training sample comprises a first set of training features and a second set of training features, and wherein each training sample is associated with a training timestamp and a training label indicating whether the training sample corresponds to a fraudulent event;" (Murphy ¶0070, Such an AI/ML process (e.g., AI/ML process 56) may begin with the collection of vast amounts of data from multiple sources within the computer network. This may include logs from firewalls, intrusion detection and prevention systems (IDS/IPS), endpoints, applications, servers, and user activity. This raw data may then be preprocessed to clean and normalize it, followed by feature extraction, wherein relevant characteristics may be identified (e.g., access times, login frequencies, the volume and destination of data transfers, protocol usage, and command sequences). ¶0071, Machine learning models may be trained using this structured data. In supervised learning, the system is fed labeled data that indicates which actions are benign and which are malicious, allowing AI/ML process 56 to learn how to distinguish between them. ¶0468, below is an example of such a JSON portion: [0469] { [0470] "timestamp": 1676573073400, [0471] "formatVersion": 1, [0472] "webaclld":…) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Khatri and Murphy with Murphy's additional teaching. One of ordinary skill in the art would have been motivated to do so in order to enable the model to learn patterns over time and different trends.

The combination of Khatri and Murphy does not disclose, however Baker teaches: "processing the first set of training features using the first ML model to generate a second set of predicted training features;" (Col 5 lines 47-48, Method 100 may include a step 110 of receiving unlabeled digital data. Col 6 lines 15-26, In some embodiments, method 100 may include a step 120 of generating pseudo-labels for the unlabeled digital video data. A pseudo-label, as used herein, may refer to a digital marking, metadata, a tag, or other data provided in electronic form that indicates an attribute of digital video data (e.g., one or more digital video frames). In some embodiments, generating pseudo-labels for the unlabeled digital video data may include receiving labeled digital video data, training a machine learning model using the labeled digital video data, and/or generating at least one pseudo-label for the unlabeled digital data, for example as described below with respect to FIG. 2.) and "processing the first set of training features and the second set of predicted training features using the first ML model to generate a first encoding;" (Col 6 lines 25-27, In some embodiments, method 100 may include a step 130 of adding the at least one pseudo-label to the unlabeled digital data, thereby converting the unlabeled digital data into pseudo-labeled digital data. Col 6 lines 62-65, In some embodiments, method 100 may include a step 140 of further training the machine learning model or another machine learning model using the pseudo-labeled digital video data.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Khatri and Murphy with Baker's teaching. One of ordinary skill in the art would have been motivated to do so in order to improve robustness and increase accuracy.

The combination of Khatri, Murphy and Baker does not disclose, however Krishnan teaches: "processing the first set of training features and the second set of training features using the first ML model to generate a second encoding;" (¶0011, The call features and metadata features are combined and input into a trained machine learning (ML) model, which is trained to receive such combined features, and based on such combined features, the ML model is trained to generate an output… ¶0036, The method 200 proceeds to step 212, at which the method 200 inputs the combined features from step 210 to trained ML model(s). In some embodiments, the FEM 128 inputs the combined features to the ML engine 132, and at step 214, the method generates, using the ML model(s) of the ML engine 132, an output indicating customer satisfaction.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Khatri, Murphy and Baker with Krishnan's teaching. One of ordinary skill in the art would have been motivated to do so in order to improve the performance and accuracy of the ML model.

Further, the claimed limitation "…is a generative model" is non-functional material that does not serve to distinguish over the prior art. Also, the "…wherein" clause is given no patentable weight, as it does not recite any further structure in the system claim or impact how the steps of "processing…, detecting…, processing… and determining…" are performed in the method claim.

Regarding claims 8 and 20, the combination of Khatri, Murphy, Baker and Krishnan further discloses: "the second ML is a classification model and wherein training the second ML model comprises:" (Khatri ¶0008, applying said encoded samples to a second machine learning model, or decoder, of said machine learning system… ¶0109, The encoder ENC implements a first machine learning model and the decoder DEC a second machine learning model. ¶0115, In the following examples, a fully-connected autoencoder is used whose encoder and decoder are Multilayer Perceptrons (MLPs); an MLP is a type of artificial neural network composed of multiple layers of nodes (neurons) arranged in a feedforward manner.) "processing the third encoding using a second ML model to generate a predicted classification indicating whether the training sample corresponds to a fraudulent event;" (Khatri abstract, applying said output encoded samples to a second machine learning model, or decoder, of said machine learning system) and "adjusting a plurality of parameters of the second ML model based on the predicted classification and the training label associated with the training sample." (Khatri ¶0116, MLPs are trained using techniques like backpropagation, where the neural network adjusts the weights of the connections in order to minimize a difference between its predictions and a target output, here between the reconstructed data and the original input data. They are commonly used for tasks such as classification and regression in machine learning where the input features are typically represented as vectors.)

Baker further teaches: "processing the first set of training features and the second set of predicted training features using the first ML model to generate a third encoding;" (col 6 lines 25-27, In some embodiments, method 100 may include a step 130 of adding the at least one pseudo-label to the unlabeled digital data, thereby converting the unlabeled digital data into pseudo-labeled digital data. Col 6 lines 62-65, In some embodiments, method 100 may include a step 140 of further training the machine learning model or another machine learning model using the pseudo-labeled digital video data.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Khatri, Murphy, Baker and Krishnan with Baker's teaching. One of ordinary skill in the art would have been motivated to do so in order to improve robustness and increase accuracy.

Further, the claimed limitation "…is a classification model" is non-functional material that does not serve to distinguish over the prior art. Also, the "…wherein" clause is given no patentable weight, as it does not recite any further structure in the system claim or impact how the steps of "processing…, detecting…, processing… and determining…" are performed in the method claim.
Conclusion

The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure:

US 20240203604 A1 to Singh discloses: In implementations of systems for estimating effects with latent representations, a computing device implements an estimation system to receive input data via a network describing interactions of client devices included in a group of client devices. The estimation system generates a first latent vector representation of a first segment of the client devices and a second latent vector representation of a second segment of the client devices using an encoder of a machine learning model. A change vector is computed based on a difference between the first latent vector representation and the second latent vector representation in a latent space of the machine learning model. The estimation system generates an indication of an effect of a treatment on a third segment of the client devices based on the change vector using a decoder of the machine learning model.

US 11797999 B1 to Mantin discloses: The transaction data or the historical transaction data comprises associated merchant identifiers, payment identifiers, timestamps of the transactions, statuses of the transactions and error codes associated with the transactions. The features comprise number of transactions, number of distinct payment identifiers, deltas between timestamps of consecutive transactions, number of transactions associated with a particular error code, or number of distinct payment identifiers whose associated transactions are associated with the particular error code. The classification model comprises a rule-based decision tree, a random forest, a logistic regression model, a support vector machine, a neural network, a gradient-boosted tree classifier or a Gaussian Naive Bayes classifier.

US 20250165864 A1 to Ghosh discloses: Methods and systems for re-training a Machine Learning (ML) model using predicted features from a training dataset are disclosed. A method performed by a server system includes accessing a training feature set and a testing feature set from a database. In response to identifying an inclusion of at least one new feature in the testing feature set, the method includes training a surrogate ML model to predict a value for the new feature based on the testing feature set and determining, by the surrogate ML model, a predicted value for the new feature for each training data sample in a training dataset based on the training feature set. The method further includes generating a new training feature set for each training data sample based on the predicted value and the training feature set. The method includes re-training the ML model based on the new training feature set for each data sample.
US 20250272553 A1 to Bahrami discloses: Operations may include identifying features corresponding to a dataset. An embedding for each feature may be obtained using a pretrained generative artificial intelligence model. Pair comparisons of the embeddings may be generated. An encoded dataset may be generated by applying, to the pair comparisons, weights computed using the pretrained generative artificial intelligence model. The weights may indicate correlation between features in the pair comparisons.

US 20240420010 A1 to Cirulis discloses: An example operation may include one or more of querying data of a merchant read from a point of sale (POS) system of the merchant and converting the data into an encoding, executing a machine learning model on the input encoding to generate a vector that comprises vectorized values corresponding to latent features of the merchant embedded within slots of the vector, respectively, generating an entry comprising an identifier of the merchant, context of the merchant, and the generated vector, and storing the entry in the feature store.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANICE LOZA, whose telephone number is (571) 270-3979. The examiner can normally be reached Monday - Friday, 7:30am - 5:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patrick McAtee, can be reached at (571) 272-7575. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.L./ Examiner, Art Unit 3698
/STEVEN S KIM/ Primary Examiner, Art Unit 3698

Prosecution Timeline

Jul 01, 2024: Application Filed
Oct 16, 2024: Response after Non-Final Action
Jan 21, 2026: Non-Final Rejection under §101, §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12387262: LOCALIZATION CONTROL FOR NON-FUNGIBLE TOKENS (NFTS) VIA TRANSFER BY CONTAINERIZED DATA STRUCTURES
Granted Aug 12, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 12%
With Interview: 62% (+50.0%)
Median Time to Grant: 2y 6m
PTA Risk: Low

Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month