Prosecution Insights
Last updated: April 19, 2026
Application No. 18/772,021

STREAM-ADAPTABLE REGULARIZATION FOR MODELS

Status: Non-Final OA (§101, §103, §112, Double Patenting)
Filed: Jul 12, 2024
Examiner: ALI, AFAQ
Art Unit: 2434
Tech Center: 2400 (Computer Networks)
Assignee: Capital One Services LLC
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average; 119 granted / 132 resolved; +32.2% vs Tech Center avg)
Interview Lift: +12.2% (moderate), comparing resolved cases with and without an interview
Typical Timeline: 2y 7m avg prosecution; 24 applications currently pending
Career History: 156 total applications across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 132 resolved cases.

Office Action

Rejections: §101, §103, §112, Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

Claims 1-20 are pending.

Priority

This application claims no priority. Therefore, the effective filing date of this application is 07/12/2024.

Drawings

Applicants’ drawings filed on 07/12/2024 have been inspected and are in compliance with MPEP 608.02.

Specification

The specification filed on 07/12/2024 is acceptable for examination proceedings.

Double Patenting

No double patenting rejection is required at the time of this Office action.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1-4, 6, 7, 10, 12, 13, and 15 recite the term “complete”. It is unclear what is meant by the term “complete” with regard to “first complete” and “second complete”. Examiner suggests omitting the term “complete” from the claim language. Claims 3-11 and 13-20 depend on claims 2 and 12 and therefore also inherit the rejection.

Claims 4, 18, and 19 recite the limitation “broader data”. It is unclear what is meant by the term “broader”. For the purpose of examination, Examiner is interpreting this limitation as simply “data”. Examiner suggests omitting the term “broader”. Appropriate correction is required.

Claim 9 recites the limitation "the encoded representation". There is insufficient antecedent basis for this limitation in the claim.
Furthermore, the claim recites “a first encoded representation” and “a second encoded representation”. It is unclear whether these representations are the same as the “first complete encoded representation” and “second complete encoded representation”. For the purpose of examination, Examiner is interpreting this limitation as “an encoded representation is a first encoded representation different from the first complete encoded representation … generating a second encoded representation different from the second complete encoded representation”. Appropriate correction is required. Claim 11 depends on claim 9 and therefore also inherits the rejection.

Claim 14 recites the limitation “inputting filtered data into the encoder model comprises”. However, claim 12 already recites “inputting, into the encoder model after the updating of the encoder model, filtered data”. For the purpose of examination, Examiner is interpreting this limitation as “inputting the filtered data into the encoder model comprises”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites a system for generating dense vector representations of users based on filtered event stream data to detect malicious activity, the system comprising one or more memory devices programmed with instructions that, when executed by one or more processors, cause operations comprising:

generating a first complete dense vector for a first user by inputting, into an encoder model, first unrestricted data provided by a first unrestricted event stream and restricted data provided by a restricted event stream;

generating a second complete dense vector for a second user by inputting, into the encoder model, second unrestricted data provided by a second unrestricted event stream;

generating an unrestricted dense vector by inputting, into the encoder model, the first unrestricted data without inputting the restricted data into the encoder model to generate the unrestricted dense vector;

evaluating a loss function value by (i) increasing the loss function value based on a first similarity between the first complete dense vector and the unrestricted dense vector and (ii) decreasing the loss function value based on a second similarity between the first complete dense vector for the first user and the second complete dense vector for the second user;

updating the encoder model by backpropagating the loss function value to update weights of the encoder model;

generating a candidate reference vector mapped to the first user by inputting, into the encoder model after the updating of the weights, filtered data comprising new events from the first unrestricted event stream without data from the restricted event stream; and

generating a malicious activity indicator by providing the candidate reference vector to a prediction model.
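For context, the loss-evaluation step recited in the claim resembles a contrastive objective: it is increased by a term tied to the same user's complete and filtered views, and decreased by a term tied to two different users. A minimal sketch follows; all function names are hypothetical, and Euclidean latent-space distance is assumed as the "similarity" measure per the latent-space-distance language of claims 6 and 7, not confirmed by the claim itself:

```python
import math

def latent_distance(u, v):
    # Euclidean distance between two dense vectors in the latent space
    # (an assumed metric; the claim only says "similarity").
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def claimed_loss(first_complete, unrestricted, second_complete):
    # (i) increase the loss based on the distance between the first user's
    #     complete vector and the unrestricted vector (training then pulls
    #     the filtered view toward the complete view), and
    # (ii) decrease the loss based on the distance between the two users'
    #      complete vectors (training then pushes different users apart).
    return latent_distance(first_complete, unrestricted) - latent_distance(
        first_complete, second_complete
    )
```

Backpropagating such a loss through the encoder, as the claim's updating step recites, would reduce term (i) and grow term (ii) over training iterations.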
The limitation of a system for generating dense vector representations of users based on filtered event stream data to detect malicious activity, the system comprising one or more memory devices programmed with instructions that, when executed by one or more processors, cause operations comprising, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually generate dense vector representations of users based on filtered event stream data to detect malicious activity, without the need for a processor or memory.

The limitation of generating a first complete dense vector for a first user by inputting, into an encoder model, first unrestricted data provided by a first unrestricted event stream and restricted data provided by a restricted event stream, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually generate a first complete dense vector for a first user by inputting, into an encoder model, first unrestricted data provided by a first unrestricted event stream and restricted data provided by a restricted event stream.

The limitation of generating a second complete dense vector for a second user by inputting, into the encoder model, second unrestricted data provided by a second unrestricted event stream, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually generate a second complete dense vector for a second user by inputting, into the encoder model, second unrestricted data provided by a second unrestricted event stream.

The limitation of generating an unrestricted dense vector by inputting, into the encoder model, the first unrestricted data without inputting the restricted data into the encoder model to generate the unrestricted dense vector, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually generate an unrestricted dense vector by inputting, into the encoder model, the first unrestricted data without inputting the restricted data into the encoder model to generate the unrestricted dense vector.

The limitation of evaluating a loss function value by (i) increasing the loss function value based on a first similarity between the first complete dense vector and the unrestricted dense vector and (ii) decreasing the loss function value based on a second similarity between the first complete dense vector for the first user and the second complete dense vector for the second user, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually evaluate a loss function value by increasing and decreasing the loss function value based on a similarity.

The limitation of updating the encoder model by backpropagating the loss function value to update weights of the encoder model, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually update an encoder model by backpropagating the loss function value to update weights of the encoder model.

The limitation of generating a candidate reference vector mapped to the first user by inputting, into the encoder model after the updating of the weights, filtered data comprising new events from the first unrestricted event stream without data from the restricted event stream, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually generate a candidate reference vector mapped to the first user by inputting, into the encoder model after the updating of the weights, filtered data comprising new events.

The limitation of generating a malicious activity indicator by providing the candidate reference vector to a prediction model, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually generate a malicious activity indicator by providing the candidate reference vector to a prediction model.

This judicial exception is not integrated into a practical application. The claim recites the limitation “generating a malicious activity indicator by providing the candidate reference vector to a prediction model”. This limitation is used to generally generate a malicious activity indicator. The limitation does not place any limit on the purpose or outcome of generating a malicious activity indicator. Merely generating an indicator does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. In particular, the claim recites only one additional element, a “system comprising one or more memory devices programmed with instructions that, when executed by one or more processors, cause operations”, recited at a high level of generality (i.e., as a generic processor implementing the system) such that it amounts to no more than mere instructions to apply the exception using a generic processor. Mere instructions to apply an exception using a generic processor cannot provide an inventive concept. The claim is not patent eligible.

Claim 2 is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. Furthermore, claim 2 recites a method that performs features similar to those of claim 1. Therefore, claim 2 is rejected in a similar manner as in the rejection of claim 1.

Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the restricted data is first restricted data, wherein the restricted event stream is a first restricted event stream, further comprising: obtaining second restricted data provided by a second restricted event stream, wherein inputting the second unrestricted data into the encoder model comprises inputting the second unrestricted data and the second restricted data into the encoder model; generating a second unrestricted encoded representation by inputting, into the encoder model, the second unrestricted data without inputting the second restricted data into the encoder model to generate the second unrestricted encoded representation; and decreasing the loss function value based on a similarity between the first complete encoded representation and the second unrestricted encoded representation. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually obtain second restricted data provided by a second restricted event stream, generate a second unrestricted encoded representation by inputting, into the encoder model, the second unrestricted data, and decrease the loss function value based on a similarity between the first complete encoded representation and the second unrestricted encoded representation.

Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein generating the first complete encoded representation comprises randomly selecting a portion of broader data for use as the first unrestricted data. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually generate the first complete encoded representation by randomly selecting a portion of broader data.

Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites selecting event data provided by the restricted event stream within a time range defined by the randomly selected portion for use as the restricted data. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually select event data within a time range defined by the randomly selected portion for use as the restricted data.

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites determining the first similarity based on a latent space distance between the first complete encoded representation and the unrestricted encoded representation, wherein increasing the loss function value comprises increasing the loss function value based on the latent space distance. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine the first similarity based on a latent space distance between the first complete encoded representation and the unrestricted encoded representation and increase the loss function value based on the latent space distance.

Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites determining the second similarity between the first complete encoded representation and the second complete encoded representation based on a latent space distance between the first complete encoded representation and the second complete encoded representation, wherein decreasing the loss function value based on the second similarity between the first complete encoded representation and the second complete encoded representation comprises decreasing the loss function value based on the latent space distance. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine the second similarity between the first complete encoded representation and the second complete encoded representation based on a latent space distance between the two representations and decrease the loss function value based on the latent space distance.

Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites obtaining data from a plurality of event streams comprising the first unrestricted event stream and the restricted event stream; and determining that a candidate event stream of the plurality of event streams is the restricted event stream based on an identifier stored in a record provided by the candidate event stream. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually obtain data from a plurality of event streams comprising the first unrestricted event stream and the restricted event stream and determine that a candidate event stream of the plurality of event streams is the restricted event stream based on an identifier stored in a record provided by the candidate event stream.

Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the encoded representation is a first encoded representation, further comprising: storing the first encoded representation in a database in association with a first user; generating a second encoded representation by inputting, into the encoder model, filtered data comprising the additional data and data from the restricted data; storing the second encoded representation in the database in association with the first user; and obtaining a request identifying the first user, wherein the request comprises an indicator of a request source, wherein providing the first encoded representation to the prediction model comprises selecting the first encoded representation in lieu of the second encoded representation to provide to the prediction model based on the indicator of the request source. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually store the first encoded representation in a database in association with a first user, generate the second encoded representation by inputting, into the encoder model, filtered data, store the second encoded representation in the database in association with the first user, and obtain a request identifying the first user, wherein the request comprises an indicator of a request source, wherein providing the first encoded representation to the prediction model comprises selecting the first encoded representation.
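As an aside, the selection step recited in claim 9 (two stored representations per user, with the one provided to the prediction model chosen from the request-source indicator) can be sketched as follows. All names and the allow-list convention are hypothetical illustrations, not claim language:

```python
# Hypothetical sketch of claim 9's selection step: each user has a
# representation built from filtered data only and one built with the
# restricted event stream included.
representations = {}  # user_id -> {"filtered": vec, "complete": vec}

def store_representations(user_id, filtered_vec, complete_vec):
    # Store both encoded representations in association with the user.
    representations[user_id] = {"filtered": filtered_vec, "complete": complete_vec}

def select_for_request(user_id, request_source):
    # Sources not cleared for restricted data (per a hypothetical
    # allow-list) receive the representation generated without the
    # restricted event stream, in lieu of the complete one.
    restricted_ok = {"internal-risk"}
    key = "complete" if request_source in restricted_ok else "filtered"
    return representations[user_id][key]
```

The point of the sketch is only that the request-source indicator, not the requester's payload, drives which stored representation reaches the prediction model.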
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites the unrestricted encoded representation is a first unrestricted encoded representation, the restricted event stream is a second restricted event stream, the loss function value is a first loss function value, the candidate encoded representation is a first encoded representation, further comprising: generating the first complete encoded representation comprises inputting, into the encoder model, second restricted data provided by the second restricted event stream; generating the first unrestricted encoded representation comprises inputting, into the encoder model, the second restricted data; generating a second unrestricted encoded representation by inputting, into the encoder model, the first unrestricted data and the second unrestricted data without inputting the second restricted data; increasing a second loss function value based on a similarity between the first complete encoded representation and the second unrestricted encoded representation; determining second weights for the encoder model by updating the encoder model based on the second loss function value; and generating a second encoded representation by inputting, into the encoder model configured with the second weights, filtered data comprising additional data from the first unrestricted event stream and without data from the second restricted data. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually generate the first complete encoded representation, generate the first unrestricted encoded representation, generate the second unrestricted encoded representation, increase a second loss function value based on a similarity between the first complete encoded representation and the second unrestricted encoded representation, determine second weights for the encoder model, and generate a second encoded representation by inputting, into the encoder model configured with the second weights, filtered data.

Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites obtaining a query indicating a user associated with the first unrestricted event stream, wherein the query is in association with a user account, wherein generating the indicator comprises selecting the first encoded representation in lieu of the second encoded representation based on an identifier associated with the query. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually obtain a query indicating a user associated with the first unrestricted event stream, wherein the query is in association with a user account.

Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Furthermore, this claim recites features similar to those of claims 1 and 2. Therefore, claim 12 is rejected in a similar manner as in the rejection of claims 1 and 2.

Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Furthermore, this claim recites features similar to those of claim 3. Therefore, claim 13 is rejected in a similar manner as in the rejection of claim 3.

Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites the operations further comprising obtaining a request identifying a first user associated with the first unrestricted data stream, wherein: the request is associated with an indicator of a request source, and inputting filtered data into the encoder model comprises determining that the filtered data should not comprise the data from the restricted data stream based on the indicator of the request source. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually obtain a request identifying a first user associated with the first unrestricted data stream, the request associated with an indicator, and determine that the filtered data should not comprise the data from the restricted data stream based on the indicator.

Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites the operations further comprising determining the similarity between the first complete encoded representation and the unrestricted encoded representation by computing a cosine similarity between the first complete encoded representation and the unrestricted encoded representation. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine the similarity by computing a cosine similarity.

Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein generating the indicator based on the encoded representation comprises providing the encoded representation to a transformer neural network model. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually provide the encoded representation to a transformer neural network model.

Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites determining that a candidate data stream comprises a first identifier; and assigning the candidate data stream as the restricted data stream based on a detected match between the first identifier and a target identifier indicated in a database. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that a candidate data stream comprises a first identifier and assign the candidate data stream as the restricted data stream.

Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites filtering broader data provided by the first unrestricted data stream with a set of filter criteria to determine the first unrestricted data. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually filter broader data provided by the first unrestricted data stream with a set of filter criteria to determine the first unrestricted data.

Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites randomly selecting a portion of broader data comprising the first unrestricted data and the restricted data; and determining at least one of the first unrestricted data or the restricted data based on the randomly selected portion. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually randomly select a portion of broader data comprising the first unrestricted data and the restricted data, and determine at least one of the first unrestricted data or the restricted data based on the randomly selected portion.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the restricted data stream indicates user interactions with a web application. Therefore, this claim, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that the restricted data stream indicates user interactions with a web application.

The dependent claims 3-11 and 13-20 are directed to abstract ideas and do not include additional elements that are sufficient to amount to significantly more than the judicial exception. This judicial exception is not integrated into a practical application. Therefore, the claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 2, 4-8, 12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over YOON (US-20250384346-A1) based on its priority to U.S. Provisional Application 63/660,448 having a filing date of Jun. 14, 2024, in view of PODGORNY (US-20210133581-A1), hereinafter YOON-PODGORNY. Regarding claim 2, YOON teaches “A method comprising: generating a first complete encoded representation by inputting, into an encoder model, first unrestricted data provided by a first unrestricted … stream and restricted data provided by a restricted … stream; ([YOON, para. 0084] “The computing system can receive a query based on the high-dimensionality embeddings. This query can originate from various sources, which can include a user interacting with an application, another computational process, and/or an automated system seeking information or performing a task. The query can be expressed in various forms, which can include natural language text, an image, an audio snippet, and/or another high-dimensionality data format.”) ([YOON, para. 0106, fig. 2] “FIG. 2 shows distinct loss functions that can be used within a dimensionality reduction model framework. In some embodiments, the loss functions can be used to train the dimensionality reduction model through unsupervised learning operations. FIG. 
2 shows representations of embeddings 200 which comprise a similar embedding 202, a given embedding 212, and a random embedding 222. Each of the embeddings can be associated with a plurality of lower-dimensionality embeddings that have fewer dimensions. ”) ([YOON, para. 0107, fig. 2] “Further, the given embedding 212 can be associated with an embedding 214 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) ([YOON, para. 0160] “A data-to-sequence model 11-1 can process data from input modality 10-1 to project the data into a format compatible with input sequence 8 (e.g., one or more vectors dimensioned according to the dimensions of input sequence 8)”) ([YOON, para. 0079] “Text segments, including words, sentences, and/or entire documents, can be encoded into high-dimensionality word embeddings and/or document embeddings, capturing semantic relationships.”) [Examiner’s note: Examiner is interpreting the given embedding 212 as the first complete encoded representation. The given embedding includes embedding 202 that is a subset of embedding 212 and has fewer dimensions, embedding 212 is similar to the first complete representation comprising unrestricted data (embedding 202) and restricted data (data not in embedding 202).] generating a second complete encoded representation by inputting, into the encoder model, second unrestricted data provided by a second unrestricted event stream; ([YOON, para. 0108] “the random embedding 222 can be associated with an embedding 224 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) [Examiner’s note: Examiner is interpreting the random embedding 222 as the second complete representation. 
The random embedding includes a subset of the given embedding and therefore includes unrestricted data] generating an unrestricted encoded representation by inputting, into the encoder model, the first unrestricted data without inputting the restricted data into the encoder model to generate the unrestricted encoded representation; ([YOON, para. 0106] “similar embedding 202 can be associated with an embedding 204 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) [Examiner’s note: Examiner is interpreting similar embedding 202 as the unrestricted representation. The embedding 202 has fewer dimensions and is a subset; therefore, it does not include the restricted data.] evaluating a loss function value by (i) increasing the loss function value based on a first similarity between the first complete encoded representation and the unrestricted encoded representation and (ii) decreasing the loss function value based on a second similarity between the first complete encoded representation and the second complete encoded representation; ([YOON, para. 0128] “At 508, the method 500 can include determining a loss based on the amount of similarity between the high-dimensionality training embeddings and the low-dimensionality training embeddings. For example, over a plurality of iterations, the machine-learning computing system 110 can determine a loss based on the amount of similarity between the high-dimensionality training embeddings and the low-dimensionality training embeddings.”) ([YOON, para. 0129] “At 510, the method 500 can include modifying based on the loss, a weighting of parameters of the dimensionality reduction model. A weighting of the parameters can be modified to minimize the loss. 
For example, the machine-learning computing system 110 can modify a plurality of weights of a plurality of parameters of the dimensionality reduction model such that the weights of the plurality of parameters that contribute to reducing the loss (e.g., the parameters that increase the accuracy of the dimensionality reduction model generating training output that is accurate) are increased and/or the weights of the plurality of parameters that contribute to increasing the loss (e.g., the parameters that decrease the accuracy of the dimensionality reduction model generating training output that is accurate) are decreased. The plurality of weights of the plurality of parameters can be modified until some threshold loss (e.g., a minimized loss) that corresponds to a high accuracy of the training output is exceeded.”) updating weights of the encoder model based on the loss function value; ([YOON, para. 0129] “The plurality of weights of the plurality of parameters can be modified until some threshold loss (e.g., a minimized loss) that corresponds to a high accuracy of the training output is exceeded.”) generating a candidate encoded representation by inputting, into the encoder model after the updating of the weights, filtered data comprising additional data from the first unrestricted event stream without data from the restricted event stream; and ([YOON, para. 0078] “By iteratively adjusting its internal parameters, the model can be configured and/or trained to perform the transformations that result in adapted dimensionality training embeddings that are highly similar to their original high-dimensionality counterparts, thus minimizing the loss. This iterative adjustment process, can be performed via optimization algorithms”) ([YOON, para. 
0112] “The top-k loss function can measure how well the set of the top-k most similar embeddings to a given high-dimensionality embedding are retained within the top-k most similar embeddings of its corresponding low-dimensionality embedding.”) ([YOON, para. 0106] “the similar embedding 202 can be associated with an embedding 204 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) ([YOON, para. 0129] “The plurality of weights of the plurality of parameters can be modified until some threshold loss (e.g., a minimized loss) that corresponds to a high accuracy of the training output is exceeded.”) generating an indicator by providing the candidate encoded representation to a prediction model. ([YOON, para. 0144] “FIG. 8 is a block diagram of an example implementation of an example machine-learned model configured to process sequences of information. … Sequence processing model 4 can process input sequence 5 using prediction layer(s) 6 to generate an output sequence 7. Output sequence 7 can include one or more output elements 7-1, 7-2, . . . , 7-N, etc. generated based on input sequence 5. The system can generate output(s) 3 based on output sequence 7.”) ([YOON, para. 0152] “Prediction layer(s) 6 can evaluate associations between portions of input sequence 5 and a particular output element. These associations can inform a prediction of the likelihood that a particular output follows the input context. “) ([YOON, para. 0213] “As another example, machine-learned model(s) 1 can process the statistical data to generate a prediction output. As another example, machine-learned model(s) 1 can process the statistical data to generate a classification output. ”) ([YOON, para. 0099] “Embeddings processor 112 can obtain one or more outputs from the sequence processing model 132 in response to the one or more queries generated by query generator 130. 
Embeddings processor 112 can generate one or more query responses to the particular query based at least on the one or more outputs from the sequence processing model 132.”) However, YOON does not teach “… event stream …”. In an analogous teaching, PODGORNY teaches “… event stream …” ([PODGORNY, para. 0002] “obtaining a clickstream associated with the interaction, and generating a clickstream embedding from the clickstream. The question embedding and the clickstream embedding form a shared latent space representation.”) ([PODGORNY, para. 0029] “In one or more embodiments, the application back-end (140) receives a user input provided by the user via the input interface (122) and generates a clickstream (146) from the user input. Broadly speaking, the clickstream (146) may be generated by the application back-end (140) as the user is navigating through the software application. “) ([PODGORNY, para. 0029] “The clickstream encoder (220), in one or more embodiments, processes the clickstream (206) to output the clickstream embedding (216).”). Thus, given the teaching of PODGORNY, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of event stream by PODGORNY into the teaching of a method to generate encoded representations by YOON. One of ordinary skill in the art would have been motivated to do so because PODGORNY recognizes the benefit of using multimodal information ([PODGORNY, para. 0001] “Reducing the average handle time (AHT) associated with resolving these problems by the support agents may help reduce expenses for providing user support and improve the overall support process.”) ([PODGORNY, para. 0002] “one or more embodiments relate to a method for facilitating user support using multimodal information. 
The method includes obtaining an interaction between a user and a support agent, generating a question embedding from the interaction, obtaining a clickstream associated with the interaction, and generating a clickstream embedding from the clickstream.”) Regarding claim 4, YOON-PODGORNY teach all limitations of claim 2. YOON further teaches “wherein generating the first complete encoded representation comprises randomly selecting a portion of broader data for use as the first unrestricted data. ([YOON, para. 0108] “Additionally, the random embedding 222 can be associated with an embedding 224 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) ([YOON, para. 0111] “The random embedding 222 and the associated embeddings 224-228 can be generated based on the use of top-k similarity loss operations on the given embedding 212.”) Regarding claim 5, YOON-PODGORNY teach all limitations of claim 4. PODGORNY further teaches “further comprising selecting event data provided by the restricted event stream within a time range defined by the randomly selected portion for use as the restricted data. ([PODGORNY, para. 0029] “A user activity may, thus, be documented by storing an identifier for the user activity in the clickstream. In combination, user activity gathered over time may establish a context that may help identify a problem that the user is experiencing. The level of detail of user activity documented in the clickstream may vary.”) ([PODGORNY, para. 0054] “the clickstream also includes measurements of time spent on the pages identified by the page IDs. The measurement may be obtained based on time stamps that are recorded as the user is interacting with the software application. The measurements of time may then be converted into categorical data, as described in the previous paragraph.”). The same motivation to modify YOON with PODGORNY as in the rejection of claim 2 applies. 
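For readers parsing the claim-2 limitations the Examiner maps to YOON's similarity loss, the recited loss structure can be sketched in a few lines of Python. This is purely illustrative: the function and variable names are the editor's, not drawn from the claims or either reference, and the use of cosine similarity as the "similarity" is an assumption for the sketch. The loss increases with the similarity between the first complete representation and the unrestricted representation, and decreases with the similarity between the two complete representations.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def claim2_loss(z_complete_1, z_unrestricted, z_complete_2):
    """Hypothetical sketch of the loss recited in claim 2:
    (i) the loss is increased based on the similarity between the first
        complete representation and the unrestricted representation, and
    (ii) decreased based on the similarity between the first and second
        complete representations."""
    return cosine(z_complete_1, z_unrestricted) - cosine(z_complete_1, z_complete_2)
```

Minimizing such a loss pushes the unrestricted encoding apart from the first complete encoding while pulling the two complete encodings together, which is the contrastive-style behavior the claim language recites.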
Regarding claim 6, YOON-PODGORNY teach all limitations of claim 2. YOON further teaches “further comprising determining the first similarity based on a latent space distance between the first complete encoded representation and the unrestricted encoded representation, wherein increasing the loss function value comprises increasing the loss function value based on the latent space distance. ([YOON, para. 0110] “Further, the pairwise similarity loss can measure the degree to which the relative distances or similarities between pairs of high-dimensionality embeddings are maintained in their low-dimensionality counterparts. For example, if two high-dimensionality document embeddings have a specific cosine similarity score, the pairwise loss can quantify how closely their corresponding low-dimensionality embeddings replicate that same similarity score. By minimizing this loss, the dimensionality reduction model can be configured and/or trained to learn to maintain a consistent mapping of global relationships, which can result in the overall structure of the embedding space being preserved.”) ([YOON, para. 0129] “For example, the machine-learning computing system 110 can modify a plurality of weights of a plurality of parameters of the dimensionality reduction model such that the weights of the plurality of parameters that contribute to reducing the loss (e.g., the parameters that increase the accuracy of the dimensionality reduction model generating training output that is accurate) are increased and/or the weights of the plurality of parameters that contribute to increasing the loss (e.g., the parameters that decrease the accuracy of the dimensionality reduction model generating training output that is accurate) are decreased.”) Regarding claim 7, YOON-PODGORNY teach all limitations of claim 2. 
YOON further teaches “further comprising determining the second similarity between the first complete encoded representation and the second complete encoded representation based on a latent space distance between the first complete encoded representation and the second complete encoded representation, wherein decreasing the loss function value based on the second similarity between the first complete encoded representation and the second complete encoded representation comprises decreasing the loss function value based on the latent space distance. ([YOON, para. 0082] “Determining the pairwise similarity loss can be based on performing operations to preserve the relative distances and/or similarities between pairs (e.g., all pairs) of embeddings (e.g., document embeddings).”) ([YOON, para. 0110] “Further, the pairwise similarity loss can measure the degree to which the relative distances or similarities between pairs of high-dimensionality embeddings are maintained in their low-dimensionality counterparts. For example, if two high-dimensionality document embeddings have a specific cosine similarity score, the pairwise loss can quantify how closely their corresponding low-dimensionality embeddings replicate that same similarity score. By minimizing this loss, the dimensionality reduction model can be configured and/or trained to learn to maintain a consistent mapping of global relationships, which can result in the overall structure of the embedding space being preserved.”) ([YOON, para. 
0129] “For example, the machine-learning computing system 110 can modify a plurality of weights of a plurality of parameters of the dimensionality reduction model such that the weights of the plurality of parameters that contribute to reducing the loss (e.g., the parameters that increase the accuracy of the dimensionality reduction model generating training output that is accurate) are increased and/or the weights of the plurality of parameters that contribute to increasing the loss (e.g., the parameters that decrease the accuracy of the dimensionality reduction model generating training output that is accurate) are decreased.”) Regarding claim 8, YOON-PODGORNY teach all limitations of claim 2. YOON-PODGORNY teach “further comprising: obtaining data from plurality of event streams comprising the first unrestricted event stream and the restricted event stream; and” as can be seen in the rejection of claim 2. The same rejection and motivation apply. PODGORNY further teaches “… determining that a candidate event stream of the plurality of event streams is the restricted event stream based on an identifier stored in a record provided by the candidate event stream. ([PODGORNY, para. 0029] “The clickstream (146) may document any type of interaction of the user with the software application. For example, the clickstream (146) may include a history of page clicks and/or text inputs performed by the user to track the user's interaction with the software application. A user activity may, thus, be documented by storing an identifier for the user activity in the clickstream. In combination, user activity gathered over time may establish a context that may help identify a problem that the user is experiencing.”) ([PODGORNY, para. 0103] “The extraction criteria may be as simple as an identifier string or may be a query provided to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).”). 
The same motivation to modify YOON with PODGORNY as in the rejection of claim 2 applies. Regarding claim 12, this claim recites one or more non-transitory, machine-readable media comprising program instructions that, when executed by one or more processors, perform the features of claim 2. Therefore, claim 12 is rejected in a similar manner as in the rejection of claim 2. Furthermore, Examiner is interpreting the data stream of claim 12 as similar to the event stream of claim 2. The same rejection and motivation apply. Regarding claim 14, YOON-PODGORNY teach all limitations of claim 12. PODGORNY further teaches “the operations further comprising obtaining a request identifying a first user associated with the first unrestricted data stream, wherein: ([PODGORNY, para. 0029] “A user activity may, thus, be documented by storing an identifier for the user activity in the clickstream. In combination, user activity gathered over time may establish a context that may help identify a problem that the user is experiencing.”) ([PODGORNY, para. 0103] “Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). … The extraction criteria may be as simple as an identifier string or may be a query provided to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).”) the request is associated with an indicator of a request source, and ([PODGORNY, para. 0039] “the clickstream encoder (220), during training/updating is “forced” to provide a clickstream embedding in the shared latent space representation as dictated by the question embedding (212). 
Because, during the training/updating of the clickstream encoder (220), the user input encoder (210) is not allowed to update, the question embedding (212) generated by the user input encoder (210) is forced upon the clickstream encoder (220). As a result, the clickstream encoder (220) begins to produce clickstream embeddings (216) in the shared latent space representation”) ([PODGORNY, para. 0086] “The support agent may be able to gain a better understanding of the user's actual problem while spending less time on interacting with the user”) inputting filtered data into the encoder model comprises determining that the filtered data should not comprise the data from the restricted data stream based on the indicator of the request source. ([PODGORNY, para. 0038] “The clickstream encoder (220), in one or more embodiments, processes the clickstream (206) to output the clickstream embedding (216). Various algorithms may be used to obtain the clickstream embedding (216) from the clickstream (206).”) ([PODGORNY, para. 0038] “The problem decoder (200) includes a user input encoder (210), a clickstream encoder (220), a shared latent space representation (230), and a shared latent space decoder (240). In combination, these components may output a problem summary (242) from the modalities (202), provided as inputs. The modalities may include the interaction (204) between the user and the support agent, the clickstream (206), and optionally user attributes (208).”) ([PODGORNY, para. 0068] “the problem summary is provided to the support agent. The problem summary may also be included in a case documentation generated for the interaction between the user and the support agent.”). The same motivation to modify YOON with PODGORNY as in the rejection of claim 2 applies. Regarding claim 15, YOON-PODGORNY teach all limitations of claim 12. 
YOON further teaches “the operations further comprising determining the similarity between the first complete encoded representation and the unrestricted encoded representation by computing a cosine similarity between the first complete encoded representation and the unrestricted encoded representation.” ([YOON, para. 0036] “Determining the amount of similarity can comprise normalizing the vectors to unit length prior to determining the dot product, which can simplify the cosine similarity determination (e.g., calculation of the cosine similarity).”) ([YOON, para. 0036] “determining an amount of similarity between the high-dimensionality training embeddings and the low-dimensionality training embeddings can comprise determining a cosine similarity between the high-dimensionality embeddings and the low-dimensionality embeddings.”). Regarding claim 16, YOON-PODGORNY teach all limitations of claim 12. YOON further teaches “wherein generating the indicator based on the encoded representation comprises providing the encoded representation to a transformer neural network model.” ([YOON, para. 0153] “A transformer is an example architecture that can be used in prediction layer(s) 6. … A transformer block can include one or more attention layer(s) and one or more post-attention layer(s) (e.g., feedforward layer(s), such as a multi-layer perceptron).”) ([YOON, para. 0154] “transformer-based architectures. For example, recurrent neural networks (RNNs) and long short-term memory (LSTM) models can also be used, as well as convolutional neural networks (CNNs). In general, prediction layer(s) 6 can leverage various kinds of artificial neural networks that can understand or generate sequences of information.”) ([YOON, para. 0159] “For instance, a vision transformer block can pass latent state information to a multilayer perceptron that outputs a likely class value associated with an input image.”). Regarding claim 17, YOON-PODGORNY teach all limitations of claim 12. 
PODGORNY further teaches “determining that a candidate data stream comprises a first identifier; and ([PODGORNY, para. 0029] “A user activity may, thus, be documented by storing an identifier for the user activity in the clickstream. In combination, user activity gathered over time may establish a context that may help identify a problem that the user is experiencing.”) assigning the candidate data stream as the restricted data stream based on a detected match between the first identifier and a target identifier indicated in a database. ([PODGORNY, para. 0103] “The extraction criteria may be as simple as an identifier string or may be a query provided to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).”) ([PODGORNY, para. 0038] “The clickstream encoder (220), in one or more embodiments, processes the clickstream (206) to output the clickstream embedding (216). ”). The same motivation to modify YOON with PODGORNY as in the rejection of claim 2 applies. Regarding claim 18, YOON-PODGORNY teach all limitations of claim 12. YOON further teaches “further comprising filtering a broader data provided by the first unrestricted data stream with a set of filter criteria to determine the first unrestricted data.” ([YOON, para. 0106] “FIG. 2 shows representations of embeddings 200 which comprise a similar embedding 202 … the similar embedding 202 can be associated with an embedding 204 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) Regarding claim 19, YOON-PODGORNY teach all limitations of claim 12. YOON further teaches “randomly selecting a portion of broader data comprising the first unrestricted data and the restricted data; and ([YOON, para. 0106] “FIG. 2 shows representations of embeddings 200 which comprise a similar embedding 202, a given embedding 212, and a random embedding 222.”) ([YOON, para. 
0108] “Additionally, the random embedding 222 can be associated with an embedding 224 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) determining at least one of the first unrestricted data or the restricted data based on the randomly selected portion. ([YOON, para. 0106] “ Each of the embeddings can be associated with a plurality of lower-dimensionality embeddings that have fewer dimensions. For example, the similar embedding 202 can be associated with an embedding 204 that includes a subset of the dimensions of the given embedding 212 and has fewer dimensions than the given embedding 212”) ([YOON, para. 0111] “The random embedding 222 and the associated embeddings 224-228 can be generated based on the use of top-k similarity loss operations on the given embedding 212. The top-k loss, denoted as Ltopk, can be configured to focus on preserving local similarity relationships among neighboring embeddings”) Regarding claim 20, YOON-PODGORNY teach all limitations of claim 12. PODGORNY further teaches “wherein the restricted data stream indicates user interactions with a web application.” ([PODGORNY, para. 0029] “Broadly speaking, the clickstream (146) may be generated by the application back-end (140) as the user is navigating through the software application. The clickstream (146) may document any type of interaction of the user with the software application. For example, the clickstream (146) may include a history of page clicks and/or text inputs performed by the user to track the user's interaction with the software application.”) ([PODGORNY, para. 0101] “For example, the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL.”). The same motivation to modify YOON with PODGORNY as in the rejection of claim 2 applies. 
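The cosine-similarity limitation of claim 15 corresponds to a standard computation; YOON's para. 0036 notes that normalizing the vectors to unit length first reduces it to a plain dot product. A minimal sketch of that computation (names are the editor's, not from the record):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity via the simplification YOON para. 0036 describes:
    normalize both vectors to unit length, then take the dot product."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    a_unit = [x / norm_a for x in a]
    b_unit = [x / norm_b for x in b]
    return sum(x * y for x, y in zip(a_unit, b_unit))
```

Because the result depends only on vector direction, parallel encodings score 1.0 and orthogonal encodings score 0.0 regardless of magnitude.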
Allowable Subject Matter Claim 1 is considered to recite allowable subject matter. A reason for allowance will be noted in a notice of allowance once all other rejections have been overcome. Claims 3, 9-11, and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if they overcome all other rejections. Pertinent Art The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. ZHENG (US-20220405580-A1): This prior art teaches a method and system for determining an access score. The method includes receiving an access request to access a resource by a user device. Next, a user embedding is retrieved from an embedding table, the user embedding associated with a user identifier of the user device and providing a multidimensional data point that represents a context of a user identifier. The context may correspond to the user identifier appearing in previous access requests within temporal proximity to other access requests from a subset of other user devices among a plurality of user devices. The method then inputs the user embedding into a first machine learning model that is trained based at least in part on the embedding table. The first machine learning model subsequently outputs an access score that corresponds to a level of authenticity of authorizing the user device to access the resource. BIRRU (US-20240289609-A1): This prior art teaches a system and a method for training a neural network to detect one or more anomalies in event data. The system comprises a processing arrangement, communicably coupled to a database configured to store the event data. 
Herein, the processing arrangement is configured to receive event data associated with a plurality of log events for a given time period, pre-process the received event data to generate refined event data, process the refined event data using an encoder architecture of the processing arrangement, to generate one or more event embeddings based on a first transformation model, and wherein the second encoder is configured to generate one or more contextual embeddings based on a second transformation model for each log event, and process the one or more contextual embeddings via at least one statistical technique to generate an embedding matrix to detect the one or more anomalies. HAJIMIRSADEGHI (US-20200076841-A1): This prior art teaches contextual embedding of features of operational logs or network traffic for anomaly detection based on sequence prediction. In an embodiment, a computer has a predictive recurrent neural network (RNN) that detects an anomalous network flow. In an embodiment, an RNN contextually transcodes sparse feature vectors that represent log messages into dense feature vectors that may be predictive or used to generate predictive vectors. In an embodiment, graph embedding improves feature embedding of log traces. In an embodiment, a computer detects and feature-encodes independent traces from related log messages. These techniques may detect malicious activity by anomaly analysis of context-aware feature embeddings of network packet flows, log messages, and/or log traces. VARMA (US-20250202728-A1): This prior art teaches an approach for enhancing the monitoring and analysis of multiparty stream communications and related machine learning applications. The apparatus monitors a multiparty stream communication, wherein the multiparty stream communication includes a first and a second packet series to generate, retrieve, and compare data elements. 
Training data is filtered and categorized into specific sub-populations, increasing relevance to particular subjects of analysis. These tailored datasets optimize machine learning models, resulting in more precise comprehension assessment. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFAQ ALI whose telephone number is (571)272-1571. The examiner can normally be reached Mon - Fri 7:30am - 5:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALI SHAYANFAR can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.A./ 03/17/2026 /AFAQ ALI/Examiner, Art Unit 2434 /NOURA ZOUBAIR/Primary Examiner, Art Unit 2434

Prosecution Timeline

Jul 12, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585791
ENCRYPTED COMMUNICATION METHOD AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12572656
CONTROL FLOW INTEGRITY MONITORING BASED INSIGHTS
2y 5m to grant Granted Mar 10, 2026
Patent 12563050
TECHNIQUES FOR DETECTING CYBER-ATTACK SCANNERS
2y 5m to grant Granted Feb 24, 2026
Patent 12554828
MULTI-FACTOR AUTHENTICATION USING BLOCKCHAIN
2y 5m to grant Granted Feb 17, 2026
Patent 12549585
VULNERABILITY SCANNING OF HIDDEN NETWORK SYSTEMS
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+12.2%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 132 resolved cases by this examiner. Grant probability derived from career allow rate.
