Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgments
The submission filed on 12/30/2025 is acknowledged.
Status of Claims
Claims 1-20 are pending.
In the Amendment filed on 12/30/2025, claims 1, 10, 11, 19 and 20 were amended, and no claims were cancelled or added.
Claims 1-20 are rejected.
Response to Arguments
Regarding the rejections under 35 U.S.C. 101
Applicant’s arguments have been fully considered but are not persuasive.
The Office responds to Applicant’s arguments below. In the discussion below, section numbers and page numbers refer to Applicant’s Response, unless otherwise indicated, and paragraph numbers refer to Applicant’s specification, unless otherwise indicated.
In section I., Applicant argues that the claims provide a technical improvement, citing paragraphs 0001-0007 of the specification. Pp. 8-9.
The Examiner agrees that the portions of claim 1 highlighted by Applicant, namely, the detect, obtain and generate steps, are what is pertinent in this regard. Pp. 8-9.
As for these steps, as best understood, the alleged improvement reflected in claim 1 is that (1) a condition is detected (detect step); (2) data is obtained selectively, in other words filtered, rather than unselectively or unfiltered, specifically, events are obtained "based on a time the condition is detected" rather than indiscriminately (obtain step); and (3) a data structure (for conversion to a feature vector to be used as input data to an ML model) is created selectively, i.e., based on the condition, rather than indiscriminately (generate step).
This selective data gathering and data processing, based on detection of a condition, is a matter of the abstract idea of fraud detection. It has the advantage of conserving resources that would otherwise be expended on a more voluminous data collection and processing. This conservation of resources, i.e., data volume reduction, of course, reduces the loads in terms of memory and bandwidth that are imposed on the computers used in the operations. This beneficial outcome does not involve any change in the computers or their functioning or in any other technology. The computers/technology are still used in the same ordinary capacity as they would be absent this data volume reduction.
Nor does this beneficial outcome amount to any interaction between the additional elements (computers/technology) and the abstract idea that is different than the interaction that occurs between them in the alleged prior art scenario where there is no such reduction in volume of data. MPEP 2106.04(d) III. In either case, the computer is merely applied to the data collected in the abstract idea process. Note the computer is not described and is recited at a high level of generality ("one or more processors coupled to non-transitory memory, the one or more processors configured to:"). No improvement in any technical process, nor in how the data structure is created, is seen in claim 1.
As part of arguing the alleged improvement, Applicant purports to distinguish the claimed process from prior art techniques, such as "complex and time-consuming feature retrieval processes, relying on offline or asynchronous feature generation techniques to provide input data for feature processing" and "upstream event joining operations," citing the specification. Pp. 8-9; 0003, 0006.
However, claim 1 is not seen to include these distinctions. For example, claim 1 does not exclude "offline or asynchronous feature generation techniques" or "upstream event joining operations," nor does claim 1 require the opposites of these putative prior art techniques, e.g., online or synchronous feature generation techniques or downstream event joining operations.
In sum, no improvement in computer functioning/technology is found in claim 1.
In section II., Applicant argues that the claims are analogous to BASCOM. Pp. 9-10. The subject matter of the instant claims is different from that of the eligible claims of BASCOM, and Applicant has not explained how they are analogous. Applicant states:
The technology of BASCOM relied on "the installation of a filtering tool at a specific location, remote from the end-users, with customizable filtering features specific to each end user." (Id. at 1350). Similarly, the claims of the instant application recite a specific configuration in which event data is obtained from multiple event streams "based on [a] condition" and "based on a time the condition is detected," and then used to "generate...an event data structure" related to that detected condition.
The above is the extent to which Applicant explains how the instant claims are analogous to BASCOM. Applicant asserts "similarly," but Applicant does not offer for the record how the instant claims are similar to BASCOM's. That is, Applicant does not explain how the instant claims are analogous to BASCOM. Accordingly, the putative analogy is not persuasive.
In section III., Applicant argues that the claims do not recite an abstract idea. Pp. 10-11. The Office respectfully disagrees. The claims recite outputting a likelihood of fraud. As per the specification, the fraud likelihood determinations performed by the claims are in the context of financial payment transactions, see, e.g., 0002, 0005, 0022-0024, 0031, 0034-0037, 0041, 0044, 0046-0047, 0081-0082, Fig. 1. As such, and in particular as per the bolded portions of claim 1 in the rejection hereinbelow, the claims are directed to an abstract idea, falling under "certain methods of organizing human activity," in particular, "fundamental economic practices or principles" and/or "commercial or legal interactions."
Regarding the rejections under 35 U.S.C. 103
In view of the amendments, the rejections are overcome.
Subject Matter Distinguishable From Prior Art
The cited prior art of record, either alone or in combination, fails to expressly teach or suggest the features found in independent claims 1, 11 and 20.
Patel (US-20220101192-A1) teaches a fraud detection system using machine learning, including: generating input feature vectors based on transaction data describing a requested transaction; generating an input sequence vector based on sequence data describing a sequence of transactions preceding the requested transaction; determining output values based on the input feature vectors and input sequence vector; determining a cumulative probability value based on the output values; and determining whether the requested transaction is fraudulent based on the cumulative probability value; and further including different machine learning models for processing the input feature vectors and the input sequence vector, respectively, receiving a request to determine whether a transaction is fraudulent, and requesting additional data upon determining that the received transaction data is insufficient to determine whether the requested transaction is fraudulent.
Branco (US-20220245426-A1) teaches receiving a stream of event data, comprising transaction data, including contextual data pertaining to entities associated with the transaction, e.g., card, device, account, and merchant; generating multiple different embeddings (embedding output sets, i.e., feature vectors) from different sets of the event data (transaction data/contextual data); inputting the different feature vectors (e.g., representing respective datasets for respective different entities) into respective machine learning models; outputting from the models a probability of fraud; and flagging transactions identified as potentially fraudulent.
Wong (US-20230118240-A1) teaches a machine learning system for determining the presence of an anomaly in data, wherein the training set is divided into two feature sets, a first feature set representing observable features and a second feature set representing context features, the observable features being derived from a function of at least transaction data for a current transaction, and the context features being derived from one or more of a function of historical transaction data that excludes the current transaction and retrieved data relating to the uniquely identifiable entity for the current transaction, and further teaching columnar data structures for the indicated types of transaction data (Fig. 11A), and illustrating the formation of a feature vector from the transaction data (Fig. 11B).
Van Arkel (US-10372878-B1) teaches, inter alia, a streaming data reduction platform to normalize and filter information associated with a healthcare claim to reduce the information associated with the healthcare claim to a size that may be processed in real time or near real time.
Barrett (AU-645251-B2) teaches a device driver comprising data structuring means coupled to an external device for receiving first data structure signals and producing second data structure signals for transmission to an application.
Balasubramanian (US-20150317572-A1) teaches on-demand enrichment of business data, wherein a client computer generates a data enrichment command including context information, an enrichment data service uses the information in the command to determine the type of enrichment data that may be useful and the sources from which to request the enrichment data, and the enrichment data is then retrieved from the enrichment data sources and sent to the requesting client computing device to be displayed to the user in a manner that enriches the original reporting data, wherein the context information can include time/date.
Priess (US-20150026027-A1) teaches fraud detection and analysis including receiving event data and risk data from data sources, where the event data comprises data of actions taken in a target account during electronic access of the account, and the risk data comprises data of actions taken in accounts different from the target account; using the event data and the risk data, dynamically generating an account model that corresponds to the target account; and using the account model to generate a risk score representing a relative likelihood that an action taken in the target account is fraud, and further including processing and scoring of multiple event streams from different channels and potentially different arrival timing (e.g., batch vs. real-time).
Hunter (US-20160253672-A1) teaches detecting fraudulent transactions, including receiving first features for an entity in transaction data, receiving second features for a benchmark set, the second features corresponding with the first features, and determining an outlier value of the entity based on a Mahalanobis distance from the first features to a benchmark value representing an average for the second features.
Lee (US-20200394455-A1) teaches, inter alia, retrieving a plurality of data sources associated with a user and a shared vehicle associated with the user; extracting a plurality of features from the plurality of data sources; building a feature vector representing the plurality of features; and inputting the feature vector into a machine learner to generate a classification of the user.
Hendry (US-11315119-B1) teaches fraud detection in which financial transaction processing systems publish transaction events to an event stream, a transaction service listening to the event stream detects new transaction events, and takes action to enrich transaction data, and detect fraudulent transactions based on the enriched transaction data before the transactions have been posted to the financial account, and further including the use of multiple different event streams for different transaction types.
Gospodinov (US-20190080328-A1) teaches real time data monitoring and alerting including detecting anomalous transactions and behavioral patterns rapidly, based on real-time data inputs from a variety of sources (e.g., in-store purchase information and online purchase information) and on analytical tools applied to historical transaction and behavioral data.
Drapeau (US-11704673-B1) teaches receiving a transaction having transaction attributes and transaction data, determining an identity associated with the user, the identity associated with additional transaction attributes not received with the transaction (e.g., additional card numbers, etc.), accessing a feature set associated with the initial transaction attributes and the additional transaction attributes, and performing a machine learning model analysis of the feature set and transaction data to determine a likelihood that the transaction is fraudulent.
Buell (US-20250124445-A1) teaches continuous event monitoring, identification of risk signals, and acceleration of fraud risk analysis, including obtaining transaction data; generating a combined standardized transaction data structure based on analysis of the transaction data, including extracting details of the transaction data and adding to the extracted details of the transaction data one or more supplemental data elements obtained from the transaction source; obtaining event data from an event hub, the event data including: the transaction data, business events published to the event hub from an event source by an event emitter, previously generated composite risk signals, and previously generated detection events generated by a decision engine; generating various risk signals; and analyzing the risk signals by a machine learning model and a fraud application to determine a fraud transaction probability score.
Mohankumar (US-20250173726-A1) teaches early fraud detection in deferred transaction services, including processing a plurality of user datasets to generate a set of features for each user dataset, each set of features representative of a particular user; generating a plurality of embeddings sets, each embedding set being representative of a respective set of features; generating a plurality of synthetic user datasets; combining the plurality of embeddings sets and the plurality of synthetic user datasets to generate a training dataset, the training dataset comprising a plurality of user profiles; training a machine learning model based on the generated training dataset; and determining, via the machine learning model and in response to receiving a new user profile, a determination of whether the new user profile is real or synthetic.
Duke (US-20160283715-A1) teaches detecting fraud based on using different sources of data, including user's past activity and user's use of different services, see Figs. 13-16.
Malhotra (US-20200081815-A1) teaches real-time event analysis utilizing relevance and sequencing, wherein for a given user, processors detect a series of activities performed by the user using one or more user interface components; recognize the series of activities as a sequence of events, each event of the sequence corresponding to one or more activities of the series; detect a current user activity; in connection with the current user activity, select a relevant portion of the sequence of events; and determine at least one of a user intent or interest, based on an analysis of the relevant portion of the sequence of events.
However, in particular, the cited prior art of record, either alone or in combination, fails to expressly teach or suggest all of the features in independent claims 1, 11 or 20 and more specifically the limitations of: determine that the event stream is associated with a second system; identify a second plurality of event streams associated with the second system; obtain, based on a time the condition is detected, a first set of events from the first plurality of event streams and a second set of events from the second plurality of event streams associated with the second system; generate, based on the condition, an event data structure comprising the first set of events and the second set of events; and convert the event data structure into at least two feature vectors corresponding to the first system and the second system for one or more machine-learning models, in combination with the other claim limitations.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1-20 are directed to a system, method, or article of manufacture (non-transitory computer readable medium), each of which is one of the statutory categories of invention. (Step 1: YES)
Claims 1, 11 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a system, method, and non-transitory computer readable medium for determining a likelihood of fraud.
For claims 1, 11 and 20 (claim 1 being deemed representative), the limitations (indicated below in bold) of:
one or more processors coupled to non-transitory memory, the one or more processors configured to:
detect a condition associated with an event stream of a first plurality of event streams associated with a first system;
determine that the event stream is associated with a second system;
identify a second plurality of event streams associated with the second system;
obtain, based on a time the condition is detected, a first set of events from the first plurality of event streams and a second set of events from the second plurality of event streams associated with the second system;
generate, based on the condition, an event data structure comprising the first set of events and the second set of events;
convert the event data structure into at least two feature vectors corresponding to the first system and the second system for one or more machine-learning models; and
execute the one or more machine-learning models using the at least two feature vectors as input and outputting a likelihood of fraud for the first system or the second system.
as drafted, constitute a process that, under the broadest reasonable interpretation, covers "certain methods of organizing human activity," specifically, "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components and generally linking the use of a judicial exception to a particular technological environment or field of use. The Examiner notes that "fundamental economic practices" or "fundamental economic principles" describe concepts relating to the economy and commerce, including hedging, insurance, and mitigating risks, and "commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. MPEP 2106.04(a)(2)II.A.,B. If a claim limitation, under its broadest reasonable interpretation, covers "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components and generally linking the use of a judicial exception to a particular technological environment or field of use, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, claims 1, 11 and 20 recite an abstract idea. (Step 2A - Prong 1: YES. The claims recite an abstract idea.)
This judicial exception is not integrated into a practical application. Claims 1, 11 and 20 recite the additional elements of one or more processors coupled to non-transitory memory (the foregoing recited in claims 1 and 11), one or more machine-learning models (the foregoing recited in claims 1, 11 and 20), and a non-transitory computer readable medium with instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform operations (the foregoing recited in claim 20), that implement the abstract idea. These additional elements are not described by the applicant and they are recited at a high level of generality (i.e., one or more generic computer elements performing generic computer functions, or a particular technological environment or field of use), such that they amount to no more than mere instructions to apply the exception using generic computer elements (one or more processors coupled to non-transitory memory, one or more machine-learning models, and a non-transitory computer readable medium with instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform operations) or such that they amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (one or more machine-learning models). Accordingly, even in combination these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2A - prong 2: NO. The additional elements do not integrate the abstract idea into a practical application.)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception itself. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of one or more processors coupled to non-transitory memory (the foregoing recited in claims 1 and 11), one or more machine-learning models (the foregoing recited in claims 1, 11 and 20), and a non-transitory computer readable medium with instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform operations (the foregoing recited in claim 20), to perform the noted steps amount to no more than mere instructions to apply the exception using generic computer elements or generally linking the use of a judicial exception to a particular technological environment or field of use. Mere instructions to apply an exception using generic computer elements or generally linking the use of a judicial exception to a particular technological environment or field of use cannot provide an inventive concept ("significantly more"). Accordingly, even in combination, these additional elements do not provide significantly more. As such, claims 1, 11 and 20 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)
Dependent claims 2-10 and 12-19 are similarly rejected because they further define/narrow the abstract idea of independent claims 1, 11 and 20 as discussed above, and/or do not integrate the abstract idea into a practical application or provide an inventive concept such as would render the claims eligible, whether each is considered individually or as an ordered combination.
As for further defining/narrowing the abstract idea:
Dependent claims 2 and 12 merely further describe receiving events based on timestamps.
Dependent claims 3 and 13 merely further describe generating events based on filtering.
Dependent claims 4 and 14 merely further describe determining that feature vectors are to be generated and generating them.
Dependent claim 5 merely further describes the content of the event data structure.
Dependent claims 6 and 15 merely further describe the content of the condition.
Dependent claims 7 and 16 merely further describe generating a flag.
Dependent claims 8 and 17 merely further describe generating an output using an input.
Dependent claims 9 and 18 merely further describe retrieving events and converting data.
Dependent claims 10 and 19 merely further describe the event streams.
As for additional elements:
Dependent claims 2-4, 9, 12-14 and 18 recite "the one or more processors." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Dependent claim 7 recites "the one or more machine-learning models." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or alternatively such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Dependent claim 8 recites "execute a first machine-learning model" and "execute a second machine-learning model." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or alternatively such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Dependent claim 16 recites "the one or more processors" and "the one or more machine-learning models." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element ("the one or more processors" and "the one or more machine-learning models") or such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use ("the one or more machine-learning models"). Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Dependent claim 17 recites "the one or more processors," "executing … a first machine-learning model" and "executing … a second machine-learning model." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element ("the one or more processors," "executing … a first machine-learning model" and "executing … a second machine-learning model") or such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use ("executing … a first machine-learning model" and "executing … a second machine-learning model"). Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Claims 5, 6, 10, 15 and 19 do not recite any additional elements, and accordingly, for the reasons provided above with respect to the independent claims, are not patent eligible.
Therefore, dependent claims 2-10 and 12-19 are not patent eligible.
Conclusion
The prior art made of record and not relied upon, as set forth in the accompanying Notice of References Cited (PTO-892), is considered pertinent to applicant's disclosure. Description of the cited references is provided above ("Subject Matter Distinguishable From Prior Art").
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS W PINSKY whose telephone number is (571)272-4131. The examiner can normally be reached 8:30 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jessica Lemieux, can be reached on 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DOUGLAS W. PINSKY/
Examiner, Art Unit 3626
/JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626