Prosecution Insights
Last updated: April 19, 2026
Application No. 18/075,445

COMPUTERIZED-METHOD AND COMPUTERIZED-SYSTEM FOR GENERATING A CLASSIFICATION MACHINE LEARNING MODEL FOR IMPLEMENTATION WITH NO TRAINING REQUIREMENT

Final Rejection (§101, §103)
Filed
Dec 06, 2022
Examiner
HASBROUCK, MERRITT J
Art Unit
3695
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Actimize Ltd.
OA Round
4 (Final)
Grant Probability: 11% (At Risk)
OA Rounds: 5-6
To Grant: 3y 10m
With Interview: 19%

Examiner Intelligence

Career Allow Rate: 11% (grants only 11% of cases; 15 granted / 140 resolved; -41.3% vs TC avg)
Interview Lift: +8.1% (moderate lift; measured on resolved cases with interview)
Avg Prosecution: 3y 10m (typical timeline; 45 currently pending)
Total Applications: 185 (career history, across all art units)

Statute-Specific Performance

§101: 45.4% (+5.4% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 10.5% (-29.5% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 140 resolved cases
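The dashboard figures above are simple ratios of the examiner's career counts. A quick sketch (using only numbers shown on this page; the TC average is backed out from the stated delta, so it is an implied value, not a reported one) reproduces them:

```python
# Career allow rate: granted / resolved, as shown on the dashboard.
granted, resolved = 15, 140
allow_rate = granted / resolved * 100   # ~10.7%, displayed rounded as 11%

# Tech Center average implied by the "-41.3% vs TC avg" delta (an inference,
# not a figure reported on the page):
implied_tc_avg = allow_rate + 41.3      # ~52%

# Interview lift: the 19% "With Interview" figure versus the 11% baseline;
# the page's "+8.1%" suggests the underlying unrounded rates differ slightly.
lift = 19 - 11                          # +8 points from the rounded figures

print(f"allow rate {allow_rate:.1f}%, implied TC avg {implied_tc_avg:.1f}%, lift +{lift}%")
```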

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Applicant filed a response dated October 10, 2025. Claims 1-4 and 6-8 are currently pending in the application.

Priority

Application 18/075,445 was filed on December 6, 2022.

Examiner Request

Should Applicant amend the claims, the Applicant is requested to indicate where in the specification there is support for the amendments. The purpose of this request is to reduce potential 35 U.S.C. § 112(a) or § 112, first paragraph, issues that can arise when claims are amended without support in the specification. The Examiner thanks the Applicant in advance.

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4 and 6-8 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. (MPEP 2106). The claims are directed to a method, which is one of the statutory categories of invention (Step 1: YES). The recitation of the claimed invention is analyzed as follows, in which the abstract elements are boldfaced.
Claim 1 recites the limitations of:

Computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment, said computerized-method comprising: building the fraud detection ML model by using different isolated datasets from different environments without sharing data of the isolated datasets in the different environments:

(i) identifying one or more tenants of a cloud service provider for financial institutions by a base activity;

(ii) retrieving a set of features of transactions from a database of each identified one or more tenants to yield one or more sets of features of transactions;

(iii) detecting one or more common features in the yielded one or more sets of features of transactions;

(iv) using an object storage service in each tenant's environment to retrieve a dataset having the detected one or more common features; and

(v) for each identified tenant, continuously training the fraud detection ML model in the tenant's environment on the retrieved dataset having the detected one or more common features to classify transactions on the retrieved dataset corresponding to the tenant from the one or more tenants, wherein the fraud detection ML model continues training after each retrieved dataset;

deploying the trained fraud detection ML model in a new target tenant system to classify transactions, wherein the new target tenant system has no training dataset and no feasible training thereon.

The claim as a whole recites a method that, under its broadest reasonable interpretation, covers collecting, analyzing, and transmitting data to facilitate fraud determination as to financial institution transaction data. This is a fundamental economic practice of a financial transaction; a commercial interaction, such as for business relations; and managing personal behavior or relationships or interactions between people, which are certain methods of organizing human activity.
Furthermore, the claims recite a fraud detection Machine Learning (ML) model or eXtreme Gradient Boosting (XGB) algorithm. This is a mathematical calculation. Thus, the claims recite an abstract idea. (Step 2A, prong one: YES).

Moreover, the judicial exception is not integrated into a practical application. Other than reciting a “Computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment, said computerized-method comprising:”, “database”, “object storage service”, “cloud service provider”, and “target tenant system”, to perform the steps of “building”, “detecting”, “training”, and “classifying”, nothing in the claim elements precludes the steps from practically being a certain method of organizing human activity or a mathematical calculation. The claim as a whole does not integrate the exception into a practical application. The claim merely describes how to generally “apply” the concept of collecting, analyzing, and transmitting data to facilitate fraud determination as to financial institution transaction data in a computer environment. The additional computer elements recited in the claim limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception utilizing generic computer components.
For example, the Specification at [0028] discloses “operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium (e.g., a memory) that may store instructions to perform operations and/or processes.”

Furthermore, the Specification at [0032] discloses “Artificial Intelligence (AI)-based system is a computer system that is able to perform tasks that ordinarily require human intelligence. Many of these AI systems are powered by rules-based Machine Learning (ML) models and some of them are powered by deep learning”; at [0082], “According to some embodiments of the present disclosure, system 100 may generate a classification ML model, in a cloud-based environment, by using different isolated datasets from different environments to build and train the classification ML model 120”; and at [0083], “According to some embodiments of the present disclosure, a model, such as classification ML model 120, may be trained continuously across different datasets from different tenants using ML models. For example, XGBoost (XGB) algorithm provides a mechanism to train the ML model continuously.”

Thus, the specification supports that general purpose computers or computer components are utilized to implement the steps of the abstract idea. Merely implementing the abstract idea on a generic computer is not a practical application of the abstract idea. The claim as a whole, viewing the additional elements both individually and in combination, does not integrate the judicial exception into a practical application.
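For orientation on the continued-training mechanism that the quoted [0083] attributes to XGBoost: in the real library the effect comes from passing a prior booster to `xgboost.train(..., xgb_model=prior_booster)`. Below is a deliberately tiny, stdlib-only caricature of that idea (illustrative only, not the applicant's code; `fit_stump`, `boost`, and the toy tenant datasets are invented for this sketch). Each tenant's data stays in its own structure, and only the model object accumulates trees across tenants.

```python
# Caricature of gradient-boosting "continued training" across tenant datasets.
# Not any party's implementation; it only illustrates resuming training on a
# new dataset instead of restarting from scratch.

def fit_stump(X, residuals):
    """Fit a one-split regression stump minimizing squared error on residuals."""
    best = None
    for f in range(len(X[0])):
        for sample in X:
            t = sample[f]
            left = [r for xi, r in zip(X, residuals) if xi[f] <= t]
            right = [r for xi, r in zip(X, residuals) if xi[f] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
            if best is None or err < best[0]:
                best = (err, f, t, lm, rm)
    _, f, t, lm, rm = best
    return lambda x: lm if x[f] <= t else rm

def predict(model, x):
    return sum(0.5 * stump(x) for stump in model)   # 0.5 = learning rate

def boost(model, X, y, rounds):
    """Append `rounds` stumps fit to current residuals. `model` may already
    contain stumps, which is what makes training *continue*, not restart."""
    for _ in range(rounds):
        residuals = [yi - predict(model, xi) for xi, yi in zip(X, y)]
        model.append(fit_stump(X, residuals))
    return model

# Each tenant's dataset stays in its own "environment"; only the model moves.
tenant_a = ([[1.0], [2.0], [3.0], [4.0]], [0.0, 0.0, 1.0, 1.0])
tenant_b = ([[1.5], [2.5], [3.5], [4.5]], [0.0, 0.0, 1.0, 1.0])

model = boost([], *tenant_a, rounds=10)      # initial training on tenant A
model = boost(model, *tenant_b, rounds=10)   # continued training on tenant B
```

The deployed model can then score a new tenant's transactions (threshold the output at 0.5) even though that tenant contributed no training data, which mirrors the claimed deployment step.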
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. (Step 2A, prong two: NO).

The claim does not include additional elements, when considered both individually and as an ordered combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a “Computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment, said computerized-method comprising:”, “database”, “object storage service”, “cloud service provider”, and “target tenant system”, to perform the steps of “building”, “detecting”, “training”, and “classifying”, amount to no more than mere instructions to apply the exception using generic computer components. The claim merely describes how to generally “apply” the concept of collecting, analyzing, and transmitting data to facilitate fraud determination as to financial institution transaction data in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. Such additional elements are determined not to contain an inventive concept according to MPEP 2106.05(f).
It should be noted that (1) the “recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not provide significantly more because this type of recitation is equivalent to the words “apply it”, and (2) “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice, commercial interaction, or managing personal behavior or relationships or interactions between people or a mathematical calculation) does not integrate a judicial exception into a practical application or provide significantly more”.

Dependent claims 2-4 and 6-8 merely limit the abstract idea and do not recite any additional elements beyond the cited abstract idea; thus, they do not amount to significantly more. The dependent claims are abstract for the reasons presented above because there are no additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Thus, the dependent claims are directed to an abstract idea. (Step 2B: NO).

Therefore, claims 1-4 and 6-8 are not patent-eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Cooper, U.S. Patent Application Publication Number 2024/0386119; in view of Gold, U.S. Patent Application Publication Number 2019/0370833.

As per claim 1, Cooper explicitly teaches:

Computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment, said computerized-method comprising: building the fraud detection ML model by using different isolated datasets from different environments without sharing data of the isolated datasets in the different environments: (i) identifying one or more tenants of a cloud service provider for financial institutions by a base activity; (Cooper US20240386119 at paras. 62-65, 185-187, 197-199) ("[0063] Certain embodiments described herein allow machine learning models to be tailored to be specific to patterns of behaviour between certain pairs of entities (such as account holders) and categories (such as merchants, transaction amounts, times of day, and others). For example, the machine learning models may model entity-category-pair specific patterns of behaviour. The machine learning systems described herein are able to provide dynamically updating machine learning models despite large transaction flows and/or despite the need for segregation of different data sources.
[0064] As outlined previously, embodiments of the present invention may be applied to a wide variety of digital transactions, including, but not limited to, card payments, so-called “wire” transfers, peer-to-peer payments, Bankers' Automated Clearing System (BACS) payments, and Automated Clearing House (ACH) payments. The output of the machine learning system may be used to prevent a wide variety of fraudulent and criminal behaviour such as card fraud, application fraud, payment fraud, merchant fraud, gaming fraud and money laundering." "[0186] Examples described herein concern a different type of multi-tenant solution. A “tenant” may be a customer of an administrator of a platform. An administrator may be referred to as a “landlord”. The term “multi-tenant” is used herein to mean at least two tenants. Example multi-tenant solutions described herein may service every tenant of an administrator securely, with data segregation between different tenants." "[0198] The example platforms described herein provide a high level of fine-grained control, allowing models to be segregated by Solution, tenant group and/or tenant, or to have access to the entire system as a consortium model.")

(ii) retrieving a set of features of transactions from a database of each identified one or more tenants to yield one or more sets of features of transactions; (Cooper US20240386119 at paras. 62-65, 185-187, 197-199) ("[0062] Certain exemplary embodiments are described herein which relate to a machine learning system for use in transaction processing. In certain embodiments, a machine learning system is applied in real-time, high-volume transaction processing pipelines to provide an indication of whether a transaction or entity matches previously observed and/or predicted patterns of activity or actions, e.g. an indication of whether a transaction or entity is “normal” or “anomalous”. The term “behavioural” is used herein to refer to this pattern of activity or actions. The indication may comprise a scalar value normalised within a predefined range (e.g., 0 to 1) that is then useable to prevent fraud and other misuse of payment systems. The machine learning systems may apply machine learning models that are updated as more transaction data is obtained, e.g. that are constantly trained based on new data, so as to reduce false positives and maintain accuracy of the output metric. The present examples may be particularly useful for preventing fraud")

(iii) detecting one or more common features in the yielded one or more sets of features of transactions; (Cooper US20240386119 at paras. 62-65, 185-187, 197-199) ("[0062] Certain exemplary embodiments are described herein which relate to a machine learning system for use in transaction processing. In certain embodiments, a machine learning system is applied in real-time, high-volume transaction processing pipelines to provide an indication of whether a transaction or entity matches previously observed and/or predicted patterns of activity or actions, e.g. an indication of whether a transaction or entity is “normal” or “anomalous”. The term “behavioural” is used herein to refer to this pattern of activity or actions. The indication may comprise a scalar value normalised within a predefined range (e.g., 0 to 1) that is then useable to prevent fraud and other misuse of payment systems. The machine learning systems may apply machine learning models that are updated as more transaction data is obtained, e.g. that are constantly trained based on new data, so as to reduce false positives and maintain accuracy of the output metric. The present examples may be particularly useful for preventing fraud")

(iv) using an object storage service in each tenant's environment to retrieve a dataset having the detected one or more common features; and (Cooper US20240386119 at paras. 168-170) ("[0169] FIG. 4 shows one example 400 of a machine learning system 402 that may be used to process transaction data.
Machine learning system 402 may implement one or more of machine learning systems 160 and 210. The machine learning system 402 receives input data 410. The form of the input data 410 may depend on which machine learning model is being applied by the machine learning system 402. In a case where the machine learning system 402 is configured to perform fraud or anomaly detection in relation to a transaction, e.g. a transaction in progress as described above, the input data 410 may comprise transaction data such as 330 (i.e., data forming part of a data package for the transaction) as well as data derived from historical transaction data (such as 300 in FIG. 3A) and/or data derived from ancillary data (such as 148 in FIGS. 1A to 1C or 242 in FIGS. 2A and 2B). The ancillary data may comprise secondary data linked to one or more entities identified in the primary data associated with the transaction. For example, if transaction data for a transaction in progress identifies a user, merchant and one or more banks associated with the transaction (such as an issuing bank for the user and a merchant bank), such as via unique identifiers present in the transaction data, then the ancillary data may comprise data relating to these transaction entities. The ancillary data may also comprise data derived from records of activity, such as interaction logs and/or authentication records. In one case, the ancillary data is stored in one or more static data records and is retrieved from these records based on the received transaction data. Additionally, or alternatively, the ancillary data may comprise machine learning model parameters that are retrieved based on the contents of the transaction data. For example, machine learning models may have parameters that are specific to one or more of the user, merchant and issuing bank, and these parameters may be retrieved based on which of these is identified in the transaction data. For example, one or more of users, merchants, and issuing banks may have corresponding embeddings, which may comprise retrievable or mappable tensor representations for said entities. For example, each user or merchant may have a tensor representation (e.g., a floating-point vector of size 128-1024) that may either be retrieved from a database or other data storage or may be generated by an embedding layer, e.g. based on a user or merchant index." "[0186] Example multi-tenant solutions described herein may service every tenant of an administrator securely, with data segregation between different tenants.")

(v) for each identified tenant, continuously training the fraud detection ML model in the tenant's environment on the retrieved dataset having the detected one or more common features to classify transactions on the retrieved dataset corresponding to the tenant from the one or more tenants, wherein the fraud detection ML model continues training after each retrieved dataset; (Cooper US20240386119 at paras. 62-65, 185-187, 232-236) ("[0062] Certain exemplary embodiments are described herein which relate to a machine learning system for use in transaction processing. In certain embodiments, a machine learning system is applied in real-time, high-volume transaction processing pipelines to provide an indication of whether a transaction or entity matches previously observed and/or predicted patterns of activity or actions, e.g. an indication of whether a transaction or entity is “normal” or “anomalous”. The term “behavioural” is used herein to refer to this pattern of activity or actions. The indication may comprise a scalar value normalised within a predefined range (e.g., 0 to 1) that is then useable to prevent fraud and other misuse of payment systems. The machine learning systems may apply machine learning models that are updated as more transaction data is obtained, e.g. that are constantly trained based on new data, so as to reduce false positives and maintain accuracy of the output metric. The present examples may be particularly useful for preventing fraud" "[0236] As such, the first and second tenant ML model data 610, 612 is again segregated in that the ML model 830 being applied in respect of the second tenant 608 has access to the second tenant ML model data 612 and does not have access to the first tenant ML model data 610. In this example, the ML model 830 can use the tenant group ML model data 614 and the second tenant ML model data 612 to perform real-time anomaly detection associated with the second tenant 608.")

Cooper does not explicitly teach, however, Gold teaches: Initially, Examiner notes that Cooper teaches deploying the trained fraud detection ML model. (Cooper at paras. 87, 196, 199, 202). Furthermore, Gold teaches:

deploying the trained [fraud detection] ML model in a new target tenant system to classify transactions, wherein the new target tenant system has no training dataset and no feasible training thereon. (Gold US20190370833 at paras. 43-46) ([0044] In some embodiments, the churn prediction model generation engine 208 functions to generate churn prediction models 256 for tenants with no tenant data 120 and/or minimal tenant data 120. For example, a tenant may be new to the multi-tenant system 102, and may not have acquired a threshold number of examples of churn (e.g., 100 examples) to accurately predict churn. In some embodiments, the churn prediction model generation engine 208 may generate and/or leverage other datasets, such as anonymized datasets based on the tenant data 120 of one or more similar tenants, plans, and/or the like, to generate churn prediction models 256. In some embodiments, template datasets may be used. For example, template datasets may represent typical providers (e.g., internet providers).
These other datasets may be used to generate churn prediction models 256 until the tenant has acquired a threshold number of examples of churn and/or other required amount of information. For example, the new tenant may use the other datasets for three-months, and then transition to their own data.)

Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cooper and Gold, because it allows for an improved method that can reduce computing resource requirements and/or reduce the amount of processing time to make predictions in a multi-tenant system. (Gold at Abstract and paras. 3-17).

As per claim 2, Cooper does not explicitly teach, however, Gold teaches: Initially, Examiner notes that Cooper teaches training the trained fraud detection ML model. (Cooper at paras. 87, 196, 199, 202). Furthermore, Gold teaches:

wherein when the new target tenant system has accumulated a preconfigured amount of historical data, training the [fraud detection] ML model on the historical data. (Gold US20190370833 at paras. 43-46) ([0044] In some embodiments, the churn prediction model generation engine 208 functions to generate churn prediction models 256 for tenants with no tenant data 120 and/or minimal tenant data 120. For example, a tenant may be new to the multi-tenant system 102, and may not have acquired a threshold number of examples of churn (e.g., 100 examples) to accurately predict churn. In some embodiments, the churn prediction model generation engine 208 may generate and/or leverage other datasets, such as anonymized datasets based on the tenant data 120 of one or more similar tenants, plans, and/or the like, to generate churn prediction models 256. In some embodiments, template datasets may be used. For example, template datasets may represent typical providers (e.g., internet providers). These other datasets may be used to generate churn prediction models 256 until the tenant has acquired a threshold number of examples of churn and/or other required amount of information. For example, the new tenant may use the other datasets for three-months, and then transition to their own data.)

Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cooper and Gold, because it allows for an improved method that can reduce computing resource requirements and/or reduce the amount of processing time to make predictions in a multi-tenant system. (Gold at Abstract and paras. 3-17).

As per claim 7, Cooper explicitly teaches:

wherein the retrieved dataset having the detected one or more common features is of transactions from a preconfigured period. (Cooper US20240386119 at paras. 167-169) ("[0167] FIGS. 3A and 3B show examples of transaction data that may be processed by a machine learning system such as 160 or 210. FIG. 3A shows how transaction data may comprise a set of time-ordered records 300, where each record has a timestamp and comprises a plurality of transaction fields. In certain cases, transaction data may be grouped and/or filtered based on the timestamp. For example, FIG. 3A shows a partition of transaction data into current transaction data 310 that is associated with a current transaction and “older” or historical transaction data 320 that is within a predefined time range of the current transaction. The time range may be set as a hyperparameter of any machine learning system. Alternatively, the “older” or historical transaction data 320 may be set as a certain number of transactions. Mixtures of the two approaches are also possible.")

As per claim 8, Cooper explicitly teaches:

wherein the retrieved dataset having the detected one or more common features is a labeled dataset. (Cooper US20240386119 at paras. 167-169) ("[0168] FIG. 3B shows how transaction data 330 for a particular transaction may be stored in numeric form for processing by one or more machine learning models. For example, in FIG. 3B, transaction data has at least fields: transaction amount, timestamp (e.g., as a Unix epoch), transaction type (e.g., card payment or direct debit), product description or identifier (i.e., relating to items being purchased), merchant identifier, issuing bank identifier, a set of characters (e.g., Unicode characters within a field of predefined character length), country identifier etc. It should be noted that a wide variety of data types and formats may be received and pre-processed into appropriate numerical representations. In certain cases, originating transaction data, such as that generated by a client device and sent to merchant server 130 is pre-processed to convert alphanumeric data types to numeric data types for the application of the one or more machine learning models. Other fields present in the transaction data can include, but are not limited to, an account number (e.g., a credit card number), a location of where the transaction is occurring, and a manner (e.g., in person, over the phone, on a website) in which the transaction is executed." "[0169] FIG. 4 shows one example 400 of a machine learning system 402 that may be used to process transaction data. Machine learning system 402 may implement one or more of machine learning systems 160 and 210. The machine learning system 402 receives input data 410. The form of the input data 410 may depend on which machine learning model is being applied by the machine learning system 402. In a case where the machine learning system 402 is configured to perform fraud or anomaly detection in relation to a transaction, e.g. a transaction in progress as described above, the input data 410 may comprise transaction data such as 330 (i.e., data forming part of a data package for the transaction) as well as data derived from historical transaction data (such as 300 in FIG. 3A) and/or data derived from ancillary data (such as 148 in FIGS. 1A to 1C or 242 in FIGS. 2A and 2B).")

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cooper, U.S. Patent Application Publication Number 2024/0386119; in view of Gold, U.S. Patent Application Publication Number 2019/0370833; in view of Conort, U.S. Patent Application Publication Number 2022/0076164.

As per claim 3, Cooper and Gold do not explicitly teach, however, Conort teaches: Initially, Examiner notes that Cooper teaches detecting of one or more common features . . . by the fraud detection ML model. (Cooper at paras. 87, 196, 199, 202). Furthermore, Conort teaches:

wherein the detecting of one or more common features comprising: (i) running feature engineering and feature selection pipeline on each tenant dataset to yield features scores, wherein a feature score indicates a level of relevance of a feature to classification of objects by the fraud detection ML model; and (ii) identifying a preconfigured number of high scores features across each one or more tenants. (Conort US20220076164 at paras. 133-136) ([0134] Each feature can then be given a quality score based on the resulting feature impact values. In some examples, the quality score for each feature can be equal to the feature impact value for that feature. Alternatively or additionally, the quality score can be further adjusted based on a correlation between the feature and one or more other features. When a correlation between two features exceeds a predefined threshold, for example, the quality score for one or both of the features can be discounted or reduced (e.g., by 10%, 25%, 50%, or more).
[0135] Reducing quality scores in this manner can prevent multiple correlated features from receiving high quality scores and/or from being retained in the final set of features. In general, features having low quality scores can be removed and features having high quality scores can be retained. Features can be retained, for example, when a sum of the quality scores for the features exceeds a high proportion of a total of the quality scores (e.g. 99%). For example, features (e.g., derived features) having the highest quality scores and that contribute to some threshold amount of the total of the quality scores (e.g., 80%, 90%, 95%, 97%, or 99%) can be retained, while other features with lower quality scores can be removed.) Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gold and Conort, because it allows for an improved model training and deployment system that can automatically generate and detect features from multiple datasets that include historical data. (Conort at Abstract and paras. 2-7). Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Cooper, U.S. Patent Application Publication Number 2024/0386119; in view of Gold, U.S. Patent Application Publication Number 2019/0370833; in view of Conort, U.S. Patent Application Publication Number 2022/0076164; in view of Zheng, U.S. Patent Application Publication Number 2022/0351087. As per claim 4, Cooper, Gold, and Conort do not explicitly teach, however, Zheng explicitly teaches: wherein the features scores are yielded by an eXtreme Gradient Boosting (XGB) algorithm. (Zheng US20220351087 at paras. 38-40) ([0055] At block 802, the pre-processing system 100 receives a dataset including a plurality of values for training a machine learning model, where each of the plurality of values is associated with one of a plurality of features. 
At block 804, the pre-processing system 100 determines, for each of the plurality of features, one or more characteristics of the values associated with the feature. For example, the one or more characteristics may include a data type, a number of non-null values, a number of distinct values, a predominant value, a number of instances of the predominant value, a standard deviation, a mean value, a minimum value, a maximum value, one or more percentile thresholds, a ROC AUC score, or an XGBoost score.)

Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cooper, Gold, Conort, and Zheng, because it allows for improvements to the performance of machine learning systems, and more specifically to reducing the cost of training a machine learning model as well as reducing the size and complexity, while also improving the accuracy, of the resulting model. (Zheng at Abstract and paras. 2-8, 21-23).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cooper, U.S. Patent Application Publication Number 2024/0386119; in view of Gold, U.S. Patent Application Publication Number 2019/0370833; in view of Zheng, U.S. Patent Application Publication Number 2022/0351087.

As per claim 6, Cooper and Gold do not explicitly teach, however, Zheng explicitly teaches:

Initially, Examiner notes that Cooper teaches training of the fraud detection ML model. (Cooper at paras. 87, 196, 199, 202). Furthermore, Zheng teaches:

wherein the training of the [fraud detection] ML model is performed by operating an eXtreme Gradient Boosting (XGB) algorithm. (Zheng US20220351087 at paras. 54-56) ([0055] At block 802, the pre-processing system 100 receives a dataset including a plurality of values for training a machine learning model, where each of the plurality of values is associated with one of a plurality of features.
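The per-feature characteristics Zheng enumerates (non-null count, distinct count, predominant value, mean, standard deviation, and so on) amount to a simple profiling pass over each feature's values. A minimal sketch follows; `profile_feature` is a hypothetical helper name, and only a subset of the listed characteristics is computed (data type, percentile thresholds, ROC AUC, and XGBoost scores are omitted).

```python
import math
from collections import Counter

def profile_feature(values):
    """Summarize one feature's values with characteristics like those
    listed in the quoted passage (illustrative subset only)."""
    non_null = [v for v in values if v is not None]
    counts = Counter(non_null)
    # Most frequent value and how often it occurs.
    predominant, predominant_n = counts.most_common(1)[0]
    nums = [v for v in non_null if isinstance(v, (int, float))]
    mean = sum(nums) / len(nums) if nums else None
    # Population standard deviation over the numeric values.
    std = (math.sqrt(sum((x - mean) ** 2 for x in nums) / len(nums))
           if nums else None)
    return {
        "n_non_null": len(non_null),
        "n_distinct": len(counts),
        "predominant_value": predominant,
        "predominant_count": predominant_n,
        "mean": mean,
        "std_dev": std,
        "min": min(nums) if nums else None,
        "max": max(nums) if nums else None,
    }
```

For example, profiling the values [1, 1, 2, None, 3] yields 4 non-null values, 3 distinct values, a predominant value of 1 occurring twice, and a mean of 1.75.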
At block 804, the pre-processing system 100 determines, for each of the plurality of features, one or more characteristics of the values associated with the feature. For example, the one or more characteristics may include a data type, a number of non-null values, a number of distinct values, a predominant value, a number of instances of the predominant value, a standard deviation, a mean value, a minimum value, a maximum value, one or more percentile thresholds, a ROC AUC score, or an XGBoost score.)

Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cooper, Gold, and Zheng, because it allows for improvements to the performance of machine learning systems, and more specifically to reducing the cost of training a machine learning model as well as reducing the size and complexity, while also improving the accuracy, of the resulting model. (Zheng at Abstract and paras. 2-8, 21-23).

Response to Arguments

Applicant’s arguments filed on October 10, 2025 have been fully considered but are not persuasive for the following reasons:

With respect to Applicant’s arguments as to the § 101 rejections for now-pending claims 1-4 and 6-8, Examiner notes the following:

Applicant argues that the claims are not directed to an abstract idea. Examiner disagrees, however, and notes that the claim as a whole recites a method that, under its broadest reasonable interpretation, covers collecting, analyzing, and transmitting data to facilitate fraud determination as to financial institution transaction data. This is a fundamental economic practice of a financial transaction; a commercial interaction, such as for business relations; and managing personal behavior or relationships or interactions between people, which are certain methods of organizing human activity.
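For background on the XGB limitation addressed in claims 4 and 6 above: XGBoost is a gradient-boosted tree ensemble, and the core boosting loop (repeatedly fit a weak learner to the current residuals, then add a damped copy to the ensemble) can be sketched with one-feature regression stumps. This is a toy illustration of gradient boosting generally, not XGBoost itself, which adds second-order gradients, regularization, and multi-feature trees; the function names and hyperparameter values are assumptions.

```python
def fit_stump(x, residual):
    """Find the threshold split on x that best fits the residual (least squares)."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= t]
        right = [r for xi, r in zip(x, residual) if xi > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = sum((r - (lmean if xi <= t else rmean)) ** 2
                  for xi, r in zip(x, residual))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

def boost(x, y, n_rounds=20, lr=0.5):
    """Gradient boosting on squared error: each round fits a stump to the
    current residuals and adds a learning-rate-damped copy to the ensemble."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_rounds):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residual)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

# Example: learn a step function from six labeled points.
model = boost([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1])
```

Because each round only removes a fraction of the remaining residual, the ensemble's predictions converge geometrically toward the targets: after 20 rounds the example model outputs values very close to 0 below the step and 1 above it.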
Furthermore, the claims recite a fraud detection Machine Learning (ML) model or eXtreme Gradient Boosting (XGB) algorithm. This is a mathematical calculation. Thus, the claims recite an abstract idea.

Applicant next argues that the claims are integrated into a practical application. Examiner disagrees, however, and notes that each of the additional elements of the computer system - “Computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment, said computerized-method comprising:”, “database”, “object storage service”, “cloud service provider”, and “target tenant system”, invoked to perform the steps of “building”, “detecting”, “training”, and “classifying” - is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The claims at issue cover collecting, analyzing, and transmitting data to facilitate fraud determination as to financial institution transaction data. The claims invoke the “Computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment, said computerized-method comprising:”, “database”, “object storage service”, “cloud service provider”, and “target tenant system”, to perform the steps of “building”, “detecting”, “training”, and “classifying”, merely as tools to execute the abstract idea. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a certain method of organizing human activity, mental process, or mathematical calculation), does not integrate a judicial exception into a practical application.
(MPEP 2106.05(f)).

Additionally, Applicant argues that “There are situations where a system that receives a service from a cloud service provider, don't have enough historical data that can be used as a dataset for building and training an adjusted Machine Learning (ML) model. For example, an existing Financial Institution (FI) client of a software supplier, that would like to add a new product or a new feature. Therefore, it may take a long period of time to create an adjusted ML model on the client data, because it takes 6-9 months for client data to get mature. Accordingly, there is a need for a technical solution for generating a day one out of the box classification ML model that can be used by new tenants of a cloud service provider, that the ML model was never trained on a new tenant related dataset before.”

Examiner notes that the stated problems of data sufficiency and the corresponding time delay in creating an adjusted ML model are not technical problems, and the claimed solution is not a technical solution. In the claim, the solution of collecting, analyzing, and transmitting data to facilitate fraud determination as to financial institution transaction data, and the use of a machine learning model for fraud determination, is part of the abstract idea, as it merely involves data collection, manipulation, and analysis, and the process could be completed manually, mentally, or by pen and paper.

Finally, the Applicant argues that the claims are directed to significantly more than the abstract idea. Examiner disagrees, however, and notes that, as explained above in the instant rejection under 35 U.S.C. § 101, the additional elements do not amount to an inventive concept.
The additional elements of the computer system - a “Computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment, said computerized-method comprising:”, “database”, “object storage service”, “cloud service provider”, and “target tenant system” - are merely generic computer components performing their well-known basic functions of collecting, analyzing, and transmitting data to facilitate fraud determination as to financial institution transaction data and the use of a machine learning model for fraud determination. Per the specification, the recited computer elements and machine learning steps and model are described only at a high level of generality. (See Spec. at paras. [0028], [0032], [0082]–[0083]). In view of the specification, the computer elements and machine learning are merely being applied to the abstract idea. The other limitations, which simply support the abstract idea, correspond to insignificant extra-solution activity that does not transform the abstract idea into patent-eligible subject matter. Also, the functionality here is already present in the recited hardware, which is merely routine and conventional. Collecting, analyzing, and transmitting data is routine and conventional. There is no technological problem or solution identified. This is merely a business solution to transfer data between devices.
(MPEP 2106.05(f)).

With respect to Applicant’s arguments as to the § 103 rejections for now-pending claims 1-4 and 6-8, Examiner notes the following:

Applicant argues that “Cooper teaches machine learning systems that address patterns of behavior for fraud prevention in multi-tenant environments with data segregation, [but] its primary focus is on access control and data segregation within a hierarchical structure.” Examiner notes that Cooper expressly teaches applying machine learning models to transaction data for fraud detection and continuously updating such models as additional transaction data is obtained. (See Cooper at paras. 62-65, 168-170, 185-187, 197-199). Although Cooper discloses access control mechanisms, Cooper also explicitly discloses training, updating, and applying machine learning models in transaction processing functionality.

Applicant argues that Cooper’s “objective is permission management that is who may read a particular model and not how the model is built.” Examiner notes that Cooper explicitly discloses how the model is built, not merely who may read a model. Cooper explicitly discloses that machine learning models are applied in real-time, high-volume transaction processing pipelines, and the machine learning systems may apply machine learning models that are updated as more transaction data is obtained, e.g., that are constantly trained based on new data, so as to reduce false positives and maintain accuracy of the output metric. (See Cooper at paras. 62-65, 168-170, 185-187, 197-199).

Applicant argues that “the current application provides a computerized-method for generating a fraud detection Machine Learning (ML) model, in a cloud-based environment that has ‘no training requirement’ in a new target tenant system, achieved by leveraging and combining insights from isolated datasets without explicit data sharing across different environments.” Examiner notes that Applicant argues limitations that are not claimed.
The claims do not recite “no training requirement”, “leveraging and combining insights”, or “collective-intelligence model that learns”. Similarly, the claims do not recite “1. Sequential, in-tenant training that moves from tenant A to B without data transfer; 2. Detection of common features before each training hop; and 3. Cold-start deployment in a tenant site that possesses no training data.” Applicant’s argument relies on importing limitations from the specification into the claims. It is improper to import claim limitations from the specification. (See MPEP 2111).

Finally, Applicant argues that Cooper and Gold fail to teach the limitation of “deploying the trained fraud detection ML model in a new target tenant system to classify transactions, wherein the new target tenant system has no training dataset and no feasible training thereon.” In particular, Applicant argues that “Cooper presumes each tenant already possesses or is granted model parameters; it offers no remedy for the zero-data tenant problem which is solved by current application.” And “Gold does not teach or suggest a continuous learning approach, with training taking place in one shot using one environment and one dataset. Gold teaches a process where a model is generated and then applied, but it lacks any indication of the model incrementally learning from new/separate datasets, which is a key aspect of the continuous learning employed in current application.”

Examiner notes that Applicant argues the references individually rather than in combination. The combination of Cooper and Gold teaches the limitation of deploying the trained fraud detection ML model in a new target tenant system to classify transactions, wherein the new target tenant system has no training dataset and no feasible training thereon. As explained, Cooper teaches applying machine learning models to transaction data for fraud detection and continuously updating such models as additional transaction data is obtained.
(See Cooper at paras. 62-65, 168-170, 185-187, 197-199). Gold explicitly teaches generating prediction models “for tenants with no tenant data 120 and/or minimal tenant data”, including leveraging anonymized datasets, template datasets, etc., until sufficient data is acquired. (See Gold US20190370833 at paras. 43-46). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cooper and Gold, because it allows for an improved method that can reduce computing resource requirements and/or reduce the amount of processing time to make predictions in a multi-tenant system. (Gold at Abstract and paras. 3-17).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is available for review on Form PTO-892, Notice of References Cited.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERRITT J HASBROUCK whose telephone number is (571) 272-3109.
The examiner can normally be reached M-F 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christine Tran, can be reached at 571-272-8103. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MERRITT J HASBROUCK/
Examiner, Art Unit 3695

/CHRISTINE M Tran/
Supervisory Patent Examiner, Art Unit 3695

Prosecution Timeline

Dec 06, 2022
Application Filed
Apr 05, 2024
Non-Final Rejection — §101, §103
Jun 24, 2024
Response Filed
Oct 01, 2024
Final Rejection — §101, §103
Nov 18, 2024
Interview Requested
Dec 10, 2024
Applicant Interview (Telephonic)
Dec 11, 2024
Examiner Interview Summary
Dec 12, 2024
Request for Continued Examination
Dec 13, 2024
Response after Non-Final Action
May 17, 2025
Non-Final Rejection — §101, §103
Oct 10, 2025
Response Filed
Dec 15, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12299690
Systems and methods for tracking, predicting, and mitigating advanced persistent threats in networks
2y 5m to grant Granted May 13, 2025
Patent 12141784
SYSTEM FOR WHEELCHAIR-BASED NEAR FIELD COMMUNICATION (NFC) PAYMENT EXTENSION AND STANDARD
2y 5m to grant Granted Nov 12, 2024
Patent 12112369
TRANSMITTING PROACTIVE NOTIFICATIONS BASED ON MACHINE LEARNING MODEL PREDICTIONS
2y 5m to grant Granted Oct 08, 2024
Patent 11887102
TEMPORARY VIRTUAL PAYMENT CARD
2y 5m to grant Granted Jan 30, 2024
Patent 11870857
USER ACCOUNT MIGRATION BETWEEN PLATFORMS
2y 5m to grant Granted Jan 09, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
11%
Grant Probability
19%
With Interview (+8.1%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
