Prosecution Insights
Last updated: April 19, 2026

Application No.: 17/395,014
Title: Federated Machine Learning Computer System Architecture
Status: Final Rejection (§101, §103)
Filed: Aug 05, 2021
Examiner: KAPOOR, DEVAN
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Paypal Inc.
OA Round: 4 (Final)

Grant Probability: 11% (At Risk)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 28%
Examiner Intelligence

Career Allow Rate: 11% (1 granted / 9 resolved; -43.9% vs TC avg)
Interview Lift: +16.7% among resolved cases with interview
Avg Prosecution: 3y 3m
Currently Pending: 33
Total Applications: 42 (across all art units)

Statute-Specific Performance

§101: 38.1% (-1.9% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates; based on career data from 9 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

This action is responsive to the application filed on 11/06/2025. Claims 1-7 and 15-27 are pending and have been examined. This action is Non-final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Response to Arguments

Argument 1: The applicant argues that the claims are not directed to a judicial exception under Step 2A Prong 1 because they recite a computer-centric, distributed federated machine learning architecture that cannot be performed in the human mind and does not merely recite a mathematical concept. The applicant asserts that the claims require multiple computing systems, including a server computer system, a remote computer system, and a user device, operating across local area networks and wide area networks, where different portions of a federated machine learning model are executed at different locations using private user information that is intentionally restricted from transmission. According to the applicant, generating remote scores, device scores, and server scores using different model portions deployed across distinct network boundaries is not a mental process, as a human cannot practically perform distributed inference, enforce network-level data confinement, or coordinate score generation across multiple computing systems. The applicant further contends that the claims do not recite a mathematical formula in the abstract, but instead recite how machine learning models are partitioned, deployed, and executed in a specific computing environment, such that the claims are directed to a technological solution rooted in computer systems rather than an abstract idea.
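The distributed data flow that this argument describes can be pictured with a minimal sketch. All function names and the toy "model portions" below are hypothetical illustrations, not the application's actual models: remote and device scores are computed where the private data lives, and only the scores, never the raw private information, cross the WAN to the server for the grant decision.

```python
# Toy sketch of the claimed three-portion federated scoring flow.
# The "model portions" are placeholder functions, not real trained models.

def remote_portion(private_remote_info: list[float]) -> float:
    # First model portion: runs on the remote (edge) system inside the LAN.
    return sum(private_remote_info) / len(private_remote_info)

def device_portion(private_device_info: list[float]) -> float:
    # Second model portion: runs on the user device, in the same LAN.
    return max(private_device_info)

def server_portion(evaluation_factors: list[float]) -> float:
    # Third model portion: runs at the server, using only non-private factors.
    return sum(evaluation_factors)

def grant_decision(remote_score: float, device_score: float,
                   server_score: float, threshold: float = 1.5) -> bool:
    # The server aggregates the three scores; raw private data never left the LAN.
    return (remote_score + device_score + server_score) / 3.0 >= threshold

# Only the scores travel over the WAN to the server:
r = remote_portion([0.9, 1.1])   # computed in the LAN -> 1.0
d = device_portion([0.4, 2.0])   # computed in the LAN -> 2.0
s = server_portion([0.5, 1.0])   # computed at the server -> 1.5
print(grant_decision(r, d, s))   # prints True
```

The point of the sketch is the placement, not the math: each portion could be an arbitrary model, so long as only its output score is transmitted outside the local network.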
Response to Argument 1: The examiner has considered the argument set forth above but does not find it persuasive. While Applicant argues that the claims are not directed to a judicial exception because they recite a distributed federated machine learning architecture involving multiple computing systems, networks, and restricted data transmission, this argument addresses the implementation environment rather than the focus of the claims. As explained in the rejection, the claims are directed to analyzing information to generate scores and determining whether to grant a transaction request based on those scores. These activities constitute mental processes because they involve evaluation, comparison, and decision making, which can be performed in the human mind, even if carried out by a computer. The recitation of multiple computing systems, federated model portions, and LAN or WAN communication merely describes where the abstract idea is performed and does not change the nature of the idea itself. Additionally, the claims do not recite a specific technical improvement to machine learning technology, but instead use machine learning as a tool to perform abstract data analysis. Accordingly, the claims remain directed to an abstract idea under Step 2A Prong 1.

Argument 2: The applicant argues that even if the claims were found to recite an abstract idea, they are integrated into a practical application under Step 2A Prong 2 and provide significantly more under Step 2B because they solve specific technical problems in federated authentication systems, namely user data privacy, network security, and scalable risk evaluation. The applicant asserts that the claimed architecture improves the functioning of computer systems by preventing sensitive private user information from being transmitted outside a local network, while still enabling centralized decision-making based on aggregated scores rather than raw data.
According to the applicant, the use of separately trained and deployed model portions, coupled with LAN and WAN aware data handling, results in a system that improves security and privacy without degrading authentication accuracy, an improvement that is not achievable using conventional centralized or generic federated learning techniques. The applicant further analogizes the claims to computer network specific solutions recognized as patent eligible, arguing that the problem addressed arises only in networked computer systems and the claimed solution is necessarily rooted in computer technology rather than a business or mental process.

Response to Argument 2: The examiner has considered the argument set forth above but does not find it persuasive. Applicant asserts that the claims are integrated into a practical application and provide significantly more because they address privacy, security, and scalability in federated authentication systems. However, the additional claim elements do not meaningfully limit the abstract idea or improve the functioning of a computer. As explained in the rejection, limitations such as receiving transaction information, receiving scores, restricting transmission of private user information, and specifying LAN or WAN communication are directed to data gathering, data transmission, and environmental constraints, which are insignificant extra-solution activities. Preventing transmission of private data does not transform the abstract process of evaluating information and making a determination into a practical application. Similarly, aggregating scores rather than raw data does not alter the abstract nature of the claimed decision-making process. The claims do not recite a specific technical solution to a computer-related problem, nor do they improve computer functionality itself.
When considered as an ordered combination, the claim elements reflect well-understood, routine, and conventional activities in the field of machine learning and data processing and therefore do not provide significantly more under Step 2B. The rejection is maintained.

Argument 3: The applicant argues that independent claim 1, and claims analogous to claim 1, are not rendered obvious by Yang alone or in view of Liu because the cited references fail to teach or suggest the claimed three-portion federated machine learning architecture in which a server computer system, a remote computer system, and a user device each execute different portions of a federated model with different data access permissions. The applicant contends that Yang is directed to collaborative or vertical federated learning frameworks in which multiple parties jointly train or infer using encrypted gradients or shared intermediate values, but does not disclose or suggest the claimed arrangement in which a server computer system generates a server score using a server-side model portion while separately receiving a remote score and a device score generated using different model portions deployed at different network locations. The applicant further argues that Yang does not disclose or suggest that the remote computer system and the user device are located within the same local area network while the server computer system is coupled via a wide area network, nor does Yang teach preventing transmission of private user information outside the local area network. According to the applicant, these architectural and data flow limitations, which are recited in claim 1 and carried through dependent claims such as claims 3 and 5, are not taught or suggested by Yang and are not addressed by Liu.
Response to Argument 3: The examiner has considered the argument set forth above and acknowledges that, in light of the amendments and newly added claims, the obviousness mapping has been updated to more explicitly address the claimed distribution of model portions, network placement, and data access restrictions. However, despite these changes, the amended claims remain unpatentable over Yang in view of Yuan. As set forth in the current mapping, Yang teaches generating multiple scores for a transaction request using distributed model parameters held by different parties, including generating intermediate scores at non-server entities and aggregating those scores at a central server to determine whether to grant a transaction request. Yuan further teaches a hierarchical federated learning architecture in which user devices and edge-related components operate within a local area network, while a central server communicates over a wide area network, and in which private user data is intentionally not transmitted outside the local environment. The applicant’s characterization of Yang as limited to encrypted gradients or training-only collaboration is not persuasive, as Yang expressly discloses inference-time collaboration where different parties retain model portions and compute values that are transmitted to a central entity for evaluation. When combined with Yuan’s explicit LAN-WAN orchestration and privacy-preserving data locality teachings, the cited references collectively teach or suggest the claimed three-portion federated architecture, the LAN-based placement of the remote computer system and user device, the WAN coupling to the server, and the prevention of private user information transmission outside the LAN. Accordingly, the architectural and data flow limitations recited in claim 1 and carried through dependent claims such as claims 3 and 5 are taught or suggested by the combination, and the amendments do not overcome the rejection. 
Argument 4: The applicant argues that Liu does not remedy the deficiencies of Yang, and that the examiner’s rejection of claims 1, 3-5, 8, 11, and 12 relies on an improper interpretation of Liu and impermissible hindsight. The applicant asserts that Liu merely discloses distributing a global model to clients in a conventional federated learning framework and does not teach or suggest partitioning a model into multiple distinct portions that are separately deployed to a remote edge system and a user device, as required by claim 1. The applicant further argues that claim-specific limitations, such as those recited in claim 3 relating to the server computer system not receiving private user information, and in claim 5 relating to the remote computer system being an edge server that operates within the same local area network as the user device, are not taught or suggested by Liu. With respect to dependent claims that recite authentication-specific features, such as claims directed to possession and inherence authentication factors and threshold-based determinations, the applicant argues that neither Yang nor Liu discloses or suggests generating separate authentication scores using different model portions while restricting the flow of private user data as claimed. According to the applicant, the examiner has not provided a sufficient rationale explaining why a person of ordinary skill in the art would have been motivated to combine Yang and Liu in a manner that yields the claimed architecture and claim-specific limitations, absent hindsight reconstruction.

Response to Argument 4: The examiner has considered the argument set forth above and acknowledges that, in light of the amendments and newly added claims, the obviousness mapping has been updated to more explicitly address the claimed distribution of model portions, network placement, and data access restrictions. However, despite these changes, the amended claims remain unpatentable over Yang in view of Yuan.
As set forth in the current mapping, Yang teaches generating multiple scores for a transaction request using distributed model parameters held by different parties, including generating intermediate scores at non-server entities and aggregating those scores at a central server to determine whether to grant a transaction request. Yuan further teaches a hierarchical federated learning architecture in which user devices and edge-related components operate within a local area network, while a central server communicates over a wide area network, and in which private user data is intentionally not transmitted outside the local environment. The applicant’s characterization of Yang as limited to encrypted gradients or training-only collaboration is not persuasive, as Yang expressly discloses inference-time collaboration where different parties retain model portions and compute values that are transmitted to a central entity for evaluation. When combined with Yuan’s explicit LAN-WAN orchestration and privacy-preserving data locality teachings, the cited references collectively teach or suggest the claimed three-portion federated architecture, the LAN-based placement of the remote computer system and user device, the WAN coupling to the server, and the prevention of private user information transmission outside the LAN. Accordingly, the architectural and data flow limitations recited in claim 1 and carried through dependent claims such as claims 3 and 5 are taught or suggested by the combination, and the amendments do not overcome the rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 and 15-27 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1: Claim 1 is directed to a method, which is one of the four statutory categories of invention. Therefore, claim 1 satisfies Step 1.

Step 2A Prong 1:
(a) “generating … a server score based on the set of transaction request evaluation factors” -- This limitation is directed to analyzing information to generate a score, which constitutes a mental process, as it involves evaluation and judgment that can be performed in the human mind.
(b) “based on the remote score, the device score, and the server score, determining … whether to grant the particular subsequent transaction request” -- This limitation is directed to decision-making based on evaluated information, which constitutes a mental process because it involves comparing evaluation results and making a determination.
(e) “first, second, and third portions of a federated machine learning model … trained at a training computer system using a dataset of previous transaction requests” -- This limitation is directed to implementing the judicial exception using a machine learning model, which involves mathematical concepts and data analysis. Training and deploying a machine learning model is well-understood, routine, and conventional and does not improve the functioning of the computer itself.

Step 2A Prong 2 and Step 2B:
(a) “receiving … an indication of a particular subsequent transaction request … and a set of transaction request evaluation factors” -- This limitation is directed to receiving and collecting information, which constitutes mere data gathering and insignificant extra-solution activity that does not integrate the judicial exception into a practical application, nor provide significantly more (see MPEP 2106.05(g) and MPEP 2106.05(d)(II)).
(b) “receiving … a remote score generated at the remote computer system” -- This limitation is directed to receiving previously generated information, which is a routine data receipt operation and constitutes insignificant extra-solution activity that does not integrate the judicial exception into a practical application (see MPEP 2106.05(g)).
(c) “receiving … a device score generated at the user device” -- This limitation is likewise directed to receiving information and constitutes insignificant extra-solution activity that does not provide significantly more than the judicial exception (see MPEP 2106.05(g)).
(d) “a server computer system,” “a remote computer system,” “by the computer device,” “at the server computer system from the remote system,” and “a user device” -- These limitations are directed to generic computer components recited at a high level of generality. They amount to no more than instructions to apply the judicial exception using generic computer technology and therefore do not integrate the judicial exception into a practical application, nor provide significantly more (see MPEP 2106.05(f)).
(e) “the first portion is located at the remote computer system,” “the second portion is located at the user device,” and “the third portion is located at the server computer system” -- These limitations merely specify where software is executed, which constitutes a field-of-use or environmental limitation that does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)).
(f) “the remote computer system and the user device are located in the same local area network” -- This limitation defines a network environment and does not reflect an improvement to computer functionality.
(g) “outside of which the first and second private user information is not transmitted” -- This limitation is directed to a data transmission restriction, which constitutes insignificant extra-solution activity related to data handling and does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(g) and MPEP 2106.05(d)(II)).
(h) “the server computer system is coupled to the remote computer system via a wide area network” -- This limitation recites conventional network communication, which is well-understood, routine, and conventional activity; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(g) and MPEP 2106.05(d)(II)).

Thus, claim 1 is not patent eligible. Claims 15 and 21 are analogous to claim 1 (aside from claim type), and thus the rejection applies to them as well.

Regarding claim 2: Step 1: The claim is directed to a method, which is one of the four statutory categories of invention. Therefore, claim 2 satisfies Step 1.

Step 2A Prong 1: “preparing … using the indication of the revised version of the first portion” -- This limitation recites a mental process of updating/revising a model, which can be performed in the human mind or with pen and paper.

Step 2A Prong 2 and Step 2B: The claim recites the following additional elements:
“by the server computer system” -- This limitation merely applies the judicial exception using a generic computer, which does not integrate the exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)).
“receiving, at the server computer system from the remote computer system, an indication of a revised version of the first portion of the federated machine learning model” -- This limitation recites receiving an indication of a revised version of a portion of the federated ML model, which amounts to mere data gathering and data outputting, forms of insignificant extra-solution activity that do not integrate the judicial exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, receiving data from one computer system at another is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”).
“an updated federated machine learning model, wherein the preparing includes preparing an updated remote portion of the federated machine learning model” -- This limitation recites updating information of a machine learning model. Updating data/information is a form of mere data gathering (selecting a particular data source or type of data to be manipulated), which is insignificant extra-solution activity and does not integrate the judicial exception into a practical application (see MPEP 2106.05(g)(v)). Furthermore, under Step 2B, updating selected/gathered data for the federated ML model is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”).
“sending, from the server computer system to a different, second remote computer system, the updated first portion of the federated machine learning model” -- This limitation is directed to transmitting and receiving data, which are insignificant extra-solution activities that do not integrate the abstract idea into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, sending/receiving data over a network (from the server computer system to the remote computer system) is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”).

Therefore, claim 2 is not patent eligible.

Regarding claim 3: Step 1: The claim is directed to a method, which is one of the four statutory categories of invention. Therefore, claim 3 satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: “The method of claim 1, wherein the server computer system does not receive the first private user information and the second private user information” -- This limitation further restricts the server computer system first recited in claim 1 from receiving the first/second private user information. The limitation amounts to no more than merely further limiting the field of use/environment, and thus it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)).

Therefore, claim 3 is not patent eligible.

Regarding claim 4: Step 1: The claim is directed to a method, which is one of the four statutory categories of invention. Therefore, claim 4 satisfies Step 1.
Claim 4 is directed to having a set of factors that includes personally identifiable information (PII), now amended to include just the second private user information about a user using the device. This limitation is directed to a mental process, as identifying and categorizing information as personally identifiable can be performed in the human mind. Thus, under Step 2A Prong 1, claim 4 recites an abstract idea. There are no additional elements to be evaluated under Step 2A Prong 2 and Step 2B. Therefore, claim 4 is not patent eligible.

Regarding claim 5: Step 1: The claim is directed to a method, which is one of the four statutory categories. Therefore, claim 5 satisfies Step 1.

Step 2A Prong 1: The claim does not recite any limitations under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B:
“wherein the remote computer system is an edge server” -- This limitation specifies the type of computer system. This element is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component, which does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)).
“wherein the first private user information and the second private user information are collected by a user device” -- This limitation specifies the source of data. This element amounts to selecting a particular data source or type of data to be manipulated, which is insignificant extra-solution activity and does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(g)).
Furthermore, under Step 2B, specifying data to be manipulated is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”). Therefore, claim 5 is not patent eligible.

Regarding claim 6: Step 1: The claim is directed to a method, which is one of the four statutory categories. Therefore, claim 6 satisfies Step 1.

Step 2A Prong 1: “wherein the remote score and server score are indicative of whether a second authentication factor for the user login request has been established” -- This limitation is directed to a mental process, as determining whether an authentication factor has been established based on scores can be performed in the human mind.

Step 2A Prong 2 and Step 2B: The claim further recites:
(a) “wherein the particular transaction request is a user login request” -- This limitation specifies the type of transaction request. This element amounts to selecting a particular data source or type of data to be manipulated, which is insignificant extra-solution activity and does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, specifying/categorizing data to be manipulated is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”).
(b) “wherein the particular transaction request includes a first authentication factor for the user login request” -- This limitation specifies the content of the transaction request.
This element amounts to selecting a particular data source or type of data to be manipulated, which is insignificant extra-solution activity and does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, specifying data to be manipulated is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”). Therefore, claim 6 is not patent eligible.

Regarding claim 7: Step 1: The claim is directed to a method, which is one of the four statutory categories. Therefore, claim 7 satisfies Step 1.

Step 2A Prong 1: (a) “determining, based on the remote score, the device score, and the server scores, that a risk of granting the particular transaction request is above a risk threshold” -- This limitation is directed to a mental process, as comparing a risk level to a threshold based on gathered data can be performed in the human mind.

Step 2A Prong 2 and Step 2B: The claim further recites: (a) “wherein determining whether to grant the particular transaction request includes requesting additional authentication information” -- This limitation is directed to the extra-solution activity of requesting additional information based on the determination. This element amounts to insignificant extra-solution activity that does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, merely granting access to data to then be manipulated is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”). Therefore, claim 7 is not patent eligible.
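The claim 7 flow discussed above, comparing aggregated scores to a risk threshold and requesting additional authentication when the threshold is exceeded, can be sketched as follows. The names, the use of the maximum score as the risk measure, and the threshold value are all hypothetical illustrations, not the application's implementation.

```python
# Toy sketch of a threshold-gated step-up decision.
# Risk formula and names are placeholder assumptions for illustration only.

def assess_request(remote_score: float, device_score: float,
                   server_score: float, risk_threshold: float = 0.7) -> str:
    # Assumption for this toy: higher scores indicate higher risk,
    # and the worst (highest) of the three scores drives the decision.
    risk = max(remote_score, device_score, server_score)
    if risk > risk_threshold:
        # Risk above threshold: the decision includes requesting
        # additional authentication information (a step-up).
        return "request_additional_authentication"
    return "grant"

print(assess_request(0.2, 0.9, 0.3))  # prints request_additional_authentication
print(assess_request(0.2, 0.4, 0.3))  # prints grant
```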
Regarding claim 16: Step 1: The claim is directed to a non-transitory computer-readable medium having program instructions, which is one of the four statutory categories. Therefore, claim 16 satisfies Step 1.

Step 2A Prong 1: (a) “wherein the first portion and the second portion are useable to generate the one or more edge server scores and the one or more user device scores by analyzing the first private user data and the second private user data” -- This limitation covers performance of the limitation in the mind or with pen and paper. A person could mentally analyze a set of factors and generate scores.

Step 2A Prong 2 and Step 2B: The claim further recites: (a) “without sending the first private user data and the second private user data to the server computer system” -- This limitation merely indicates a field of use or technological environment in which to apply the judicial exception. It does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception (see MPEP 2106.05(h)). Therefore, claim 16 is not patent eligible.

Regarding claim 17: Step 1: The claim is directed to a non-transitory computer-readable medium having program instructions, which is one of the four statutory categories. Therefore, claim 17 satisfies Step 1.

Step 2A Prong 1: “wherein the first portion of the federated machine learning model is useable to generate the one or more edge server scores for a particular transaction request from a particular user device based on: analyzing a set of factors for the particular transaction request collected by the particular user device … and analyzing the set of factors for other transaction requests collected by the other user devices” -- This limitation is directed to analyzing information and generating scores based on that analysis, which constitutes a mental process, as a person could mentally analyze sets of factors associated with transaction requests and determine corresponding scores.
Step 2A Prong 2 and Step 2B: “sending … the second portion of the federated machine learning model to a plurality of other user devices, wherein the user device and the other user devices are associated with a particular entity” -- This limitation is directed to transmitting software or data between computing devices, which constitutes mere data transmission and is considered insignificant extra-solution activity. This limitation does not integrate the judicial exception into a practical application or provide significantly more than the abstract idea (see MPEP 2106.05(g)). Furthermore, under Step 2B, transmitting model portions to other user devices is well-understood, routine, and conventional activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”). Therefore, claim 17 is not patent eligible.

Regarding claim 18: Step 1: The claim is directed to a non-transitory computer-readable medium having program instructions, which is one of the four statutory categories. Therefore, claim 18 satisfies Step 1.

Step 2A Prong 1: There are no elements evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: (a) “wherein the response to the particular transaction request is sending a step-up challenge” -- This limitation merely specifies a type of response, which amounts to insignificant extra-solution activity. It does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, selecting data to be manipulated is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II)).
(b) “sending to the user device, the step-up challenge” and “receiving, from the user device, a solution to the step-up challenge” - This limitation amounts to mere data transmission, which is insignificant extra-solution activity. It does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, this limitation is well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”). Therefore, claim 18 is not patent eligible.

Regarding claim 19: Step 1: The claim is directed to a non-transitory computer-readable medium having program instructions, which is one of the four statutory categories. Therefore, claim 19 satisfies step 1.

Step 2A Prong 1: There are no elements evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: The claim recites an additional element: (a) “wherein the step-up challenge includes a request for information sent to a user of the user device via a different communication channel than a communication channel used by the server computer system and the user device to communicate” - This limitation merely specifies the type and method of sending the step-up challenge, which amounts to insignificant extra-solution activity. It does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of selecting data to be manipulated is a well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II)). Therefore, claim 19 is not patent eligible.

Regarding claim 20: Step 1: The claim is directed to a non-transitory computer-readable medium having program instructions, which is one of the four statutory categories.
Therefore, claim 20 satisfies step 1.

Step 2A Prong 1: There are no elements evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: (a) “wherein sending the first portion to the edge server includes using the wide-area network; and wherein receiving the one or more edge server scores includes using the wide-area network” - This limitation merely specifies the type of network used for data transmission, which amounts to insignificant extra-solution activity. It does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception (see MPEP 2106.05(g)). Furthermore, under Step 2B, this limitation is well-understood, routine, and conventional (WURC) activity in the field of machine learning and data processing (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”). Therefore, claim 20 is not patent eligible.

Regarding claim 22, Step 1: The claim depends from claim 21 and is directed to a system, which falls under the machine category. The claim satisfies Step 1.

Step 2A Prong 1: “generate remote scores that are indicative of a possession authentication factor based on device information about the user device” -- Evaluating device information about the user device to determine a possession authentication factor constitutes evaluation and classification of information, a process that can be performed in the human mind using evaluation, observation, and judgement; thus the limitation is directed to a mental process.

Step 2A Prong 2 and Step 2B: “and wherein the server computer system does not receive the device information” -- This limitation recites a data access restriction, which amounts to no more than further limiting the field of use/environment, and thus it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)).
“wherein the first portion of the federated machine learning model is useable” -- The limitation recites that the first portion of the federated machine learning model is usable to perform the abstract idea evaluated above in Prong 1. The limitation amounts to mere instructions to apply the abstract idea on a computer, and it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)). Therefore, claim 22 is not patent eligible.

Regarding claim 23, Step 1: The claim depends from claim 21 and is directed to a system, which falls under the machine category. The claim satisfies Step 1.

Step 2A Prong 1: “to generate remote scores that are indicative of an inherence authentication factor based on user behavior information about how a user has used the user device;” -- The limitation is directed to evaluating user behavior information to determine an inherence factor based on how the user interacts with the device. This is a process that can be performed in the human mind, or with pen and paper, using evaluation, observation, and judgement; thus the limitation is directed to a mental process.

Step 2A Prong 2 and Step 2B: “wherein the first portion of the federated machine learning model is useable” -- Similar to above, the limitation amounts to mere instructions to apply the abstract idea on a computer, and it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)). “and wherein the server computer system does not receive the user behavior information used to generate the remote scores.” -- Similar to above, the limitation is directed to a data access restriction, which amounts to no more than further limiting the field of use/environment, and thus it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)).
Therefore, claim 23 is not patent eligible.

Regarding claim 24, Step 1: The claim depends from claim 21 and is directed to a system, which falls under the machine category. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: “The system of claim 21, wherein the federated machine learning model is trained by the training computer system by: applying a first subset of factors from the dataset to the second portion of the federated machine learning model.” -- The limitation is directed to training a federated machine learning model by applying a subset of factors from a dataset to another portion of the model. The limitation is directed to insignificant extra-solution activity that does not integrate the judicial exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of applying/transmitting data over a network is a well-understood, routine, and conventional (WURC) activity, and it does not provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Therefore, claim 24 is not patent eligible.

Regarding claim 25, Step 1: The claim depends from claim 21 and is directed to a system, which falls under the machine category. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: “The system of claim 21, wherein the server computer system does not receive the first private user information and the second private user information.” -- The limitation recites that the server computer system further does not receive the first and second private user information. The limitation amounts to no more than further limiting the field of use/environment, and thus it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)). Therefore, claim 25 is not patent eligible.
Regarding claim 26, Step 1: The claim depends from claim 21 and is directed to a system, which falls under the machine category. The claim satisfies Step 1.

Step 2A Prong 1: “The system of claim 21, wherein determining whether to grant the particular transaction request includes: determining whether an inherence authentication factor has been established for the particular transaction request using the remote score, the server score, and an inherence authentication factor threshold; and determining whether a possession authentication factor has been established for the particular transaction request using the remote score, the server score, and a possession authentication factor threshold.” -- The limitation is directed to comparing values to thresholds and making determinations on transaction requests using past/collected values and data. This is a process that can be performed in the human mind using evaluation, observation, and judgement, and thus it is directed to a mental process. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Therefore, claim 26 is not patent eligible.

Regarding claim 27, Step 1: The claim depends from claim 21 and is directed to a system, which falls under the machine category. The claim satisfies Step 1.

Step 2A Prong 1: “and wherein determining whether to grant the particular transaction request includes comparing the one or more remote scores to the authentication factor thresholds.” -- The limitation is directed to determining whether to grant a particular request by comparing remote scores to factor thresholds, which is a process that can be performed in the human mind using evaluation, observation, and judgement, and thus it is directed to a mental process.
Step 2A Prong 2 and Step 2B: “The system of claim 21, wherein the third portion of the federated machine learning model includes a plurality of authentication factor thresholds,” -- The limitation recites that the third portion of the federated machine learning model further includes a group of authentication factor thresholds, which is insignificant extra-solution activity that does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)). Therefore, claim 27 is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 3, 4, 5, 15, 21, 24, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over NPL reference “Federated Machine Learning: Concept and Applications” by Yang et al. (referred to herein as Yang) in view of NPL reference “Hierarchical federated learning through LAN-WAN orchestration” by Yuan et al.
(referred to herein as Yuan).

Regarding claim 1, Yang teaches: A method for evaluating transaction requests at a server computer system based on model scores generated by different portions of a federated machine learning model, the method comprising: ([Yang, page 5] “In this section, we discuss how to categorize federated learning based on the distribution characteristics of the data. Let matrix Di denote the data held by each data owner i … For example, in the financial field, labels may be users’ credit; in the marketing field, labels may be the user’s purchase desire;” wherein the examiner interprets “federated learning” and “data held by each data owner” to be the same as “computer system … of a federated machine learning” model because both use federated machine learning (ML) and the data must be stored at a server. The examiner further interprets “labels may be users’ credit” to be the same as “scores” because both are metrics for a certain class of objects.)

receiving, at the server computer system from a remote computer system, ([Yang, page 10, Table 2] “Compute u_i^A and sends to C.” and “Compute u_i^B and sends to C.”, wherein the examiner interprets “sends to C” to be the same as “receiving, at the server computer system from a remote computer system” because both are directed to transmitting computed information associated with the identified instance i from a distributed party to a central entity C that receives the transmitted information for evaluation.)

an indication of a particular subsequent transaction request submitted at a user device ([Yang, page 10, Table 2] “Sends user ID i to A and B.”, wherein the examiner interprets “user ID i” to be the same as “an indication of a particular subsequent transaction request submitted at a user device” because both are directed to an identifier for a specific instance (i) being evaluated in a distributed evaluation process.)
and a set of transaction request evaluation factors for the particular transaction request; ([Yang, page 7] “Vertical federated learning or feature-based federated learning (Figure 2(b)) is applicable to the cases in which two datasets share the same sample ID space but differ in feature space.”, wherein the examiner interprets “differ in feature space” to be the same as “a set of transaction request evaluation factors” because both are directed to feature values (factors) used as inputs to evaluate the same identified instance (sample ID / user ID) under the broadest reasonable interpretation.)

receiving, at the server computer system from the remote computer system, [[(c)]] a remote score generated at the remote computer system using ([Yang, page 10, Table 2] “Compute u_i^A and sends to C.” wherein the examiner interprets “Compute u_i^A” to be the same as “a remote score generated at the remote computer system” because both are directed to a party computing a score/value for an identified instance i, and wherein the examiner interprets “sends to C” to be the same as “receiving, at the server computer system from a remote computer system” because both are directed to transmitting the computed score/value to a central entity C that receives the computed score/value.)

wherein the remote score is generated using the first portion of the federated machine learning model, and wherein the first portion is located at the remote computer system; ([Yang, page 7] “At the end of learning, each party holds only those model parameters associated to its own features. Therefore, at inference time, the two parties also need to collaborate to generate output”, wherein the examiner interprets “each party holds … model parameters associated to its own features” to be the same as the “first portion … located at the remote computer system” because both describe a distributed setting where a non-server party retains a local model portion and uses it to generate a local output.)
receiving, at the server computer system from the remote computer system, a device score generated at the user device ([Yang, page 10] “Let u_i^A = Θ_A x_i^A, u_i^B = Θ_B x_i^B … Compute u_i^B and sends to C.” wherein the examiner interprets C to be the same as “receiving, at the server computer system” because C is the receiving/combining entity, a node that receives the outputs from A and B. The examiner further interprets u_i^B to be the same as “a device score generated at the user device” because u_i^B is a computed value, produced locally at party B based on party B’s feature vector x_i^B and parameters Θ_B, and then transmitted to C.)

wherein the device score is generated using the second portion of the federated machine learning model, wherein the second portion is located at the user device, ([Yang, page 11] “At the end of the training process, each party (A or B) remains oblivious to the data structure of the other party and obtains the model parameters associated only with its own features.” and “At inference time, the two parties need to collaboratively compute the prediction results with the steps shown in Table 2, which still do not lead to information leakage.”, wherein the examiner interprets party B’s held parameters and local inference to be the same as the “second portion” located at the user device that generates the device score.)

generating, by the server computer system, a server score based on the set of transaction request evaluation factors for the particular subsequent transaction request, ([Yang, page 10, Table 2] “Inquisitor: Gets results u_i^A + u_i^B”, wherein the examiner interprets the “result” obtained at C to be the same as the server-side score/result produced using the server-side portion of the overall federated evaluation pipeline.)

and based on the remote score, the device score, and the server score, determining, by the server computer system, whether to grant the particular transaction request.
([Yang, page 10, Table 2] “Compute u_i^A and sends to C.” and [Yang, page 5] “In this section, we discuss how to categorize federated learning based on the distribution characteristics of the data. Let matrix Di denote the data held by each data owner i … For example, in the financial field, labels may be users’ credit; in the marketing field, labels may be the user’s purchase desire;” wherein the examiner interprets u_i^A to be the same as the remote score and the purchase prediction to be the same as “whether to grant the particular transaction request” because both predict the user’s purchase.)

Yang does not teach: first private user information that is not accessible to the server computer system … wherein the federated machine learning model includes first, second, and third portions that are trained at a training computer system using a dataset of previous transaction requests, … using second private user information that is not accessible to the server computer system or the remote computer system, … and wherein the remote computer system and the user device are located in the same local area network … outside of which the first and second private user information is not transmitted; … wherein the server score is generated using the third portion of the federated machine learning model, wherein the third portion is located at the server computer system, … and wherein the server computer system is coupled to the remote computer system via a wide area network.
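For context, the vertical federated inference flow the rejection repeatedly cites from Yang's Table 2 (each party computes a partial score u_i^A or u_i^B from its own model portion and private features, and the coordinator C receives and sums only the scores) can be sketched as follows. This is an illustrative reconstruction, not code from either reference; the `Party` class, instance id, and threshold are hypothetical names chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class Party:
    """A non-server participant holding one portion of the split model."""
    def __init__(self, theta):
        self.theta = theta        # model parameters for this party's features
        self.private_data = {}    # feature vectors keyed by instance id; never transmitted

    def partial_score(self, instance_id):
        # u_i = Theta . x_i, computed locally; only this scalar leaves the party
        return float(self.theta @ self.private_data[instance_id])

# Party A (e.g. the remote/edge side) and Party B (e.g. the user-device side)
party_a = Party(theta=rng.normal(size=3))
party_b = Party(theta=rng.normal(size=2))
party_a.private_data["user-42"] = rng.normal(size=3)
party_b.private_data["user-42"] = rng.normal(size=2)

def coordinator_score(instance_id, parties):
    """Coordinator C: sends the instance id out, receives and sums partial scores."""
    return sum(p.partial_score(instance_id) for p in parties)

combined = coordinator_score("user-42", [party_a, party_b])
decision = "grant" if combined > 0.0 else "deny"   # threshold is illustrative only
```

The point of the sketch is the data-flow property the applicant argues and the examiner maps: the coordinator only ever sees the instance identifier and the scalar partial scores, never the feature vectors held by A or B.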
Yuan teaches: first private user information that is not accessible to the server computer system ([Yuan, page 1] “Federated learning (FL) was designed to enable mobile phones to collaboratively learn a global model without uploading their private data to a cloud server.”, wherein the examiner interprets “without uploading their private data to a cloud server” to be the same as “first private user information that is not accessible to the server computer system” because both are directed to preventing the server/cloud from obtaining the user’s private data.)

wherein the federated machine learning model includes first, second, and third portions that are trained at a training computer system using a dataset of previous transaction requests, ([Yuan, page 4] “Overall, LanFL also adopts a C/S architecture, where a central server maintains and keeps advancing a global model ω, but the “client” refers to not only one device but an LAN domain comprising many devices connected to each other through P2P mechanism.” wherein the examiner interprets “central server maintains and keeps advancing a global model ω” to be the same as training at a training computer system using a dataset because both are directed to iterative model improvement at a central server/training system.)

using second private user information that is not accessible to the server computer system or the remote computer system, ([Yuan, page 2] “Besides, throughout the learning process, no raw data leaves its host device so the user privacy is preserved as the original FL protocol.”, wherein the examiner interprets “no raw data leaves its host device so the user privacy is preserved” to be the same as “private user information that is not accessible to the server computer system or the remote computer system” because both protect the privacy of the user’s information.)
and wherein the remote computer system and the user device are located in the same local area network ([Yuan, page 2] “FL applications are commonly deployed on a large number of devices that are naturally organized into many LAN domains.” and [Yuan, page 2] “the devices within the same LAN domain frequently exchange model updates (weights) and train a locally shared model.”, wherein the examiner interprets “devices … organized into … LAN domains” and “devices within the same LAN domain” to be the same as “the remote computer system and the user device are located in the same local area network” because all are directed to multiple devices operating within a common local network domain (LAN).)

outside of which the first and second private user information is not transmitted; ([Yuan, page 2] “no raw data leaves its host device so the user privacy is preserved as the original FL protocol.”, wherein the examiner interprets “no raw data leaves its host device” to be the same as “outside of which the first and second private user information is not transmitted” because both are directed to preventing private user information (raw data) from being transmitted outside the local device/local network environment.)

wherein the server score is generated using the third portion of the federated machine learning model, wherein the third portion is located at the server computer system, ([Yuan, page 4] “The f(ω) in Eq. 2 and F_i(ω) in Eq. 3 are respectively calculated on cloud (across WAN) and on aggregating device in each LAN i (across LAN).” wherein the examiner interprets “calculated on cloud” to be the same as a server-side computation portion distinct from the LAN-side computation.)

and wherein the server computer system is coupled to the remote computer system via a wide area network. ([Yuan, page 4] “Algorithm 1: push ω_t to C_k across WAN … in Algorithm 1, the aggregation is performed on cloud with the F_k(ω) sent by each device k across WAN.
However, a critical fact is ignored that most devices are distributed across different LANs. Those devices in the same LAN can adopt P2P communication mechanism to aggregate models”, wherein the examiner interprets “across WAN” communications between the cloud and the devices (including LAN-side aggregation participants) to be the same as the server computer system being coupled to the edge server via a wide area network.)

Yang, Yuan, and the instant application are analogous art because they are all directed to distributed federated machine learning architectures for evaluating transaction requests. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Yang to include the local area network–based federated deployment architecture disclosed by Yuan. One would be motivated to do so to efficiently reduce wide-area network communication overhead while preserving user privacy during distributed model training and inference, as suggested by Yuan ([Yuan, page 2] “FL applications are commonly deployed on a large number of devices that are naturally organized into many LAN domains” and “no raw data leaves its host device so the user privacy is preserved as the original FL protocol”).

Regarding claim 3, Yang and Yuan teach The method of claim 1 (see rejection for claim 1). Yuan further teaches wherein the server computer system does not receive the first private user information and the second private user information. ([Yuan, page 4] “Through this hierarchical design, WAN-based aggregation is much less needed and thus the learning can be accelerated.
Besides, throughout the learning process, no raw data leaves its host device so the user privacy is preserved as the original FL protocol.” wherein the examiner interprets “no raw data leaves its host device” to be the same as “the server computer system does not receive the first private user information and the second private user information” because, under its broadest reasonable interpretation, the first private user information and the second private user information are user data hosted at their respective devices, and if such raw data does not leave the host device, then it is not transmitted to, and thus not received by, the server computer system.)

Yang, Yuan, and the instant application are analogous art because they are all directed to federated machine learning systems in which model training and inference are performed across distributed computing components. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Yang and Yuan to include the privacy-preserving data isolation architecture disclosed by Yuan. One would be motivated to do so to effectively preserve user privacy while accelerating distributed learning by reducing unnecessary wide-area data transmission, as suggested by Yuan ([Yuan, page 4] “no raw data leaves its host device so the user privacy is preserved as the original FL protocol”).

Regarding claim 4, Yang and Yuan teach The method of claim 3 (see rejection for claim 3). Yang further teaches wherein the second private user information includes personally identifiable information about a user of the user device.
([Yang, page 7] “However, since the bank records the user’s revenue and expenditure behavior and credit rating and the e-commerce retains the user’s browsing and purchasing history, their feature spaces are very different.”, wherein the examiner interprets “the user’s revenue and expenditure behavior and credit rating” and “the user’s browsing and purchasing history” to be the same as “personally identifiable information about a user of the user device” because all are directed to user-specific information that identifies and/or characterizes a particular user and therefore constitutes personally identifiable information under the broadest reasonable interpretation.)

Regarding claim 5, Yang and Yuan teach The method of claim 1 (see rejection for claim 1). Yuan further teaches: wherein the remote computer system is an edge server, ([Yuan, page 1] “While there’re a few preliminary efforts in edge-assisted hierarchical FL [8, 14, 33, 34] those approaches are built atop a cluster of edge servers …”, wherein the examiner interprets “a cluster of edge servers” to be the same as “the remote computer system is an edge server” because both are directed to an edge-computing tier (edge server(s)) used in hierarchical federated learning architectures, outside of the user device and distinct from the central server.)

and wherein the first private user information and the second private user information are collected by the user device. ([Yuan, page 4, Algorithm 1] “train ω_t on local training dataset of C_k …”, wherein the examiner interprets “local training dataset of C_k” to be the same as “private user information … collected by the user device” because both are directed to user/device-local data that resides at the device (C_k) and is used for learning/inference under the broadest reasonable interpretation.)
Yang, Yuan, and the instant application are analogous art because they are all directed to federated learning architectures in which transaction or user-related evaluations are performed. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Yang and Yuan such that the first private user information is collected by the user device, as disclosed by Yuan. One would be motivated to do so to preserve user privacy and enable local model training and inference using device-resident data, as suggested by Yuan ([Yuan, page 4, Algorithm 1] “local training dataset of C_k”).

Regarding claim 15, Yang teaches: A non-transitory computer-readable medium having program instructions stored thereon that are executable by a server computer system, configured to evaluate transaction requests based on scores generated using different portions of a federated machine learning model, to perform operations comprising: ([Yang, page 5] “In this section, we discuss how to categorize federated learning based on the distribution characteristics of the data. Let matrix Di denote the data held by each data owner i … For example, in the financial field, labels may be users’ credit; in the marketing field, labels may be the user’s purchase desire;”, wherein the examiner interprets “labels may be users’ credit” and “purchase desire” to be the same as “scores generated” used to evaluate a “transaction request” because both are metrics used to evaluate whether a user action should be approved/denied under the broadest reasonable interpretation.)
receiving, from an edge server at the server computer system, [[(a)]] a particular subsequent transaction request submitted at a user device and metadata ([Yang, page 10, Table 2] “Sends user ID i to A and B.”, wherein the examiner interprets “user ID i” to be the same as “a particular subsequent transaction request … and metadata” because the user ID identifies the particular instance i being evaluated and functions as request-identifying information (metadata) associated with that request under the broadest reasonable interpretation.)

(b) a first set of factors for the particular subsequent transaction request, ([Yang, page 7] “Vertical federated learning or feature-based federated learning (Figure 2(b)) is applicable to the cases in which two datasets share the same sample ID space but differ in feature space.”, wherein the examiner interprets “differ in feature space” to be the same as “a first set of factors” because both are feature values (factors) used as input to evaluate the same identified instance (sample ID/user ID).)

receiving, from the edge server, [[(c)]] one or more edge server scores generated for the particular subsequent transaction request ([Yang, page 10, Table 2] “Compute u_i^A and sends to C.”, wherein the examiner interprets “Compute u_i^A” to be the same as “one or more edge server scores” because both are computed score/value outputs for the identified instance i generated at a non-server node and then transmitted for evaluation.)
wherein the one or more edge server scores are generated using the first portion of the federated machine learning model, and wherein the first portion is located at the edge server; ([Yang, page 7] “At the end of learning, each party holds only those model parameters associated to its own features.” and [Yang, page 10, Table 2] “Compute u_i^A and sends to C.”, wherein the examiner interprets “each party holds only those model parameters” to be the same as the “first portion … located at the edge server” because both require that the non-server party retains a local portion of the model parameters and uses them to generate local outputs (scores). The examiner further interprets the computation of u_i^A at Party A to be generation of the edge-side score using that party’s held portion.)

receiving, from the edge server, one or more user device scores generated at the user device ([Yang, page 10, Table 2] “Compute u_i^B and sends to C.”, wherein the examiner interprets “Compute u_i^B” to be the same as “one or more user device scores generated at the user device” because both are device-side computed score/value outputs for the identified instance i.)
wherein the one or more device scores are generated using the second portion of the federated machine learning model, wherein the second portion is located at the user device, ([Yang, page 11] “At the end of the training process, each party (A or B) remains oblivious to the data structure of the other party and obtains the model parameters associated only with its own features.” and [Yang, page 11] “At inference time, the two parties need to collaboratively compute the prediction results with the steps shown in Table 2, which still do not lead to information leakage.”, wherein the examiner interprets Party B “obtains the model parameters associated only with its own features” to be the same as the “second portion … located at the user device,” because the user device retains a portion of the model parameters and uses that portion for local computation. The examiner further interprets wherein the examiner interprets “steps shown in Table 2” to include the local computations used to generate uB i at the user device.) generating, one or more server scores based on the metadata for the particular transaction request, ([Yang, page 10, Table 2] “Inquistor: Gets results uA i + uB i”, wherein the examiner interprets the “result” obtained by the Inquisitor to be the same as generating the server-side score/result used for the transaction evaluation, and wherein the “user ID i” used in the Table 2 flow is interpreted as request-identifying information (metadata) associated with the evaluated instance.) determining a response to the particular transaction request based on the one or more server scores, the one or more edge server scores, and the one or more user device scores. 
([Yang, page 10, Table 2] “Inquistor: Gets results uA i + uB i” and [Yang, page 5] “labels may be users’ credit; … labels may be the user’s purchase desire;”, wherein the examiner interprets “Gets results uA i + uB i” to be the same as determining the response based on the server-side combination of the edge-side score(s) and device-side score(s). The examiner further interprets “label” outputs to be the basis for a transaction decision/response because they represent decision-driving evaluation results.) Yang does not teach using first private user data that is not accessible to the server computer system,…wherein the federated machine learning model includes first, second, and third portions that are trained at a training computer system using a dataset of previous transaction requests, using second private user data that is not accessible to the server computer system or the edge server,… wherein the one or more server scores are generated using the third portion of the federated machine learning model, …wherein the third portion is located at the server computer system, and wherein the server computer system is coupled to the edge server via a wide area network;…and wherein the edge server and the user device are located in the same local area network outside of which the first and second private user data is not transmitted;. Yuan teaches: using first private user data that is not accessible to the server computer system, ([Yuan, page 1] “Federated learning (FL) was designed to enable mobile phones to collaboratively learn a global model without uploading their private data to a cloud server.”, wherein the examiner interprets “without uploading their private data to a cloud server” to be the same as “first private user data that is not accessible to the server computer system” because both expressly prevent the server/cloud from obtaining the private data.) 
wherein the federated machine learning model includes first, second, and third portions that are trained at a training computer system using a dataset of previous transaction requests, ([Yuan, page 4] “Overall, LanFL also adopts a C/S architecture, where a central server maintains and keeps advancing a global model ω, but the “client” refers to not only one device but an LAN domain comprising many devices connected to each other through P2P mechanism.”, wherein the examiner interprets “central server maintains and keeps advancing a global model ω” to be the same as the model being trained at a training computer system, and the distributed “client” side being partitioned across multiple nodes (portions).) using second private user data that is not accessible to the server computer system or the edge server, ([Yuan, page 2] “Besides, throughout the learning process, no raw data leaves its host device so the user privacy is preserved as the original FL protocol.”, wherein the examiner interprets “no raw data leaves its host device” to be the same as “second private user data that is not accessible to the server computer system or the edge server” because both prevent private user data from being provided to any external node.) wherein the one or more server scores are generated using the third portion of the federated machine learning model, wherein the third portion is located at the server computer system, ([Yuan, page 4] “The 𝑓 (𝜔) in Eq. 2 and F𝑖(𝜔) in Eq. 3 are respectively calculated on cloud (across WAN) and on aggregating device in each LAN 𝑖 (across LAN).”, wherein the examiner interprets “calculated on cloud (across WAN)” to be the same as server-side computation performed by a server-located portion of the model that is distinct from LAN-side computation.) 
and wherein the server computer system is coupled to the edge server via a wide area network; and ([Yuan, page 4] “Algorithm 1: push 𝜔𝑡 to 𝐶𝑘 across WAN … in Algorithm 1, the aggregation is performed on cloud with the 𝐹𝑘 (𝜔) sent by each device 𝑘 across WAN.”, wherein the examiner interprets “across WAN” communications between cloud and devices (including LAN-side aggregation participants) to be the same as the server computer system being coupled to the edge server via a wide area network.) and wherein the edge server and the user device are located in the same local area network outside of which the first and second private user data is not transmitted; ([Yuan, page 2] “FL applications are commonly deployed on a large number of devices that are naturally organized into many LAN domains… As shown in Figure 1, the devices within the same LAN domain frequently exchange model updates (weights) and train a locally shared mode” and [Yuan, page 2] “no raw data leaves its host device so the user privacy is preserved as the original FL protocol.”, wherein the examiner interprets “devices … organized into … LAN domains” and “devices within the same LAN domain” to be the same as “the edge server and the user device are located in the same local area network.” The examiner further interprets “no raw data leaves its host device” to be the same as “outside of which the first and second private user data is not transmitted.”) Yang, Yuan, and the instant application are analogous art because they are all directed to federated machine learning architectures for evaluating transaction requests using distributed scoring generated across multiple computing components while preserving user data privacy. 
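The Yuan mappings above turn on a two-level aggregation: device model updates are combined inside each LAN at an aggregating device, and only the LAN-level aggregates cross the WAN to the cloud, so raw user data never leaves its host device. A minimal sketch of that split, assuming simple unweighted FedAvg-style means (the function names and the weighting scheme are illustrative assumptions, not Yuan's exact algorithm):

```python
# Illustrative sketch of a LanFL-style LAN/WAN aggregation hierarchy.
# Only model weights move: devices -> LAN aggregator -> (WAN) -> cloud.

def lan_aggregate(device_updates: list[list[float]]) -> list[float]:
    """P2P-style aggregation inside one LAN; raw data never leaves a
    device, only model updates (weight vectors) are exchanged."""
    n = len(device_updates)
    # Element-wise mean over the devices' weight vectors.
    return [sum(ws) / n for ws in zip(*device_updates)]

def cloud_aggregate(lan_aggregates: list[list[float]]) -> list[float]:
    """WAN-side step: the cloud only ever sees per-LAN aggregates,
    mirroring computation 'on cloud (across WAN)'."""
    m = len(lan_aggregates)
    return [sum(ws) / m for ws in zip(*lan_aggregates)]

lan1 = lan_aggregate([[1.0, 2.0], [3.0, 4.0]])   # -> [2.0, 3.0]
lan2 = lan_aggregate([[5.0, 6.0]])               # -> [5.0, 6.0]
global_model = cloud_aggregate([lan1, lan2])     # -> [3.5, 4.5]
print(global_model)
```

The point of the hierarchy is bandwidth and privacy: per-device updates stay inside the LAN, and the WAN carries one aggregate per LAN rather than one per device.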
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the federated transaction evaluation technique disclosed by Yang to include the hierarchical LAN/WAN-based federated learning architecture disclosed by Yuan. One would have been motivated to do so to efficiently preserve user privacy while enabling scalable and timely transaction evaluation, as suggested by Yuan ([Yuan, page 1-2] “without uploading their private data to a cloud server” and that “no raw data leaves its host device so the user privacy is preserved”). Regarding claim 21, Yang teaches: A server computer system, comprising: a processor, and a non-transitory computer-readable medium having stored thereon instructions that are executable by the processor to cause the server computer system, configured to evaluate transaction requests based on scores generated using different portions of a federated machine learning model, to perform operations comprising: ([Yang, page 5] “In this section, we discuss how to categorize federated learning based on the distribution characteristics of the data. Let matrix Di denote the data held by each data owner i … For example, in the financial field, labels may be users’ credit; in the marketing field, labels may be the user’s purchase desire;”, wherein the examiner interprets “labels may be users’ credit” and “purchase desire” to be the same as “scores” used to evaluate transaction requests because both are metrics used for evaluating a user-related decision.) 
receiving, from a remote computer system, an indication of a particular transaction request submitted at a user device and a set of transaction request evaluation factors for the particular transaction request, ([Yang, page 10, Table 2] “Sends user ID i to A and B.” and [Yang, page 7] “Vertical federated learning or feature-based federated learning (Figure 2(b)) is applicable to the cases in which two datasets share the same sample ID space but differ in feature space.”, wherein the examiner interprets “user ID i” to be the same as “an indication of a particular transaction request submitted at a user device” because they are both directed to an identifier for a specific instance (i) being evaluated in a distributed evaluation process. The examiner further interprets “differ in feature space” to be the same as “a set of transaction request evaluation factors” because both are directed to feature values (factors) used as inputs to evaluate the same identified instance (sample ID / user ID) under the broadest reasonable interpretation.) receiving, from the remote computer system, a remote score generated at the remote computer system ([Yang, page 10, Table 2] “Compute u_i^A and sends to C.”, wherein the examiner interprets “Compute u_i^A” to be the same as “a remote score generated at the remote computer system” because they are both directed to a party computing a score/value for an identified instance i.) wherein the remote score is generated using the first portion of the federated machine learning model, and wherein the first portion is located at the remote computer system; ([Yang, page 7] “At the end of learning, each party holds only those model parameters associated to its own features. 
Therefore, at inference time, the two parties also need to collaborate to generate output”, wherein the examiner interprets “each party holds … model parameters associated to its own features” to be the same as the “first portion … located at the remote computer system” because both describe a distributed setting where a non-server party retains a local model portion and uses it to generate a local output.) receiving, from the remote computer system, a device score generated at the user device ([Yang, page 10] “Let u_i^A = Θ_A x_i^A, u_i^B = Θ_B x_i^B … Compute u_i^B and sends to C.”, wherein the examiner interprets C to be the same as “receiving, at the server computer system” because C is the receiving/combining entity and is a node that receives output from A and B. The examiner further interprets u_i^B to be the same as “a device score generated at the user device” because u_i^B is a computed value, produced locally at party B, based on party B’s feature vector x_i^B and parameters Θ_B, and then transmitted to C.) wherein the device score is generated using the second portion of the federated machine learning model, and wherein the second portion is located at the user device; ([Yang, page 11] “At the end of the training process, each party (A or B) remains oblivious to the data structure of the other party and obtains the model parameters associated only with its own features.” and “At inference time, the two parties need to collaboratively compute the prediction results with the steps shown in Table 2, which still do not lead to information leakage.”, wherein the examiner interprets Party B’s held parameters and local inference to be the same as the “second portion” located at the user device generating the device score.) 
generating a server score based on the set of transaction request evaluation factors for the particular transaction request, ([Yang, page 10, Table 2] “Inquisitor: Gets results u_i^A + u_i^B”, wherein the examiner interprets the “result” obtained at C to be the same as the server-side score/result produced using the server-side portion of the overall federated evaluation pipeline.) and based on the remote score, the device score, and the server score, determining whether to grant the particular transaction request. ([Yang, page 10, Table 2] “Compute u_i^A and sends to C.” and [Yang, page 5] “In this section, we discuss how to categorize federated learning based on the distribution characteristics of the data. Let matrix D_i denote the data held by each data owner i … For example, in the financial field, labels may be users’ credit; in the marketing field, labels may be the user’s purchase desire;”, wherein the examiner interprets u_i^A to be the same as the remote score and the purchase prediction to be the same as “whether to grant the particular transaction request” because they both predict the user’s purchase.) Yang does not teach using first private user information that is not accessible to the server computer system,…wherein the federated machine learning model includes first, second, and third portions that are trained at a training computer system using a dataset of previous transaction requests,… using second private user information that is not accessible to the server computer system or the remote computer system,…wherein the server score is generated using the third portion of the federated machine learning model, wherein the third portion is located at the server computer system,… and wherein the server computer system is coupled to the remote computer system via a wide area network. 
Yuan teaches: using first private user information that is not accessible to the server computer system, ([Yuan, page 1] “Federated learning (FL) was designed to enable mobile phones to collaboratively learn a global model without uploading their private data to a cloud server.”, wherein the examiner interprets “without uploading their private data to a cloud server” to be the same as “first private user information that is not accessible to the server computer system” because both are directed to preventing the server/cloud from obtaining the user’s private data.) wherein the federated machine learning model includes first, second, and third portions that are trained at a training computer system using a dataset of previous transaction requests, ([Yuan, page 4] “Overall, LanFL also adopts a C/S architecture, where a central server maintains and keeps advancing a global model ω, but the “client” refers to not only one device but an LAN domain comprising many devices connected to each other through P2P mechanism.”, wherein the examiner interprets “central server maintains and keeps advancing a global model ω” to be the same as training at a training computer system using a dataset because both are directed to iterative model improvement at a central server/training system.) using second private user information that is not accessible to the server computer system or the remote computer system, ([Yuan, page 2] “Besides, throughout the learning process, no raw data leaves its host device so the user privacy is preserved as the original FL protocol.”, wherein the examiner interprets “no raw data leaves its host device so the user privacy is preserved” to be the same as “private user information that is not accessible to the server computer system or the remote computer system” because they are both protecting privacy of the user’s information.) 
wherein the server score is generated using the third portion of the federated machine learning model, wherein the third portion is located at the server computer system, ([Yuan, page 4] “The 𝑓 (𝜔) in Eq. 2 and F𝑖(𝜔) in Eq. 3 are respectively calculated on cloud (across WAN) and on aggregating device in each LAN 𝑖 (across LAN).”, wherein the examiner interprets “calculated on cloud” to be the same as a server-side computation portion distinct from LAN-side computation.) and wherein the server computer system is coupled to the remote computer system via a wide area network, ([Yuan, page 4] “Algorithm 1: push 𝜔𝑡 to 𝐶𝑘 across WAN … in Algorithm 1, the aggregation is performed on cloud with the 𝐹𝑘 (𝜔) sent by each device 𝑘 across WAN. However, a critical fact is ignored that most devices are distributed across different LANs. Those devices in the same LAN can adopt P2P communication mechanism to aggregate models”, wherein the examiner interprets “across WAN” communications between cloud and devices (including LAN-side aggregation participants) to be the same as the server computer system being coupled to the remote computer system via a wide area network.) Yang, Yuan, and the instant application are analogous art because they are all directed to server-centric federated machine learning architectures that evaluate transaction requests by aggregating scores. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the server computer system disclosed by Yang to incorporate the hierarchical federated learning architecture disclosed by Yuan. One would have been motivated to do so to efficiently enable scalable transaction evaluation while preserving user privacy, as described by Yuan ([Yuan, page 1-2] “without uploading their private data to a cloud server” and that “no raw data leaves its host device so the user privacy is preserved”). 
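The Table 2 inference flow that the rejection repeatedly maps to the claimed score generation can be sketched as follows: each party holds only the model portion for its own features, computes a partial score u_i^A or u_i^B locally, and coordinator C evaluates their sum. This is a hedged, minimal sketch; the linear scoring rule, the grant threshold, and all names are illustrative assumptions rather than anything in the record:

```python
# Sketch of two-party vertical-FL inference in the style of Yang's
# Table 2: u_i^A = Theta_A . x_i^A at party A, u_i^B = Theta_B . x_i^B
# at party B, combined at coordinator C. Names/threshold are assumed.
from dataclasses import dataclass

@dataclass
class Party:
    """Holds only the model parameters for its own features."""
    theta: list[float]                 # this party's model portion
    features: dict[int, list[float]]   # private features, keyed by user ID i

    def partial_score(self, user_id: int) -> float:
        # Local linear score; raw features never leave this party.
        x = self.features[user_id]
        return sum(t * f for t, f in zip(self.theta, x))

def coordinator_decision(user_id: int, party_a: Party, party_b: Party,
                         threshold: float = 0.0) -> bool:
    """C sends user ID i to A and B, receives u_i^A and u_i^B, and
    evaluates their sum (Table 2's 'Gets results u_i^A + u_i^B')."""
    u_a = party_a.partial_score(user_id)   # remote/edge-side score
    u_b = party_b.partial_score(user_id)   # device-side score
    return (u_a + u_b) >= threshold        # grant/deny-style decision

a = Party(theta=[0.5, -0.2], features={7: [1.0, 2.0]})
b = Party(theta=[0.3], features={7: [4.0]})
print(coordinator_decision(7, a, b))
```

Note the property the rejection leans on: C only ever receives the scalar partial scores, never the parties' feature vectors or parameters.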
Regarding claim 24, Yang and Yuan teach The method of claim 21, (see rejection for claim 21). Yang further teaches wherein the federated machine learning model is trained by the training computer system by: applying a first subset of factors from the dataset ([Yang, page 7] “Vertical federated learning or feature-based federated learning (Figure 2(b)) is applicable to the cases in which two datasets share the same sample ID space but differ in feature space.”, wherein the examiner interprets “differ in feature space” to be the same as “a first subset of factors” because both are directed to a subset of feature values used as inputs for the same sample IDs in federated learning.) to the second portion of the federated machine learning model. ([Yang, page 7] “At the end of learning, each party holds only those model parameters associated to its own features.”, wherein the examiner interprets “model parameters associated to its own features” to be the same as a “portion” (e.g., the “second portion”) of the federated machine learning model because both are directed to a segmented part of the overall model that is specific to (and used with) that party’s feature subset.) Regarding claim 25, Yang and Yuan teach The method of claim 21, (see rejection for claim 21). Yuan further teaches wherein the server computer system does not receive the first private user information and the second private user information. 
([Yuan, page 1] “Federated learning (FL) was designed to enable mobile phones to collaboratively learn a global model without uploading their private data to a cloud server…Besides, throughout the learning process, no raw data leaves its host device so the user privacy is preserved as the original FL protocol.”, wherein the examiner interprets “without uploading their private data to a cloud server” to be the same as “the server computer system does not receive the first private user information and the second private user information” because both are directed to preventing private user information from being transmitted to, and thus received by, the server computer system. The examiner further interprets “no raw data leaves its host device” to be the same as “the server computer system does not receive the first private user information and the second private user information” because if the raw/private user information does not leave the host device, then it is not transmitted to, and therefore not received by, the server computer system.) Yang, Yuan, and the instant application are analogous art because they are all directed to hierarchical federated machine learning systems. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 21 disclosed by Yang and Yuan to include the privacy-protection techniques disclosed by Yuan. One would have been motivated to do so to efficiently enable feature-partitioned training of different portions of a federated machine learning model while preserving data locality, as suggested by Yang ([Yang, page 7] “two datasets share the same sample ID space but differ in feature space”). Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Yuan in further view of NPL reference Imteaj et al., “Distributed Sensing Using Smart End-User Devices: Pathway to Federated Learning for Autonomous IoT” (herein as Imteaj). 
Regarding claim 2, this is analogous to the original claim 2, since “first portion” is the same as “remote portion,” so the same rejection can apply. Claims 6, 16, 17, 20, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Yuan in further view of US 2021/0297258 A1 by Keith et al. (referred herein as Keith). Regarding claim 6, this is analogous to the original claim 6, so the same rejection can apply. Regarding claim 16, Yang and Yuan teach The non-transitory computer-readable medium of claim 15, (see rejection of claim 15). Yang and Yuan do not teach wherein the first portion and the second portion are useable to generate the one or more edge server scores and the one or more user device scores by analyzing the first private user data and the second private user data transaction request without sending the first private user data and the second private user data to the server computer system. Keith teaches wherein the first portion and the second portion are useable to generate the one or more edge server scores and the one or more user device scores by analyzing the first private user data and the second private user data transaction request without sending the first private user data and the second private user data to the server computer system. ([Keith, [0078]] “In the step 902, a trust score is generated. The trust score is generated by analyzing the acquired user information. 
… The trust score is also able to be based on other information such as location, time, device information and other personal information.” and [Keith, [0185]] “the private token is stored locally and the public token is able to be shared … the actually data is not accessible by an external source.”, wherein the examiner interprets generating a trust score by analyzing acquired user information, while keeping the “private” information stored locally (and not accessible externally), to be the same as the claimed limitation of generating the edge server scores and user device scores by analyzing the first private user data and the second private user data without sending the first and second private user data to the server computer system, because both describe performing the scoring or evaluation using locally held, personal information rather than transmitting that private input data to a centralized server for analysis.) Yang, Yuan, Keith, and the instant application are analogous art because they are all directed to evaluating transactions and/or trust or risk using distributed computing and privacy-preserving handling of user data in which local devices or edge-related components analyze user-specific information. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the non-transitory computer-readable medium of claim 15 disclosed by Yang and Yuan to include the approach where “a trust score is generated by analyzing the acquired user information” and where the “private token is stored locally” such that the underlying data is not externally accessible, as disclosed by Keith. 
One would have been motivated to do so to effectively improve privacy of user-specific inputs by localizing analysis of personal information rather than sending the private user data to the server computer system, as suggested by Keith ([Keith, [0185]] “the private token is stored locally and the public token is able to be shared … the actually data is not accessible by an external source.”). Regarding claim 17, this is analogous to the original claim 17, so the same rejection can apply. Regarding claim 20, this is analogous to the original claim 20, so the same rejection can apply. Regarding claim 22, Yang and Yuan teach The method of claim 21, (see rejection for claim 21). Yang and Yuan do not teach wherein the first portion of the federated machine learning model is useable to generate remote scores that are indicative of a possession authentication factor based on device information about the user device, and wherein the server computer system does not receive the device information to generate the remote scores. Keith teaches wherein the first portion of the federated machine learning model is useable to generate remote scores that are indicative of a possession authentication factor based on device information about the user device, ([Keith, [0204]] “The sequencer calls each of the chosen modules and stores their results (score and confidence). It is the processor that evaluates all of the stored results to determine the final ID trust score. The processor logs the details used to process the trust score”, wherein the examiner interprets “results (score and confidence)” and “modules” to correspond to “user device scores” and “remote portion of the federated machine learning model,” respectively, as both describe scores derived from analyzing transaction-related data processed by individual device modules.) and wherein the server computer system does not receive the device information to generate the remote scores. 
([Keith, [0004]] “This token is derived on biometric factors like human behaviors, motion analytics, human physical characteristics like facial patterns, voice recognition prints, usage of device patterns, user location actions and other human behaviors which can derive a token or be used as a dynamic password identifying the unique individual with high calculated confidence. Because of the dynamic nature and the many different factors”, wherein the examiner interprets “the token derived on biometric factors processed locally” to correspond to “the server computer system does not receive the second set of factors,” as both describe a system in which sensitive factors related to user identity are processed at the device level without requiring transmission to the server.) Yang, Yuan, Keith, and the instant application are analogous art because they are all directed to federated or distributed authentication systems that generate trust or authentication scores for transaction requests. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 21 disclosed by Yang and Yuan to include the scores generated by device-level modules disclosed by Keith. One would have been motivated to do so to effectively improve possession-based authentication while preserving device data privacy, as suggested by Keith ([Keith, [0204]] “The sequencer calls each of the chosen modules and stores their results (score and confidence). It is the processor that evaluates all of the stored results to determine the final ID trust score. The processor logs the details used to process the trust score”). Regarding claim 23, Yang and Yuan teach The method of claim 21, (see rejection for claim 21). 
Yang and Yuan do not teach wherein the first portion of the federated machine learning model is useable to generate remote scores that are indicative of an inherence authentication factor based on user behavior information about how a user has used the user device; and wherein the server computer system does not receive the user behavior information used to generate the remote scores. Keith teaches wherein the first portion of the federated machine learning model is useable to generate remote scores that are indicative of an inherence authentication factor based on user behavior information about how a user has used the user device; ([Keith, [0204]] “The sequencer calls each of the chosen modules and stores their results (score and confidence). It is the processor that evaluates all of the stored results to determine the final ID trust score. The processor logs the details used to process the trust score”, wherein the examiner interprets “modules” to be the same as the first portion used to generate remote scores, and “results (score and confidence)” to be the same as remote scores indicative of an inherence authentication factor, because both are directed to generating authentication related scores from processing modules and using those scores to determine a final trust or authentication score.) and wherein the server computer system does not receive the user behavior information used to generate the remote scores. 
([Keith, [0004]] “This token is derived on biometric factors like human behaviors, motion analytics, human physical characteristics like facial patterns, voice recognition prints, usage of device patterns, user location actions and other human behaviors which can derive a token or be used as a dynamic password identifying the unique individual with high calculated confidence.”, wherein the examiner interprets “token is derived on biometric factors like human behaviors … usage of device patterns … and other human behaviors” to be the same as generating remote scores indicative of an inherence authentication factor based on user behavior information, and further interprets “token is derived” to be consistent with the server not receiving the underlying user behavior information used to generate the score, because the token represents the derived output of analyzing those behaviors rather than requiring transmission of the underlying user behavior information to the server.) Yang, Yuan, Keith, and the instant application are analogous art because they are all directed to distributed authentication systems that generate inherence-based authentication scores. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 21 disclosed by Yang and Yuan to include the scores generated by behavior-analysis modules disclosed by Keith. One would have been motivated to do so to effectively generate trust scores for users, as suggested by Keith ([Keith, [0204]] “The sequencer calls each of the chosen modules and stores their results (score and confidence). It is the processor that evaluates all of the stored results to determine the final ID trust score. The processor logs the details used to process the trust score”). Claims 7, 18-19, 26, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Yuan in further view of US 11,227,036 B1 by Hitchcock et al. 
(referred herein as Hitchcock). Regarding claim 7, this is analogous to the original claim 7, so the same rejection can apply. Regarding claim 18, this is analogous to the original claim 18, so the same rejection can apply. Regarding claim 19, this is analogous to the original claim 19, so the same rejection can apply. Regarding claim 26, Yang and Yuan teach The method of claim 21, (see rejection for claim 21). Yang and Yuan do not teach wherein determining whether to grant the particular transaction request includes: determining whether an inherence authentication factor has been established for the particular transaction request using the remote score, the server score, and an inherence authentication factor threshold; and determining whether a possession authentication factor has been established for the particular transaction request using the remote score, the server score, and a possession authentication factor threshold. Hitchcock teaches wherein determining whether to grant the particular transaction request includes: determining whether an inherence authentication factor has been established for the particular transaction request using the remote score, the server score, and an inherence authentication factor threshold; and ([Hitchcock, page 1] “A composite measure of authentication assurance is then determined from a combination of the user-level measure of authentication assurance and the account-level measure of authentication assurance. A response to the authentication request is generated based at least in part on comparing the composite measure of authentication assurance to a threshold.” and [Hitchcock, col. 
5, lines 52-56] “the runtime authentication events 230 can include inherence factors, such as biometric data for the user, including a speech sample, fingerprint scan, facial image, or other biometric data”, wherein the examiner interprets “combination of the user-level measure … and the account-level measure …” to be the same as “using the remote score and the server score” because both are directed to combining multiple independently derived score/assurance measures to evaluate a request, and wherein the examiner interprets “comparing the composite measure … to a threshold” to be the same as “using an inherence authentication factor threshold” because both are directed to applying one or more threshold criteria to determine whether the request should be granted. The examiner further interprets “inherence factors … biometric data” to be the same as “an inherence authentication factor” because they are both directed to user-intrinsic biometric authentication.)

determining whether a possession authentication factor has been established for the particular transaction request using the remote score, the server score, and a possession authentication factor threshold. ([Hitchcock, page 1] “A composite measure of authentication assurance is then determined from a combination of the user-level measure of authentication assurance and the account-level measure of authentication assurance. A response to the authentication request is generated based at least in part on comparing the composite measure of authentication assurance to a threshold.” and [Hitchcock, col.
2, lines 59-62] “authentication assurance may be determined from a composite or combination of both historical and runtime-provided inputs”, wherein the examiner interprets “historical … inputs” to be the same as the server-side factors used to generate the server score and “runtime-provided inputs” to be the same as the device/remote-side factors used to generate the remote score, because both are directed to combining multiple sources of information to determine assurance; wherein the examiner interprets the determined “authentication assurance” to be the basis for determining whether an inherence authentication factor and a possession authentication factor have been established, because both are directed to evaluating authentication sufficiency for the transaction request; and wherein the examiner interprets “comparing the composite measure … to a threshold” to be the same as “using a possession authentication factor threshold” because both are directed to applying one or more threshold criteria to determine whether the request should be granted.)

Yang, Yuan, Hitchcock, and the instant application are analogous art because they are all directed to transaction authentication systems that determine whether to grant a transaction request. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 21 disclosed by Yang and Yuan to include the authentication assurance measuring approach disclosed by Hitchcock. One would be motivated to do so to effectively improve the assurance calculations, as suggested by Hitchcock ([Hitchcock, page 1] “A composite measure of authentication assurance is then determined from a combination of the user-level measure of authentication assurance and the account-level measure of authentication assurance”).

Regarding claim 27, Yang and Yuan teach the method of claim 21 (see the rejection of claim 21).
Yang and Yuan do not teach wherein the third portion of the federated machine learning model includes a plurality of authentication factor thresholds, and wherein determining whether to grant the particular transaction request includes comparing the one or more remote scores to the authentication factor thresholds.

Hitchcock teaches wherein the third portion of the federated machine learning model includes a plurality of authentication factor thresholds ([Hitchcock, col. 5, lines 47-55] “The data stored in the data store includes … one or more machine learning models 251, one or more transaction thresholds, and potentially other data.” and [Hitchcock, Fig. 4] “Determine Threshold Based at Least in Part on Transaction Type”, wherein the examiner interprets the “transaction thresholds” and the determination of a threshold based on transaction type to be the same as “a plurality of authentication factor thresholds” because both describe multiple threshold values/criteria that may be selected and used depending on the transaction context), and wherein determining whether to grant the particular transaction request includes comparing the one or more remote scores to the authentication factor thresholds ([Hitchcock, col. 5, lines 5-10] “The authentication service 215 causes responses to authentication requests to be generated based at least in part on comparing a measure of authentication assurance to a threshold.” and [Hitchcock, col. 20, line 51] “...authentication assurance meeting another threshold.”, wherein the examiner interprets “comparing a measure of authentication assurance to a threshold” to be the same as “comparing the one or more remote scores to the authentication factor thresholds” because both are directed to comparing derived assurance scores against threshold criteria to determine whether the request should be granted).

Yang, Yuan, Hitchcock, and the instant application are analogous art because they are all directed to transaction evaluation systems that determine whether to grant a transaction request.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 21 disclosed by Yang and Yuan to include the authorization thresholding technique disclosed by Hitchcock. One would be motivated to do so to efficiently enable server-side decision making, as suggested by Hitchcock ([Hitchcock, page 1] “A composite measure of authentication assurance is then determined from a combination of the user-level measure of authentication assurance and the account-level measure of authentication assurance. A response to the authentication request is generated based at least in part on comparing the composite measure of authentication assurance to a threshold”).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVAN KAPOOR, whose telephone number is (703) 756-1434. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST (times may vary). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVAN KAPOOR/
Examiner, Art Unit 2126

/VAN C MANG/
Primary Examiner, Art Unit 2126
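For readers mapping the claim language in the rejection above, the determination recited in claim 26 (establishing inherence and possession authentication factors from a remote score, a server score, and per-factor thresholds) can be sketched as follows. This is an illustrative assumption only: the averaging rule, the 0-100 scale, and all names are hypothetical, and neither the claims nor the quoted portions of Hitchcock specify this arithmetic.

```python
def factors_established(remote_score, server_score,
                        inherence_threshold, possession_threshold):
    # Hypothetical combination rule: a simple average of the remote and
    # server scores. The claims recite "using the remote score [and] the
    # server score" without specifying how they are combined.
    combined = (remote_score + server_score) / 2
    return {
        "inherence": combined >= inherence_threshold,
        "possession": combined >= possession_threshold,
    }

# Scores on an assumed 0-100 scale: here the combined score (80) meets the
# inherence threshold but not the possession threshold.
result = factors_established(90, 70,
                             inherence_threshold=70,
                             possession_threshold=85)
print(result)  # {'inherence': True, 'possession': False}
```

Each factor is checked against its own threshold, which is why claim 27's recitation of "a plurality of authentication factor thresholds" follows naturally from claim 26's two-factor determination.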

Prosecution Timeline

Aug 05, 2021 — Application Filed
Nov 14, 2024 — Non-Final Rejection (§101, §103)
Jan 30, 2025 — Interview Requested
Feb 13, 2025 — Examiner Interview Summary
Feb 13, 2025 — Applicant Interview (Telephonic)
Feb 19, 2025 — Response Filed
Apr 29, 2025 — Final Rejection (§101, §103)
Jun 18, 2025 — Interview Requested
Jul 01, 2025 — Examiner Interview Summary
Jul 01, 2025 — Applicant Interview (Telephonic)
Jul 10, 2025 — Request for Continued Examination
Jul 11, 2025 — Response after Non-Final Action
Aug 01, 2025 — Non-Final Rejection (§101, §103)
Oct 21, 2025 — Interview Requested
Oct 29, 2025 — Examiner Interview Summary
Oct 29, 2025 — Applicant Interview (Telephonic)
Nov 06, 2025 — Response Filed
Jan 10, 2026 — Final Rejection (§101, §103) (current)


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 11%
With Interview: 28% (+16.7%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
