Prosecution Insights
Last updated: April 19, 2026
Application No. 18/539,443

DATA COMPLIANCE SYSTEM AND METHOD

Final Rejection (§103)

Filed: Dec 14, 2023
Examiner: KIM, EUI H
Art Unit: 2453
Tech Center: 2400 — Computer Networks
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Grant Probability: 49% (Moderate)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 49% (76 granted / 156 resolved; -9.3% vs TC avg)
Interview Lift: +52.9% (allow rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 3y 4m (typical timeline); 28 applications currently pending
Total Applications: 184 (career history, across all art units)
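The figures above can be cross-checked with simple arithmetic. A minimal sketch, assuming "interview lift" is the difference in allowance rate, in percentage points, between resolved cases with and without an examiner interview:

```python
# Cross-check of the examiner statistics displayed above.
# Assumption: "interview lift" is the allowance-rate difference, in
# percentage points, between cases with and without an interview.

granted, resolved = 76, 156
career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 48.7%, shown rounded to 49%

rate_with_interview = 99.0   # "99% With Interview"
interview_lift = 52.9        # "+52.9% Interview Lift"
rate_without = rate_with_interview - interview_lift
print(f"Implied allow rate without interview: {rate_without:.1f}%")  # 46.1%
```

Under that assumption, the 49% career rate and the 99%-with-interview figure are consistent with the displayed +52.9% lift.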

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates; based on career data from 156 resolved cases.
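Each per-statute delta implies a Tech Center average (examiner rate minus delta). A small sketch, using only the percentages shown above:

```python
# Back out the Tech Center average allowance rate implied by each
# statute-specific figure above (examiner rate minus delta vs TC avg).

stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (10.5, -29.5),
    "103": (65.9, +25.9),
    "102": (10.4, -29.6),
    "112": (7.1, -32.9),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"\u00a7{statute}: implied TC average {tc_avg:.1f}%")
```

All four statutes back out to the same implied Tech Center average of about 40%, which suggests the deltas were computed against a single TC-wide baseline.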

Office Action

§103
DETAILED ACTION

This Office action is in response to the amendments filed on 10/14/2025. Claim 6 is cancelled. Claims 1, 4-5, 7-8, 11-12, 17, and 19-20 are amended. Claims 1-5 and 7-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant’s arguments regarding the objection to Claim 5 (Remarks, p. 7, filed 10/14/2025) have been fully considered and are persuasive. The objection to Claim 5 has been withdrawn.

Applicant’s arguments with respect to the 35 U.S.C. 103 rejections of the claims (Remarks, pp. 7-11, filed 10/14/2025) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-5, 7, 11, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1), further in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1), and further in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1).
Regarding Claim 1, Bagwell discloses A data processing system comprising (Bagwell: Fig. 1A the system including at least UPF-C 107): a processor; and a memory in communication with the processor, the memory comprising executable instructions that, when executed by the processor (Bagwell: para.0085-0086), cause the data processing system to perform functions of: retrieving data rules from a rule repository (Bagwell: para.0021 “SMF 109 and/or some other device or system”), the rule repository being a repository that stores one or more rules that are associated with at least one of storage or transfer of data by one or more devices in a computing environment (Bagwell: para.0021 “UPF-C 107 may obtain (at 108) policies associated with the packet from SMF 109 and/or some other device or system. For example, UPF-C 107 may be communicatively coupled to SMF 109 via a N4 interface. As noted above, the policies may include QoS policies, content filtering policies, and/or other suitable policies.” The SMF or other device/system stores a plurality of policies associated with the transfer of data, para.0029 “The configuration parameters, once configured at UPF-U 105, may cause UPF-U 105 to treat traffic in a manner consistent with the policies (provided at 114) associated with a particular flow with which the traffic is associated.” describes how the device uses the policies to affect data after policies are installed); generating a configuration file for configuring a Field Programmable Gate Array (FPGA) based on the data rules (Bagwell: para.0021 “UPF-C 107 may obtain (at 108) policies associated with the packet from SMF 109 and/or some other device or system. ” para.0023 “Additionally, UPF-C 107 may provide (at 114) an indication of the one or more policies (e.g., as obtained at 108) to UPF-U 105. 
For example, UPF-C 107 may communicate with UPF-U 105 via an application programming interface (“API”), a messaging protocol, and/or some other suitable communication pathway, in order to indicate that UPF-C 107 is providing policy information associated with the traffic provided (at 106) to UPF-C 107.” The UPF-107 uses the set of policies and generates a configuration file comprising the policies, and an indication regarding the policies (described in para.0024-0027), and provides this to the UPF 105-U in step 114.); and transmitting the configuration file to a FPGA configuration loader (Bagwell: Fig. 1A routing component 101, including UPF-U 105) for loading the configuration file onto the FPGA (Bagwell: para.0034 “ As discussed above, UPF-U 105 may have received (at 114) policy information from UPF-C 107, as well as one or more identifiers (e.g., 5-tuples and/or other suitable identifiers) with which such policies are associated.” Para.0028 “For example, UPF-U 105 … may include a Field Programmable Gate Array (“FPGA”). The FPGA may be configurable using the P4 programming language, another programming language, and/or some other suitable configuration technique. 
UPF-U 105 may perform (at 116) a configuration process (e.g., may configure the FPGA) based on the received policies.” The UPF-C 107 sends the configuration file comprising the policies and the identifiers to UPF-U 105 of routing component 101, in order to configure an FPGA.), wherein the FPGA utilizes the configuration file to route the data in the computing environment according to the data rules (Bagwell: para.0056 “For example, UPF-U 105 may include FPGA or other configurable resources that may be configured (e.g., based on the P4 programming language and/or other suitable parameters) to implement the received policies for traffic associated with a given flow.” para.0029 “The configuration parameters, once configured at UPF-U 105, may cause UPF-U 105 to treat traffic in a manner consistent with the policies (provided at 114) associated with a particular flow with which the traffic is associated. For example, UPF-U 105 may perform QoS-related traffic treatment, buffering, traffic duplication, access control, lawful interception, performance management counter collection (e.g., to track an amount of traffic associated with a given flow), GTP termination (e.g., in lieu of UPF-C 107, for GTP traffic that indicates UPF-C 107 as an endpoint), redirection, gating, steering, and/or other suitable treatment as indicated in the received policies.” para.0034 “As discussed above, UPF-U 105 may have received (at 114) policy information from UPF-C 107, as well as one or more identifiers (e.g., 5-tuples and/or other suitable identifiers) with which such policies are associated. UPF-U 105 may accordingly apply (at 126) the applicable policies to the traffic, such as QoS treatment and/or other suitable treatment based on the policies.” Once configured with the policy information, the FPGA implements the policies in the computing environment on received flow, based on the configuration file received from UPF-C, the traffic is routed, i.e. 
redirection, gating steering, based on the policies.). However Bagwell does not explicitly disclose retrieving metadata about data flow in the computing environment from a policy governor; retrieving data classification information of the data used by one or more services provided by the computing environment; retrieving network topography data of the computing environment; generating a configuration file for configuring a Field Programmable Gate Array (FPGA) based on the metadata, the network topography data, and the data classification information of the data used by the one or more services provided by the computing environment, the computing environment guided by the network topography data to include a minimum set of the data rules to handle the storage or the transfer of the data by the one or more services; and wherein the FPGA utilizes the configuration file to route the data in the computing environment according to the minimum set of the data rules, the metadata, and the data classification information. Ding discloses retrieving metadata about data flow in the computing environment from a policy governor (Ding: Fig. 1, para.0055 “The service flows 106 are extracted by the network flow characterizer 102.” para.0060 “The network security rules generator 114 receives the extracted and characterized groups of network flows and generates network security rules based on those network flows received. ” network flow characterizer takes a sample of network flows, and classifies and characterizes the flows into groups. 
Rules generator 114 takes the flows and generates rules regarding the flows for implementation at a network device, para.0065); generating a configuration file for configuring a network device based on the metadata (Ding: para.0065 “The configuration files generator 124 then transform the network security rules into configuration files (computer-readable memory structures comprising machine controls), which may then be sent to a device(s), such as a router or firewall, to implement the network security rules generated by the system data flow diagram 100. The network security rules may also be transformed into a security policy.” The metadata about the data flow retrieved in para.0055, para.0060 above are consolidated into configuration files to be implemented in network devices.), wherein the network device utilizes the configuration file to route the data in the computing environment according to the metadata (Ding: para.0065 “The configuration files generator 124 then transform the network security rules into configuration files (computer-readable memory structures comprising machine controls), which may then be sent to a device(s), such as a router or firewall, to implement the network security rules generated by the system data flow diagram 100.” para.0044 “Another purpose of the present disclosure is to automate the generation of firewall, or filtering, controls to filter network traffic. In one exemplary embodiment, filtering is a binary operation (i.e., on-off, let traffic through or not) and may also be generalized to be rate limiting (i.e., only a certain amount of certain types of traffic can be allowed through in any specific period, and if the traffic exceeds that limit it is dropped or filtered out).” Traffic is routed based on the generated network security rules, i.e. the rules generated based on the metadata.). 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell with Ding in order to incorporate retrieving metadata about data flow in the computing environment from a policy governor, generating a configuration file for configuring a network device based on the metadata, wherein the network device utilizes the configuration file to route the data in the computing environment according to the metadata, and apply this process to the FPGA of Bagwell. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved generation of network security rules with fewer errors (Ding: para.0002). However Bagwell-Ding does not explicitly disclose retrieving data classification information of the data used by one or more services provided by the computing environment; retrieving network topography data of the computing environment; generating a configuration file for configuring a Field Programmable Gate Array (FPGA) based on the network topography data, and the data classification information of the data used by the one or more services provided by the computing environment, the computing environment guided by the network topography data to include a minimum set of the data rules to handle the storage or the transfer of the data by the one or more services; and wherein the FPGA utilizes the configuration file to route the data in the computing environment according to the minimum set of the data rules, and the data classification information. 
Threefoot discloses retrieving data classification information of the data used by one or more services provided by the computing environment (Threefoot: para.0056 “Data collector 304 may also collect network data received by the data collection components, and store the collected data in network information database 312.” Para.0070 “The traffic data provided to network information database 312 by data collector 304 may include (for each node (e.g., router) from which data is collected) traffic data for different Quality-of-Service (QoS) classes or differentiated service code point (DSCP) markings.” Para.0071 “ Video/Priority Data (High) QoS 606-1, Video/`Priority Data (Low) QoS 608-1,” para.0034“Topology manager 110 may derive routing rules (also referred to as a “routing table”) based on network state variables, a particular PIP network 104, network data, and routes (i.e., paths from a user device to a cloud 102), and ranking policies. The routing rules may specify, for a particular set of network state variables and/or network data (e.g., a network address of a domain name system (DNS), a network address of a device that requested the cloud service, the time of the request, bandwidths available to clouds 102, availability of a particular service, health status of clouds 102, load conditions of clouds 102 for a type of service, etc.), a list clouds 102 or paths that may provide the optimum service (e.g., the fastest download time).” Fig. 6, the data collected and stored in database 312 include classification of data used by the services, such as video business etc. including its priority, type of service, and any of the classification information in para.0034.); retrieving network topography data of the computing environment (Threefoot: para.0025 “On PIP network-by-PIP network basis, topology manager 110 collects network service topology information from clouds 102, network topology information from PIP network 104, and policies from administration device 116. 
Based on the received information, topology manager 110 generates routing rules or a routing table that specify optimum paths over which traffic flow from/to devices in PIPS 104 to/from clouds 102.” Network topology information is obtained to generate routing rules); generating a configuration file for configuring a Field Programmable Gate Array (FPGA) (Threefoot: para.0043-0044 “FIG. 2 is a block diagram of exemplary components of a network device 200. Network device 200 may correspond to any of the devices illustrated in network 100…a Field Programmable Gate Array (FPGA)”) based on the network topography data (Threefoot: para.0025 “On PIP network-by-PIP network basis, topology manager 110 collects network service topology information from clouds 102, network topology information from PIP network 104, and policies from administration device 116. Based on the received information, topology manager 110 generates routing rules or a routing table that specify optimum paths over which traffic flow from/to devices in PIPS 104 to/from clouds 102.” Network topology information is obtained to generate routing rules), and the data classification information of the data used by the one or more services provided by the computing environment (Threefoot: para.0034 “Topology manager 110 may derive routing rules (also referred to as a “routing table”) based on network state variables, a particular PIP network 104, network data, and routes (i.e., paths from a user device to a cloud 102), and ranking policies. 
The routing rules may specify, for a particular set of network state variables and/or network data (e.g., a network address of a domain name system (DNS), a network address of a device that requested the cloud service, the time of the request, bandwidths available to clouds 102, availability of a particular service, health status of clouds 102, load conditions of clouds 102 for a type of service, etc.), a list clouds 102 or paths that may provide the optimum service (e.g., the fastest download time).” ), the computing environment guided by the network topography data to include the data rules to handle the storage or the transfer of the data by the one or more services (Threefoot: para.0025 “On PIP network-by-PIP network basis, topology manager 110 collects network service topology information from clouds 102, network topology information from PIP network 104, and policies from administration device 116. Based on the received information, topology manager 110 generates routing rules or a routing table that specify optimum paths over which traffic flow from/to devices in PIPS 104 to/from clouds 102.” para.0034“Topology manager 110 may derive routing rules (also referred to as a “routing table”) based on network state variables, a particular PIP network 104, network data, and routes (i.e., paths from a user device to a cloud 102), and ranking policies. 
The routing rules may specify, for a particular set of network state variables and/or network data (e.g., a network address of a domain name system (DNS), a network address of a device that requested the cloud service, the time of the request, bandwidths available to clouds 102, availability of a particular service, health status of clouds 102, load conditions of clouds 102 for a type of service, etc.), a list clouds 102 or paths that may provide the optimum service (e.g., the fastest download time).” Network topology information is obtained to generate routing rules); and wherein the FPGA utilizes the configuration file to route the data in the computing environment according to the data classification information (Threefoot: para.0129-0130 “As shown, topology manager 110 may receive policies and/or rules 1620 from administration device. Furthermore, based on the received policies/rules 1620, topology manager 110 may generate routing rules/table 1622 and forward routing rules/table 1622 to request router 112. When request router 112 receives a request for a path 1630 to cloud 102 from a user device 114, request router 112 may provide a redirection address (e.g., IP address, URL, URL, etc.) 1632 to user device 114. Based on redirection address 1632, user device 104 may send a request 1634 for service to cloud 102.” Fig. 16, 1622-1632 the request router implements the rules obtained from the topology manager, i.e. the configuration file, according to the data classification information used to generate the rules as shown above.). 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding with Threefoot in order to incorporate retrieving data classification information of the data used by one or more services provided by the computing environment, retrieving network topography data of the computing environment, generating a configuration file for configuring a Field Programmable Gate Array (FPGA) based on the network topography data and the data classification information of the data used by the one or more services provided by the computing environment, the computing environment guided by the network topography data to include the data rules to handle the storage or the transfer of the data by the one or more services, and wherein the FPGA utilizes the configuration file to route the data in the computing environment according to the data classification information. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of optimizing network routing (Threefoot: abstract, para.0023).

However Bagwell-Ding-Threefoot does not explicitly disclose the computing environment guided by the network topography data to include a minimum set of the data rules to handle the storage or the transfer of the data by the one or more services; and wherein the FPGA utilizes the configuration file to route the data in the computing environment according to the minimum set of the data rules. 
Zhao discloses the computing environment guided by the network topography data to include a minimum set of the data rules to handle the storage or the transfer of the data by the one or more services (Zhao: para.0066 “In addition, the electronic device may access the Internet through the cellular network, and continue to execute a previous service through the wireless local area network, for example, continue to execute the service of remote printing.” para.0112 “In the embodiments, the electronic device may adjust the routing table based on the change of a network status, mainly adjusting the routing rule in the routing table based on the change of the network status. An adjustment manner is as follows. When a connection to the wireless local area network is broken or the Internet can be accessed through the wireless local area network, the routing rule of the wireless local area network is deleted. If the Internet cannot be accessed through the wireless local area network, and the user enables the network parallel control setting, the electronic device may continue to add the routing rule of the wireless local area network to the routing table, so that all routing rules in the routing table match currently available networks, reducing routing rules that are not associated with the current networks in the routing table, and reducing memory usage of the routing table. For example, if a connection to the wireless local area network is broken, that is, the currently available networks of the electronic device do not include the wireless local area network, the routing rule of the wireless local area network is an irrelevant routing rule, and the electronic device can delete the routing rule, to reduce a quantity of the routing rules in the routing table, thereby reducing memory usage of the routing table.” Whenever the network topology changes, i.e. 
links are no longer available in the computing environment, the number of rules is changed by removing rules that no longer apply or adding a rule that applies, thereby always maintaining a minimum number of rules that match the number of available paths); and wherein the FPGA (Zhao: para.0206 FPGA) utilizes the configuration file to route the data in the computing environment according to the minimum set of the data rules (Zhao: para.0009 “The transmitting, in the first network and a second network, to-be-transmitted data through a network that matches the to-be-transmitted data includes: transmitting, in the first network and the second network based on a network parameter of the to-be-transmitted data and the routing rule of the first network, the to-be-transmitted data through the network that matches the to-be-transmitted data.” Based on the data to be transmitted matching a routing rule, the data is routed.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot with Zhao in order to incorporate the computing environment guided by the network topography data to include a minimum set of the data rules to handle the storage or the transfer of the data by the one or more services; and wherein the FPGA utilizes the configuration file to route the data in the computing environment according to the minimum set of the data rules. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of reduced memory usage of the routing table in storage of the device (Zhao: para.0112).

Regarding Claim 4, Bagwell-Ding-Threefoot discloses claim 1 as set forth above. 
However Bagwell-Ding does not explicitly disclose wherein the data classification information of the data used by the one or more services provided by the computing environment is extracted from one or more data sources associated with the one or more services via a Single Source of Truth (SSOT) extracting engine. Threefoot discloses wherein the data classification information of the data used by the one or more services provided by the computing environment is extracted from one or more data sources associated with the one or more services via a Single Source of Truth (SSOT) extracting engine (Threefoot: data collector 304 para.0056 “Data collector 304 may also collect network data received by the data collection components, and store the collected data in network information database 312.” Para.0070 “The traffic data provided to network information database 312 by data collector 304 may include (for each node (e.g., router) from which data is collected) traffic data for different Quality-of-Service (QoS) classes or differentiated service code point (DSCP) markings.” Para.0071 “Video/Priority Data (High) QoS 606-1, Video/Priority Data (Low) QoS 608-1,” Fig. 6, the data collected and stored in database 312 include classification of data used by the services, such as video business etc. including its priority, by the data collector 304, a Single Source of Truth extracting engine, from various sources such as data collection components regarding service metrics. Examiner notes: a single source of truth under broadest reasonable interpretation refers to the concept of all data for a system being stored in a single location, therefore PCTM 108 is a single source of truth, and the data collector 304 is a single source of truth extracting engine.). 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding with Threefoot in order to incorporate wherein the data classification information of the data used by the one or more services provided by the computing environment is extracted from one or more data sources associated with the one or more services via a Single Source of Truth (SSOT) extracting engine. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of optimizing network routing (Threefoot: abstract, para.0023).

Regarding Claim 5, Bagwell-Ding-Threefoot discloses claim 4 as set forth above. However Bagwell-Ding does not explicitly disclose wherein the data classification information extracted via the SSOT extracting engine is used to store a SSOT document for the computing environment in a data store. Threefoot discloses wherein the data classification information extracted via the SSOT extracting engine is used to store a SSOT document for the computing environment in a data store (Threefoot: data collector 304 para.0056 “Data collector 304 may also collect network data received by the data collection components, and store the collected data in network information database 312.” Para.0070 “The traffic data provided to network information database 312 by data collector 304 may include (for each node (e.g., router) from which data is collected) traffic data for different Quality-of-Service (QoS) classes or differentiated service code point (DSCP) markings.” Para.0071 “Video/Priority Data (High) QoS 606-1, Video/Priority Data (Low) QoS 608-1,” para.0057 “In some implementations, data collector 304 may invoke APIs made available by cloud service providers to determine or identify (for devices in clouds 102) CPUs, memory, server/storage throughout each server/storage farm in the cloud 102, etc. Data collector 304 may store such data in cloud database 310.” Fig. 6, the information collected by the data collector 304, is used to generate documents that represent QoS information for services such as in Fig. 6 or Fig. 11 corresponding to para.0057; para.0099-0104 explain the information for each service. These are stored in database 312 or 310, each of which can be a data store.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding with Threefoot in order to incorporate wherein the data classification information extracted via the SSOT extracting engine is used to store a SSOT document for the computing environment in a data store. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of optimizing network routing (Threefoot: abstract, para.0023).

Regarding Claim 7, Bagwell-Ding-Threefoot discloses claim 1 as set forth above. However Bagwell-Ding does not explicitly disclose wherein the network topography data of the computing environment is retrieved from the FPGA configuration loader. Threefoot discloses wherein the network topography data of the computing environment is retrieved from a network router (Threefoot: para.0128 “FIG. 16 is a diagram illustrating an exemplary flow of messages between network devices/elements of FIG. 1B. As shown, topology manager 110 may receive network topology information 1612, network performance data 1614, cloud information 1616 (e.g., service or cloud topology information, cloud device information, etc.), and cloud or service performance data 1618 from network routers 1602 and clouds 102.” para.0025 “On PIP network-by-PIP network basis, topology manager 110 collects network service topology information from clouds 102, network topology information from PIP network 104, and policies from administration device 116. 
Based on the received information, topology manager 110 generates routing rules or a routing table that specify optimum paths over which traffic flow from/to devices in PIPS 104 to/from clouds 102.” Network topology information is obtained from network routers to generate routing rules). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding with Threefoot in order to incorporate wherein the network topography data of the computing environment is retrieved from a network router, and apply this concept to the FPGA configuration loader of Bagwell, which is a router, i.e. routing component 101. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of optimizing network routing (Threefoot: abstract, para.0023).

Regarding Claim 11, it teaches all of the same steps as claim 1 but in a method for ensuring data compliance in a computing environment comprising: (Bagwell: Fig. 1A, the method comprising the steps of at least Fig. 1A, para.0013). Therefore the supporting rationale for the rejection of claim 1 applies equally to claim 11.

Regarding Claims 17-18, they teach all of the same steps as claims 1 and 10, but in a non-transitory computer readable medium on which are stored instructions that when executed cause a programmable device to perform functions of: (Bagwell: para.0089). Therefore the supporting rationale for the rejections of claims 1 and 10 applies equally to claims 17-18.

Regarding Claim 19, Bagwell-Ding-Threefoot-Zhao discloses claim 17 as set forth above. 
Bagwell further discloses wherein the FPGA is included in a network device of the computing environment, the network device being used to route the data in the computing environment (Bagwell: para.0029 “The configuration parameters, once configured at UPF-U 105, may cause UPF-U 105 to treat traffic in a manner consistent with the policies (provided at 114) associated with a particular flow with which the traffic is associated. For example, UPF-U 105 may perform QoS-related traffic treatment, buffering, traffic duplication, access control, lawful interception, performance management counter collection (e.g., to track an amount of traffic associated with a given flow), GTP termination (e.g., in lieu of UPF-C 107, for GTP traffic that indicates UPF-C 107 as an endpoint), redirection, gating, steering, and/or other suitable treatment as indicated in the received policies.” para.0034 “As discussed above, UPF-U 105 may have received (at 114) policy information from UPF-C 107, as well as one or more identifiers (e.g., 5-tuples and/or other suitable identifiers) with which such policies are associated. UPF-U 105 may accordingly apply (at 126) the applicable policies to the traffic, such as QoS treatment and/or other suitable treatment based on the policies.” Based on the configuration file received from UPF-C, the traffic is routed, i.e. redirection, gating, steering, based on the policies by the routing component 101, which comprises the UPF-U with the FPGA, for example, in the computing environment of Fig. 3.).

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1), further in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1), further in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1), and further in view of Young et al. (hereinafter Young, US 2024/0163724 A1). 
Regarding Claim 2, Bagwell-Ding-Threefoot-Zhao discloses claim 1 as set forth above. While Bagwell-Ding-Threefoot-Zhao discloses the process of obtaining rules from a rule repository and generating configuration files, it does not explicitly disclose wherein the data rules are extracted from the rule repository by a rule extracting engine and retrieved from the rule extracting engine by a FPGA configuration generator.

Young discloses wherein the data rules are extracted from the rule repository (Young: Fig. 5 Policy DB 510, or 518 of policy DB) by a rule extracting engine (Young: Fig. 5 Policy engine 508, para.0070 “Furthermore, configuration generator 506 may query policy engine 508 to retrieve policy rules applicable to the flow, from design rules DB 518 and flow properties DB 520 (block 1008). Policy engine 510 may retrieve and provide the requested rules to configuration generator 506.” Policy engine 508 is the rule extracting engine, and it obtains rules from policy DB 518.) and retrieved from the rule extracting engine by a FPGA configuration generator (Young: Fig. 5 Configuration Generator 506, para.0076 FPGA, para.0070 “Furthermore, configuration generator 506 may query policy engine 508 to retrieve policy rules applicable to the flow, from design rules DB 518 and flow properties DB 520 (block 1008). Policy engine 510 may retrieve and provide the requested rules to configuration generator 506.” The configuration generator 506 obtains the policies that were stored on Policy DB 510, from Policy Engine 508).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Young in order to incorporate wherein the data rules are extracted from the rule repository by a rule extracting engine and retrieved from the rule extracting engine by a FPGA configuration generator.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of reducing the load at the configuration generator by having a dedicated engine perform the storing and pushing of policies (Young: para.0055-0056).

Regarding Claim 3, Bagwell-Ding-Threefoot-Zhao discloses claim 1 as set forth above. However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose wherein the metadata are extracted from the policy governor by a policy extracting engine and retrieved from the policy extracting engine by a FPGA configuration generator.

Young discloses wherein the metadata are extracted from the policy governor (Young: Fig. 5 Policy DB 510, or the DB 520 of DB 510) by a policy extracting engine (Young: Fig. 5 Policy engine 508, para.0070 “Furthermore, configuration generator 506 may query policy engine 508 to retrieve policy rules applicable to the flow, from design rules DB 518 and flow properties DB 520 (block 1008). Policy engine 510 may retrieve and provide the requested rules to configuration generator 506.” Policy engine 508 is the policy extracting engine, and it obtains metadata about the flow from DB 510/DB 520.) and retrieved from the policy extracting engine by a FPGA configuration generator (Young: Fig. 5 Configuration Generator 506, para.0076 FPGA, para.0070 “Furthermore, configuration generator 506 may query policy engine 508 to retrieve policy rules applicable to the flow, from design rules DB 518 and flow properties DB 520 (block 1008). Policy engine 510 may retrieve and provide the requested rules to configuration generator 506.” The configuration generator 506 obtains the metadata, including at least the flow property metadata obtained by the policy extracting engine, Fig. 5.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Young in order to incorporate wherein the metadata are extracted from the policy governor by a policy extracting engine and retrieved from the policy extracting engine by a FPGA configuration generator. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of reducing the load at the configuration generator by having a dedicated engine perform the storing and pushing of policies (Young: para.0055-0056).

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1) in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1) in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1) in view of Timmons (US 2022/0200915 A1).

Regarding Claim 8, Bagwell-Ding-Threefoot-Zhao discloses claim 1 as set forth above. However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose wherein the FPGA configuration loader receives the network topography data from at least one of a network graph service, a network state service, and a control plane.

Timmons discloses wherein the router receives the network topography data from at least one of a network graph service, a network state service, and a control plane (Timmons: para.0050 “In some examples, routers 110 operate according to a publish-subscribe model. According to this model, each router 110 publishes, to central repository 120, one or more changes in services reachable from the router 110 and/or one or more changes in a network topology for reaching the services from the router 110. Other routers 110 may subscribe to receive publications for the router 110 from central repository 120.
In response to receiving changes in the service and topology state information for a router 110, central repository 120 stores the changes in the service and topology state information for the router 110. Further, central repository 120 publishes the changes in the service and topology state information for the router 110 to other routers 110 that are subscribed to receive updates and/or changes for the router 110.” The central repository obtains topology changes and publishes these changes to all other routers. The central repository is at least a network graph service or a network state service, as it maintains the topology of the network and provides it as a service to the routers.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Timmons in order to incorporate wherein the router receives the network topography data from at least one of a network graph service, a network state service, and a control plane, and to apply this concept to the FPGA configuration loader of Bagwell implemented as a router. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving performance of a computer network with better control of routing (Timmons: para.0008).

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1) in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1) in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1) in view of Boyapalle et al. (hereinafter Boyapalle, US 11,979,327 B1).

Regarding Claim 9, Bagwell-Ding-Threefoot-Zhao discloses claim 1 as set forth above.
However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose wherein the FPGA configuration loader utilizes an artificial intelligence model to determine whether a new configuration file should be generated for the FPGA.

Boyapalle discloses wherein the orchestrator utilizes an artificial intelligence model to determine whether a new configuration file should be generated for the IHS (Boyapalle: col. 8 lines 46-58 “OS agent or service 302 may collect telemetry data and transmit the telemetry data to cloud orchestrator 401 (e.g., one or remote services 206A-N). Cloud orchestrator 401 may be configured to execute one or more ML/AI models upon the telemetry data to determine whether changes to traffic routing provided by the current policy can be improved to achieve a particular Key Performance Indicator (KPI), such as a user experience metric, a productivity metric, latency, throughput, etc.” The orchestrator 401 determines, using an AI model, that the routing policy should be changed to achieve a traffic metric.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Boyapalle in order to incorporate wherein the orchestrator utilizes an artificial intelligence model to determine whether a new configuration file should be generated for the IHS, and to apply this concept to the FPGA configuration loader and the FPGA of Bagwell. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved traffic routing (Boyapalle: col. 8 lines 46-58).

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1) in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1) in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1) in view of Turan et al. (hereinafter Turan, US 2021/0110099 A1).
Regarding Claim 10, Bagwell-Ding-Threefoot-Zhao discloses claim 1 as set forth above. However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose wherein the FPGA configuration loader utilizes a hardware proxy to load the configuration file onto the FPGA.

Turan discloses wherein the FPGA configuration loader (Turan: Fig. 2 configuration data loading equipment 54) utilizes a hardware proxy (Turan: Fig. 2 configuration device 40) to load the configuration file (Turan: Fig. 2 configuration data) onto the FPGA (Turan: FPGA 10, Fig. 6, Fig. 2, para.0035 “As shown in FIG. 2, the configuration data produced by a logic design system 56 may be provided to equipment 54 over a path such as path 58. The equipment 54 provides the configuration data to device 40, so that device 40 can later provide this configuration data to the programmable logic device 10 over path 42.” The configuration loader uses hardware proxy 40 to configure the FPGA 10.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Turan in order to incorporate wherein the FPGA configuration loader utilizes a hardware proxy to load the configuration file onto the FPGA. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved security that would come with dedicated read-only memory being used for configuration of the FPGA (Turan: para.0032).

Claim(s) 12-13, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1) in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1) in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1) in view of Dickinson et al. (hereinafter Dickinson, US 10,862,796 B1).

Regarding Claim 12, Bagwell-Ding-Threefoot-Zhao discloses claim 11 as set forth above.
However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose providing at least one of the data rules, the metadata, the network topography data, the data classification information and one or more auth configuration files associated with the one or more services to a report generating engine, wherein the report generating engine utilizes at least one of the data rules, the metadata, the network topography data, the data classification information and authorization configuration data associated with the one or more services to the report generating engine to generate one or more data compliance documents.

Dickinson discloses providing at least one of the data rules, the metadata, the network topography data, the data classification information and one or more auth configuration files associated with the one or more services to a report generating engine (Dickinson: col. 18 lines 36-60 “As indicated at 1130, in some embodiments, the flow policy service may obtain and aggregate flow logs to generate flow reports for the client. The network appliances attached to or within a client's virtual network may generate flow logs based on the client packets processed at the network appliances. In some embodiments, network devices (e.g., edge routers, host devices, etc.) that apply flow policy rules may also generate flow logs.
The flow logs may, for example, be collected and aggregated by the flow policy service to generate flow reports that may be used by the client to confirm that traffic to, from, or within their virtual network is flowing through the correct network appliances according to the flow policy rules.” The flow policy service is the report generating engine, and it collects flow metadata in order to generate reports.), wherein the report generating engine utilizes at least one of the data rules, the metadata, the network topography data, the data classification information and authorization configuration data associated with the one or more services to the report generating engine to generate one or more data compliance documents (Dickinson: col. 18 lines 36-60 “In some embodiments, network devices (e.g., edge routers, host devices, etc.) that apply flow policy rules may also generate flow logs. The flow logs may, for example, be collected and aggregated by the flow policy service to generate flow reports that may be used by the client to confirm that traffic to, from, or within their virtual network is flowing through the correct network appliances according to the flow policy rules.” The flow logs, i.e. metadata, are aggregated into a flow report, the data compliance document.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Dickinson in order to incorporate providing at least one of the data rules, the metadata, the network topography data, the data classification information and one or more auth configuration files associated with the one or more services to a report generating engine, wherein the report generating engine utilizes at least one of the data rules, the metadata, the network topography data, the data classification information and authorization configuration data associated with the one or more services to the report generating engine to generate one or more data compliance documents. One of ordinary skill would have been motivated to combine because of the expected benefit of improved user experience by being able to personally confirm compliance with rules in the system (Dickinson: col. 18 lines 36-60).

Regarding Claim 13, Bagwell-Ding-Threefoot-Zhao-Dickinson discloses claim 12 as set forth above. However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose wherein the report generating engine is a software service or software application for generating data compliance documents.

Dickinson discloses wherein the report generating engine is a software service or software application for generating data compliance documents (Dickinson: col. 18 lines 36-60 “In some embodiments, network devices (e.g., edge routers, host devices, etc.) that apply flow policy rules may also generate flow logs.
The flow logs may, for example, be collected and aggregated by the flow policy service to generate flow reports that may be used by the client to confirm that traffic to, from, or within their virtual network is flowing through the correct network appliances according to the flow policy rules.” The flow policy service is the report generating engine, and it is considered to be both a software service and a software application.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Dickinson in order to incorporate wherein the report generating engine is a software service or software application for generating data compliance documents. One of ordinary skill would have been motivated to combine because of the expected benefit of improved user experience by being able to personally confirm compliance with rules in the system (Dickinson: col. 18 lines 36-60).

Regarding Claim 16, Bagwell-Ding-Threefoot-Zhao-Dickinson discloses claim 12 as set forth above. However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose wherein the report generating engine transmits the one or more data compliance documents to a reporting tool.

Dickinson discloses wherein the report generating engine transmits the one or more data compliance documents to a reporting tool (Dickinson: col. 10 lines 15-37 “Aggregation 236 engine may implement, but is not limited to, logic for receiving flow logs from appliances 214 and/or network devices 208 via API 232B, and logic for aggregating and formatting the flow logs to provide flow reports 239 to the interface 284 on client device 282 via API 232A.” The interface 284 is the reporting tool, and the aggregation engine of the flow policy service that produces the flow report provides the flow report to the interface.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Dickinson in order to incorporate wherein the report generating engine transmits the one or more data compliance documents to a reporting tool. One of ordinary skill would have been motivated to combine because of the expected benefit of improved user experience by being able to personally confirm compliance with rules in the system (Dickinson: col. 18 lines 36-60).

Claim(s) 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1) in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1) in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1) in view of Dickinson et al. (hereinafter Dickinson, US 10,862,796 B1) further in view of Yum et al. (hereinafter Yum, US 2015/0067171 A1).

Regarding Claim 14, Bagwell-Ding-Threefoot-Zhao-Dickinson discloses claim 12 as set forth above. However, Bagwell-Ding-Threefoot-Zhao-Dickinson does not explicitly disclose wherein the report generating engine generates a user selected type of data compliance document.

Yum discloses wherein the report generating engine generates a user selected type of data compliance document (Yum: para.0096 “Once the data selection criteria are defined, the user may choose a pre-defined template to present the information. The report may be viewed online, saved as a document, and/or transferred out via different protocols such as email or secure shell (SSH) file transfer protocol (SFTP), etc.” The template/type of document to be generated is selected by the user.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao-Dickinson with Yum in order to incorporate wherein the report generating engine generates a user selected type of data compliance document. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved user experience by being able to customize the report to the user's liking (Yum: para.0096).

Regarding Claim 15, Bagwell-Ding-Threefoot-Zhao-Dickinson discloses claim 12 as set forth above. However, Bagwell-Ding-Threefoot-Zhao-Dickinson does not explicitly disclose wherein the report generating engine enables a user to select one or more types of data points to be included in a data compliance document.

Yum discloses wherein the report generating engine enables a user to select one or more types of data points to be included in a data compliance document (Yum: para.0096 “This module may be responsible for generating different views of information generated by the cloud service brokering facility 204, which views may be provided to the user through the interface facility 202. The user may extract the part of information of interest by specifying filtering criteria for each data set. The filtering criteria may include time, duration, resource type, resource location, users, selected data fields, etc. The filtering criteria may be saved for reuse. Once the data selection criteria are defined, the user may choose a pre-defined template to present the information. The report may be viewed online, saved as a document, and/or transferred out via different protocols such as email or secure shell (SSH) file transfer protocol (SFTP), etc.” The user is able to select types of data to be included in the report.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao-Dickinson with Yum in order to incorporate wherein the report generating engine enables a user to select one or more types of data points to be included in a data compliance document. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved user experience by being able to customize the report to the user's liking (Yum: para.0096).

Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bagwell et al. (hereinafter Bagwell, US 2022/0021605 A1) in view of Ding et al. (hereinafter Ding, US 2019/0036882 A1) in view of Threefoot et al. (hereinafter Threefoot, US 2015/0052247 A1) in view of Zhao et al. (hereinafter Zhao, US 2024/0389010 A1) in view of Carney et al. (hereinafter Carney, US 2012/0167160 A1).

Regarding Claim 20, Bagwell-Ding-Threefoot-Zhao discloses claim 17 as set forth above. However, Bagwell-Ding-Threefoot-Zhao does not explicitly disclose wherein the computing environment includes a plurality of FPGAs and the FPGA configuration loader determines the configuration file to load onto each of the plurality of FPGAs.

Carney discloses wherein the computing environment includes a plurality of FPGAs (Carney: para.0024 “The router policy server may manage a routing table for a network, may determine which routers are to receive routing information based on a policy associated with each router, and may provide the routing information to the other routers based on the determined policy.” The environment has a plurality of routers, each router operated via FPGA, para.0041-0042 “Device 300 may correspond to router policy server 115 or to control unit 240 of trusted router 125….
Processor 320 may include… field programmable gate arrays (FPGAs)”) and the FPGA configuration loader determines the configuration file to load onto each of the plurality of FPGAs (Carney: para.0024 “The router policy server may manage a routing table for a network, may determine which routers are to receive routing information based on a policy associated with each router, and may provide the routing information to the other routers based on the determined policy.” It is determined which routers, each of which runs on an FPGA, receive which set of routing information.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bagwell-Ding-Threefoot-Zhao with Carney in order to incorporate wherein the computing environment includes a plurality of FPGAs and the FPGA configuration loader determines the configuration file to load onto each of the plurality of FPGAs. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved security by only providing the needed routing information to each FPGA (Carney: para.0024).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Brannon et al., US 2021/0406398 A1: see para.0690, which sets data transfer rules based on rules for areas such as the EU or an organization.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUI H KIM whose telephone number is (571) 272-8133. The examiner can normally be reached 7:30-5 M-R, M-F alternating.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamal B Divecha, can be reached at (571) 272-5863. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EUI H KIM/
Examiner, Art Unit 2453

/KAMAL B DIVECHA/
Supervisory Patent Examiner, Art Unit 2453

Prosecution Timeline

Dec 14, 2023
Application Filed
Jul 08, 2025
Non-Final Rejection — §103
Sep 15, 2025
Interview Requested
Sep 23, 2025
Examiner Interview Summary
Sep 23, 2025
Applicant Interview (Telephonic)
Oct 14, 2025
Response Filed
Feb 18, 2026
Final Rejection — §103
Mar 20, 2026
Interview Requested
Mar 27, 2026
Examiner Interview Summary
Mar 27, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12549457
CREATING DECENTRALIZED MULTI-PARTY TRACEABILITY OF SLA USING A BLOCKCHAIN
2y 5m to grant · Granted Feb 10, 2026
Patent 12519859
DETERMINING DATA MIGRATION STRATEGY IN HETEROGENEOUS EDGE NETWORKS
2y 5m to grant · Granted Jan 06, 2026
Patent 12506818
METHOD AND SYSTEM FOR TIME SENSITIVE PROCESSING OF TCP SEGMENTS INTO APPLICATION LAYER MESSAGES
2y 5m to grant · Granted Dec 23, 2025
Patent 12483462
Cloud Network Failure Auto-Correlator
2y 5m to grant · Granted Nov 25, 2025
Patent 12470606
SYSTEMS AND METHODS FOR SCHEDULING FEATURE ACTIVATION AND DEACTIVATION FOR COMMUNICATION DEVICES IN A MULTIPLE-DEVICE ACCESS ENVIRONMENT
2y 5m to grant · Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
99%
With Interview (+52.9%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 156 resolved cases by this examiner. Grant probability derived from career allow rate.
