Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,444

SAFETY FUSE FOR MACHINE LEARNING TRUST MANAGEMENT IN INTERNET PROTOCOL NETWORKS

Final Rejection (§101, §103)
Filed: Dec 28, 2023
Examiner: SHAUGHNESSY, AIDAN EDWARD
Art Unit: 2432
Tech Center: 2400 (Computer Networks)
Assignee: Nokia Solutions and Networks Oy
OA Round: 2 (Final)
Grant Probability: 38% (At Risk)
Predicted OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 38% (3 granted / 8 resolved; -20.5% vs TC avg)
Interview Lift: +71.4% (resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline)
Currently Pending: 44
Total Applications: 52 (across all art units)

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 66.0% (+26.0% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Based on career data from 8 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments / Arguments

Regarding the rejection of claims under 35 USC 112(b): Applicant's arguments, filed 11/17/2025, with respect to claims 19-20 have been fully considered and are persuasive. The rejection of claims 19-20 has been withdrawn.

Regarding the §101 rejection: Applicant's arguments, filed 11/17/2025, have been fully considered but they are not persuasive. Applicant argues that the claims are directed to "network security technology for protecting SDN infrastructure from compromised ML systems," "software component management to control which network security software processes network traffic," and "automated security response with real-time failover from compromised to trusted security components," and therefore are not directed to methods of organizing human activity. In response, it is noted that the underlying concept of the claims, i.e., monitoring a manager's performance characteristics, evaluating those characteristics against a threshold, and replacing the manager with another manager upon determining the first manager fails to meet expectations, is fundamentally a management and organizational concept that humans have performed for centuries. The fact that the "managers" in the present claims are software components (i.e., a machine learning trust manager and a deterministic trust manager) does not change the nature of the abstract idea. Furthermore, none of the language referenced by applicant appears in the claim language. Accordingly, the claims are considered to recite an abstract idea falling within the "certain methods of organizing human activity" category.
Applicant argues that the amended claims recite "monitor, in near real-time, performance characteristics and usage characteristics of a machine learning trust manager," and that "it is not practical for a human mind to monitor and evaluate network characteristics at the speed and efficiency necessary to maintain a healthy network." Applicant further argues that "the human mind cannot observe and evaluate network packet flows at the volumes and speeds required by modern SDN environments." In response, it is noted that the performance by a computer of operations that previously were performed manually or mentally, albeit less efficiently, does not convert a known abstract idea into eligible subject matter. Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1277-78 (Fed. Cir. 2012). The claimed core operations of monitoring performance characteristics, evaluating against a threshold, and determining whether to replace a component are operations that humans have performed in management contexts. That a computer performs these operations faster, or "in near real-time," does not transform the abstract nature of the underlying concept. Further, the claim language "in near real-time" does not specify any particular timing constraints or packet-level monitoring requirements that would necessarily require computer implementation. The limitations that applicant points to as being "not practically" performed in the human mind do not appear anywhere in the claim.

Applicant argues that the claims provide "a technical solution to a technological problem" and "an improvement to another technology or technical field, namely at least the technological field of software-defined networking."
Applicant further argues that "causing a network controller to deactivate a machine learning trust manager, and activate a deterministic trust manager in place of the machine learning trust manager, in response to determining that the machine learning trust manager fails to satisfy the performance threshold or displays evidence of misuse, is a significant improvement to the relevant technology." In response, attention is directed to Tenstreet, LLC v. Driverreach, LLC, No. 2020-1101 (Fed. Cir. Oct. 19, 2020) ("Even if the '575 patent provides advantages over manual collection of data, the patent claims no technological improvement beyond the use of a generic computer network."). Here, the claims recite no technological improvement beyond using a generic software-defined network controller to implement the abstract concept of monitoring, evaluating, and replacing an underperforming component. That the operations occur within a network environment does not remove the limitations from being abstract. Applicant has not pointed to a technical problem being solved with a technical solution. Instead, applicant's invention is directed to identifying a slow or untrustworthy decision-maker and replacing it with a better performing one. Furthermore, applicant has not articulated any specific improvement to any technology. Therefore, the §101 rejection is maintained.

Regarding the §103 rejection: Applicant's arguments, filed 11/17/2025, have been fully considered and are not persuasive. Applicant argues that Bhalla fails to teach monitoring "usage characteristics" as distinct from performance characteristics. In response, Bhalla explicitly teaches behavioral monitoring that encompasses usage characteristics. Paragraph [0030] recites collection of "data telemetry 18" including "Operations, Administration, Maintenance, and Provisioning (OAM&P) data, Performance Monitoring (PM) data, alarms."
Paragraph [0028] details extensive operational data including "bandwidth, throughput, latency, jitter, error rate, RX bytes/packets, TX bytes/packets" and "service and traffic layer data"; this operational telemetry inherently reflects system usage patterns. Additionally, paragraph [0043] describes "ticketing or service desk integrations" with tracking of incident patterns, demonstrating monitoring of how the system is operationally utilized. Paragraph [0025] teaches monitoring "normal network operations" to "derive probability of anomalies," which requires analyzing usage patterns to establish baseline behavior.

Applicant argues that Bhalla does not address the "evidence of misuse" prong of the claim's "OR" structure. In response, while the claim uses "OR" language requiring either performance threshold failure OR evidence of misuse (and the performance threshold failure was not argued), Bhalla still teaches "evidence of misuse". Paragraph [0102] of Bhalla recites "Risks associated with pure data-driven and AI-driven systems include: ... 4) the possibility to break the system by injecting malicious input data." Paragraph [0109] of Bhalla further recites that "the safeguard module 202A can maintain the integrity of the input to the AI system... The closed-loop automation system can protect itself from malicious fake-data attacks by using multiple independent data collectors 204, 206 and data sources." Accordingly, Bhalla's safeguard module is expressly designed to detect and respond to malicious activity and misuse scenarios.

Applicant argues that Bhalla does not teach "near real-time" monitoring of machine learning systems. In response, paragraph [0032] explicitly teaches that "response time, i.e., time to compute the probability of an outcome given input data, to be fast for identifying the optimal action to take."
Paragraph [0107] recites that the safeguard module monitors "statistical uncertainties reported by the ML algorithm itself" and can "intercept and either modify or drop action requests from the AI system 20 before they go out to network elements." This real-time interception and modification of AI actions demonstrates the claimed near real-time monitoring capability.

Applicant argues that Ren is "not trust-based, nor concerned with misuse/security" because "the terms 'trust' and 'truth' never appear a single time within the disclosure of Ren." This argument is not persuasive. Patent claim interpretation focuses on functional equivalence, not specific terminology. Ren extensively describes monitoring ML model performance ([0006], [0135]), detecting when models produce incorrect outputs ([0037]), and implementing threshold-based failure detection ([0113]). These functions are substantively identical to "trust management" regardless of the specific terms used. Therefore, the identified claim language is considered to be taught by the combined references, and the rejection is maintained. Further, since Applicant has not presented additional arguments concerning the dependent claims, their rejections are likewise maintained.

DETAILED ACTION

This is a reply to the application filed on 11/17/2025, in which claims 1-20 are pending. Claims 1, 11, and 19 are independent. When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner and their equivalents, as they may most broadly and appropriately apply to any particular anticipated claim amendments.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 is directed to an abstract idea without significantly more. The following limitations recite an abstract idea: A method for evaluating and managing performance of different types of managers (organizing human activity; this is fundamentally a management process for evaluating performance and replacing underperforming managers). Monitor in near real-time performance characteristics and usage characteristics of a manager (mental process; a human supervisor can observe and mentally track performance metrics of an employee or contractor). Evaluate the performance characteristics to determine whether the manager meets a performance threshold or displays evidence of misuse (mental process; a human can mentally compare observed performance against a standard or threshold). In response to determining that the manager fails to satisfy the performance threshold or displays evidence of misuse, deactivate the manager, and activate another manager in its place (organizing human activity; this is essentially firing an underperforming employee and hiring a replacement who follows established procedures).

Additional elements include: software-defined network controller, at least one processor, memory coupled to the at least one processor, the memory storing computer-executable instructions, machine learning trust manager, deterministic trust manager. These additional elements fail to integrate the abstract idea into a practical application because no improvement to a computer or technology is achieved. The claimed invention ends with merely replacing one type of manager with another type of manager. Further, these additional elements are recited at a high level of generality (i.e., software-defined network controller, processor, memory, machine learning trust manager, deterministic trust manager), using computers as a tool to implement the abstract idea. Further, these additional elements are insignificant pre-solution activity.

The additional elements alone, and in combination with the abstract idea, fail to arrive at significantly more than the abstract idea itself. As noted previously, no improvement to a computer or technology is achieved. The claimed invention ends with simply deactivating one component and activating another component. Further, these additional elements are recited at a high level of generality (i.e., software-defined network controller, processor, memory, machine learning trust manager, deterministic trust manager), using computers as a tool to implement the abstract idea. Further, these additional elements are insignificant pre-solution activity. Claims 2-20 are rejected under similar rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla et al. (US 20200259700 A1, referred to as Bhalla), in view of Ren et al. (US 20240147267 A1, referred to as Ren).

In reference to claim 1, A software-defined network controller comprising: at least one processor; memory coupled to the at least one processor, the memory storing computer-executable instructions; and wherein the at least one processor is configured to execute the computer-executable instructions (Bhalla: [0030], [0031], [0107], and [0132]-[0136] Provides for a controller for a network that includes a processor and memory with instructions for execution.)

Monitor in near real-time performance characteristics and usage characteristics of a machine learning manager (Bhalla: [0019], [0105]-[0111] and [0129] Provides for monitoring of an AI/ML system by safeguard modules. Bhalla: [0030], [0028], [0043] and [0025] Provides for analyzing usage patterns to establish baseline behavior. Bhalla: [0032] and [0107] Provides for real-time interception and modification of AI actions.)

Evaluate the performance characteristics and usage characteristics to determine whether the machine learning manager satisfies a performance threshold or displays evidence of misuse (Bhalla: [0107], [0111] and [0120]-[0122] Provides for evaluating performance characteristics of ML algorithms against thresholds to determine if they are trustworthy. Bhalla: [0102] and [0109] Provides for a system designed to detect and respond to malicious activity and misuse scenarios.)
In response to determining that the machine learning trust manager fails to satisfy the performance threshold or displays evidence of misuse, deactivate the machine learning trust manager, and activate a deterministic trust manager in place of the machine learning trust manager (Bhalla: [0111]-[0112] and [0120]-[0125] Provides for deactivating the AI-based system when it fails to meet performance thresholds and switching to a deterministic algorithm instead. Bhalla: [0102] and [0109] Provides for a system designed to detect and respond to malicious activity and misuse scenarios.)

Bhalla teaches safeguard modules which function similarly to a "machine learning trust manager" by evaluating the reliability and trustworthiness of ML algorithm outputs, but doesn't explicitly use the term "trust manager" or explicitly teach the monitoring of the "trust manager" itself. However, Ren teaches: wherein the machine learning manager is a trust manager, and the monitoring of the trust manager itself (Ren: [0038] and [0135]-[0136] Provides for monitoring the status/performance of a machine learning model used in wireless communications.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bhalla, which provides a software-defined network controller that monitors machine learning systems and switches to deterministic algorithms when performance thresholds are not met, with the teachings of Ren, which introduces specific monitoring of machine learning trust managers and their performance characteristics. One of ordinary skill in the art would recognize the ability to incorporate Ren's trust manager monitoring approach into Bhalla's system to create more specialized and targeted machine learning oversight. One of ordinary skill in the art would be motivated to make this modification in order to improve network security by specifically monitoring trust-related machine learning functions.
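The safety-fuse behavior at issue in claim 1 (monitor an ML trust manager, evaluate against a performance threshold or a misuse signal, and fail over to a deterministic trust manager) can be sketched in a few lines. This is an illustrative sketch only; the class and variable names (`TrustManagerSupervisor`, `ml_manager`, etc.) are hypothetical and do not come from the claims, Bhalla, or Ren.

```python
# Illustrative sketch (hypothetical names): the claimed failover from an ML
# trust manager to a deterministic trust manager on threshold failure OR misuse.

class TrustManagerSupervisor:
    """Monitors a primary (ML) trust manager and swaps in a deterministic fallback."""

    def __init__(self, ml_manager, deterministic_manager, performance_threshold=0.9):
        self.active = ml_manager
        self.fallback = deterministic_manager
        self.performance_threshold = performance_threshold

    def evaluate(self, performance_score, misuse_detected):
        """Deactivate the active manager on threshold failure OR evidence of misuse."""
        if performance_score < self.performance_threshold or misuse_detected:
            self.active, self.fallback = self.fallback, self.active
            return True  # failover occurred
        return False


ml = "ml_trust_manager"
det = "deterministic_trust_manager"
supervisor = TrustManagerSupervisor(ml, det, performance_threshold=0.9)

# Healthy ML manager: no failover.
assert supervisor.evaluate(performance_score=0.95, misuse_detected=False) is False
assert supervisor.active == ml

# Evidence of misuse triggers failover even when performance is acceptable,
# mirroring the claim's "OR" structure.
assert supervisor.evaluate(performance_score=0.95, misuse_detected=True) is True
assert supervisor.active == det
```

The "OR" structure of the claim is what the `or` condition models: either prong alone is sufficient to trigger the swap.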
In reference to claim 2, The software-defined network controller of claim 1, wherein the at least one processor is further configured to execute the computer-executable instructions to cause the software-defined network controller to: evaluate the performance characteristics and usage characteristics by determining whether decisions made by the machine learning trust manager are trustworthy (Bhalla: [0103], [0107]-[0112] and [0121] Provides for a system that evaluates whether decisions made by the ML system are trustworthy. The safeguard module explicitly examines the "statistical uncertainties" to determine if insights from the ML algorithm can be trusted.)

In reference to claim 3, The software-defined network controller of claim 2, wherein the machine learning trust manager fails to satisfy the performance threshold if a threshold number of decisions made by the machine learning trust manager are determined to be untrustworthy (Bhalla: [0103]-[0107], [0110]-[0112] and [0126] Provides for a system that evaluates ML trustworthiness against performance thresholds.)

In reference to claim 9, The software-defined network controller of claim 1, wherein the at least one processor is further configured to execute the computer-executable instructions to cause the software-defined network controller to: evaluate performance and usage of a plurality of trust managers, including at least one machine learning trust manager and at least one deterministic trust manager; and control which of the plurality of trust managers is activated or deactivated based on the performance and usage of the plurality of trust managers (Bhalla: [0103]-[0104], [0110]-[0112] and [0125]-[0129] Provides for a system that evaluates the performance of different types of trust managers and selects between them based on their reliability characteristics.)
In reference to claim 10, The software-defined network controller of claim 1, wherein the at least one processor is further configured to execute the computer-executable instructions to cause the software-defined network controller to: perform an initial evaluation of the machine learning trust manager prior to initially activating the machine learning trust manager (Ren: [0092]-[0097] Provides for a comprehensive evaluation process that occurs before model activation, including validation and testing phases.)

In reference to claim 11, A system comprising: a software-defined network controller including at least one processor, memory coupled to the at least one processor, the memory storing computer-executable instructions, and wherein the at least one processor is configured to execute the computer-executable instructions (Bhalla: [0030], [0031], [0107], and [0132]-[0136] Provides for a controller for a network that includes a processor and memory with instructions for execution.)

Selectively activate or deactivate a machine learning manager based on a result of a ranking-and-decision policy, wherein the result of the ranking-and-decision policy indicates, in near real-time, actual or suspected misuse of the machine learning manager (Bhalla: [0107], [0111]-[0112] and [0124]-[0125] Provides for a safeguard module's evaluation process which is functionally equivalent to a "ranking-and-decision policy".)

Bhalla teaches safeguard modules which function similarly to a "machine learning trust manager" by evaluating the reliability and trustworthiness of ML algorithm outputs, but doesn't explicitly use the term "trust manager" or explicitly teach the monitoring of the "trust manager" itself. However, Ren teaches: wherein the machine learning manager is a trust manager, and the monitoring of the trust manager itself (Ren: [0038] and [0135]-[0136] Provides for monitoring the status/performance of a machine learning model used in wireless communications.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bhalla, which provides a software-defined network controller that monitors machine learning systems and switches to deterministic algorithms when performance thresholds are not met, with the teachings of Ren, which introduces specific monitoring of machine learning trust managers and their performance characteristics. One of ordinary skill in the art would recognize the ability to incorporate Ren's trust manager monitoring approach into Bhalla's system to create more specialized and targeted machine learning oversight. One of ordinary skill in the art would be motivated to make this modification in order to improve network security by specifically monitoring trust-related machine learning functions.

In reference to claim 12, The system of claim 11, wherein: the ranking-and-decision policy is based on evaluation criteria including one or more of a detection error rate, a runtime, fairness between clients of the system, or compliance with service level agreements (Ren: [0088], [0109]-[0114] and [0127] Provides for technical evaluation criteria.)

In reference to claim 13, The system of claim 12, wherein: the evaluation criteria are weighted and normalized prior to calculating the result of the ranking-and-decision policy (Ren: [0109]-[0114] and [0127] Provides for priority-based weighting (some models have "higher priority" than others) and confidence probability calculations.)
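The "ranking-and-decision policy" of claims 12-13 combines several evaluation criteria (detection error rate, runtime, fairness between clients, SLA compliance), weighted and normalized before a single result is computed. A minimal sketch of such a weighted, normalized scoring step follows; the function name, the specific weight values, and the 0.75 misuse threshold are all hypothetical illustrations, not values from the application or the cited references.

```python
# Illustrative sketch (hypothetical weights and threshold) of a weighted,
# normalized "ranking-and-decision policy" over the claim 12 evaluation criteria.

def ranking_score(criteria, weights):
    """Combine raw criteria into one score after normalizing the weights.

    criteria: dict of criterion name -> value in [0, 1], where higher is better
    weights:  dict of criterion name -> non-negative importance weight
    """
    total_weight = sum(weights.values())
    if total_weight == 0:
        raise ValueError("at least one weight must be positive")
    return sum(criteria[name] * (weights[name] / total_weight) for name in weights)


# Criteria from claim 12, rescaled so higher is better (hypothetical values).
criteria = {
    "detection_accuracy": 0.98,   # i.e., 1 - detection error rate
    "runtime": 0.80,              # normalized speed score
    "fairness": 0.90,             # fairness between clients
    "sla_compliance": 1.00,       # compliance with service level agreements
}
weights = {"detection_accuracy": 4, "runtime": 1, "fairness": 2, "sla_compliance": 3}

score = ranking_score(criteria, weights)
# Per claim 17, the result can be compared to a threshold to flag misuse.
suspected_misuse = score < 0.75
```

Normalizing the weights (dividing each by their sum) is what keeps the result in [0, 1] regardless of how the raw importance values are chosen, which is one plausible reading of "weighted and normalized prior to calculating the result."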
In reference to claim 14, The system of claim 11, wherein the at least one processor is configured to execute the computer-executable instructions to cause the software-defined network controller to: implement a safeguard manager including a trust controller agent, wherein the trust controller agent defines a first list of evaluation criteria, updates a second list of existing trust manager alternatives, and defines an order of importance of each of the evaluation criteria (Ren: [0095]-[0102] Provides for implementing specialized management components (model manager 830, CU-CP) that configure evaluation criteria (model status detection methods, content to report, resources, timers), maintain lists of model alternatives, and define priority ordering.)

In reference to claim 15, The system of claim 11, further comprising: at least one machine learning trust manager coupled to the software-defined network controller; and at least one deterministic trust manager coupled to the software-defined network controller (Bhalla: [0105]-[0112] Provides for ML-based and deterministic systems coupled to the network controller.)

In reference to claim 16, The system of claim 14, wherein: the order of importance of particular evaluation criteria are determined according to at least one of a user preference or application performance requirements (Ren: [0109]-[0118] and [0127] Provides for priority determination based on application performance requirements.)

In reference to claim 17, The system of claim 11, wherein the at least one processor is configured to execute the computer-executable instructions to cause the software-defined network controller to: identify actual or suspected misuse by comparing the result of the ranking-and-decision policy to a threshold value (Bhalla: [0107]-[0112] Provides for threshold-based evaluation to identify problematic ML behavior.)
In reference to claim 18, The system of claim 11, wherein the at least one processor is configured to execute the computer-executable instructions to cause the software-defined network controller to: periodically apply the ranking-and-decision policy to each of a plurality of machine learning trust managers; and create an ordered list of the plurality of machine learning trust managers based on results of applying the ranking-and-decision policy to each of the plurality of machine learning trust managers (Ren: [0100]-[0109] and [0127] Provides for periodic application of evaluation procedures to multiple ML models through timer-based and periodic reporting configurations.)

In reference to claim 19, A device comprising: at least one processor; memory coupled to the at least one processor, the memory storing computer-executable instructions, wherein the at least one processor is configured to execute the computer-executable instructions (Bhalla: [0030], [0031], [0107], and [0132]-[0136] Provides for a controller for a network that includes a processor and memory with instructions for execution.)

Identify in near real-time actual or suspected misuse of one or more machine learning managers, and in response to identifying the actual or suspected misuse of the one or more machine learning managers, deactivating the one or more machine learning managers, and activating a deterministic manager in place of the one or more machine learning managers (Bhalla: [0107], [0111]-[0112] and [0124]-[0125] Provides for a safeguard module's evaluation process which is functionally equivalent to a "ranking-and-decision policy". Bhalla: [0111]-[0112] and [0120]-[0125] Provides for deactivating the AI-based system when it fails to meet performance thresholds and switching to a deterministic algorithm instead.)
Bhalla teaches safeguard modules which function similarly to a "machine learning trust manager" by evaluating the reliability and trustworthiness of ML algorithm outputs, but doesn't explicitly use the term "trust manager" or explicitly teach the monitoring of the "trust manager" itself. However, Ren teaches: wherein the machine learning manager is a trust manager, and the monitoring of the trust manager itself (Ren: [0038] and [0135]-[0136] Provides for monitoring the status/performance of a machine learning model used in wireless communications.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bhalla, which provides a software-defined network controller that monitors machine learning systems and switches to deterministic algorithms when performance thresholds are not met, with the teachings of Ren, which introduces specific monitoring of machine learning trust managers and their performance characteristics. One of ordinary skill in the art would recognize the ability to incorporate Ren's trust manager monitoring approach into Bhalla's system to create more specialized and targeted machine learning oversight.
One of ordinary skill in the art would be motivated to make this modification in order to improve network security by specifically monitoring trust-related machine learning functions.

In reference to claim 20, The device of claim 19, wherein: the at least one processor is further configured to execute the computer-executable instructions to cause the device to identify the actual or suspected misuse of the one or more machine learning trust managers based on one or more of an external alarm received from an external source, or a ranking-and-decision policy applied to the one or more machine learning trust managers, wherein the ranking-and-decision policy is based on evaluation criteria including one or more of a detection error rate, a runtime, fairness between clients of a communication network, or compliance with service level agreements (Ren: [0088], [0109]-[0114] and [0127] Provides for technical evaluation criteria.)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 4-8 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla et al. (US 20200259700 A1, referred to as Bhalla), in view of Ren et al. (US 20240147267 A1, referred to as Ren), and in further view of Bernat et al. (US 20220114251 A1, referred to as Bernat).

In reference to claim 4, The software-defined network controller of claim 1, wherein the machine learning trust manager fails to satisfy the performance threshold if the machine learning trust manager is determined to be incapable of rendering latency-sensitive trust decisions within a threshold time (Bernat: [0035], [0153], [0159] and [0190] Provides for latency-sensitive decisions and time thresholds for SLOs. It describes systems that must make decisions within strict time constraints.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bhalla in view of Ren, which together provide a software-defined network controller with machine learning trust manager monitoring and performance evaluation capabilities, with the teachings of Bernat, which introduces latency-sensitive decision making and time threshold requirements for system performance. One of ordinary skill in the art would recognize the ability to incorporate Bernat's time-critical performance criteria into the combined trust management system to ensure real-time responsiveness in network security decisions. One of ordinary skill in the art would be motivated to make this modification in order to maintain network performance by ensuring trust decisions are made within acceptable time limits.
In reference to claim 5, The software-defined network controller of claim 1, wherein the machine learning trust manager fails to satisfy the performance threshold if the machine learning trust manager fails to achieve a threshold level of fairness related to bandwidth allocation among network clients (Bernat: [0121], [0139] and [0203]-[0206] Provides for resource allocation mechanisms including "equal share contracts" and arbitration systems that evaluate resource distribution.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bhalla in view of Ren, which together provide a software-defined network controller with machine learning trust manager monitoring and performance evaluation capabilities, with the teachings of Bernat, which introduces fairness criteria for resource allocation and bandwidth distribution among network clients. One of ordinary skill in the art would recognize the ability to incorporate Bernat's fairness-based performance metrics into the combined trust management system to ensure equitable network resource distribution. One of ordinary skill in the art would be motivated to make this modification in order to prevent discriminatory or biased machine learning decisions that could unfairly allocate bandwidth among users.

In reference to claim 6, The software-defined network controller of claim 1, wherein the machine learning trust manager fails to satisfy the performance threshold if the machine learning trust manager allows a threshold number of clients to disregard subscriber service level agreements (Bernat: [0033], [0034], [0185] and [0196] Provides for SLA violations as a metric for evaluating system components (operators). It describes monitoring for SLA compliance and using SLA violations as part of reputation scoring.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bhalla in view of Ren, which together provide a software-defined network controller with machine learning trust manager monitoring and performance evaluation capabilities, with the teachings of Bernat, which introduces service level agreement compliance monitoring and violation tracking as performance metrics. One of ordinary skill in the art would recognize the ability to incorporate Bernat's SLA compliance criteria into the combined trust management system to ensure adherence to contractual service obligations, and would be motivated to make this modification in order to maintain service quality by preventing machine learning systems from making decisions that lead to SLA violations.

In reference to claim 7: The software-defined network controller of claim 1, wherein the at least one processor is further configured to execute the computer-executable instructions (Bhalla: [0022] and [0116] provide for an SDN controller with a processor executing instructions, as in the previous claims) to, in response to receiving the alarm, deactivate the machine learning trust manager and activate the deterministic trust manager in place of the machine learning trust manager (Bhalla: [0124]-[0125] provide for deactivating ML systems and switching to deterministic algorithms in response to detected issues).

Bhalla in view of Ren does not explicitly disclose receiving an external alarm from an external source indicating that misuse of the machine learning trust manager has been detected. However, Bernat teaches receiving an external alarm from an external source indicating that misuse of the machine learning trust manager has been detected ([0147], [0154], [0187], [0224] and [0233] provide for alert and notification systems for detecting when components are operating outside acceptable bounds).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bhalla in view of Ren, which together provide a software-defined network controller with machine learning trust manager monitoring and automatic deactivation capabilities, with the teachings of Bernat, which introduces external alarm systems for detecting component misuse and out-of-bounds operation. One of ordinary skill in the art would recognize the ability to incorporate Bernat's external monitoring and alert capabilities into the combined trust management system to enable detection of trust manager misuse from independent sources, and would be motivated to make this modification in order to provide comprehensive security monitoring that includes external oversight of machine learning trust decisions.

In reference to claim 8: The software-defined network controller of claim 7, wherein the at least one processor is further configured to execute the computer-executable instructions to cause the software-defined network controller to implement a trust controller agent and a selector agent (Ren: [0055]-[0056] and [0095] provide for multiple distinct functional components (CU-CP, model manager, DU) that work together to manage ML model operations), wherein the trust controller agent transmits an internal alarm to the selector agent in response to determining that the machine learning trust manager fails to satisfy the performance threshold or displays evidence of misuse (Ren: [0112] provides for internal communication between system components when performance thresholds are not met), and the selector agent receives at least one of the internal alarm or the external alarm (Ren: [0112]-[0115] provide for receiving both internal failure indications and external network-triggered reports).
In response to receiving the at least one of the internal alarm or the external alarm, the selector agent deactivates the machine learning trust manager and activates the deterministic trust manager in place of the machine learning trust manager (Ren: [0112]-[0120] provide for receiving failure alarms and then triggering deactivation of ML models and activation of fallback systems).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Applicant's amendment necessitated the new ground(s) of rejection presented in this office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN EDWARD SHAUGHNESSY, whose telephone number is (703) 756-1423. The examiner can normally be reached Monday-Friday from 7:30am to 5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey Nickerson, can be reached at (469) 295-9235. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/A.E.S./
Examiner, Art Unit 2432

/Jeffrey Nickerson/
Supervisory Patent Examiner, Art Unit 2432
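The claims 7-8 failover flow as the rejection maps it can be sketched purely for illustration: a trust controller agent transmits an internal alarm when the machine learning trust manager misses its performance threshold, and a selector agent responds to either an internal or an external alarm by deactivating the ML trust manager and activating the deterministic trust manager in its place. All class names, the 0.8 threshold, and the string stand-ins for the two managers are hypothetical, not taken from the application or the cited references:

```python
class SelectorAgent:
    """Holds the currently active trust manager and performs failover."""

    def __init__(self, ml_manager, deterministic_manager):
        self.ml_manager = ml_manager
        self.deterministic_manager = deterministic_manager
        self.active = ml_manager  # ML trust manager is active by default

    def on_alarm(self, source):
        # source is "internal" (from the trust controller agent) or
        # "external" (from an outside monitor); either triggers failover.
        if self.active is self.ml_manager:
            self.active = self.deterministic_manager
        return self.active


class TrustControllerAgent:
    """Monitors the ML trust manager and raises internal alarms."""

    def __init__(self, selector, threshold):
        self.selector = selector
        self.threshold = threshold

    def report_metric(self, score):
        # Transmit an internal alarm when the performance threshold is missed.
        if score < self.threshold:
            self.selector.on_alarm("internal")


selector = SelectorAgent(ml_manager="ml-trust-manager",
                         deterministic_manager="deterministic-trust-manager")
controller = TrustControllerAgent(selector, threshold=0.8)
controller.report_metric(0.6)  # below threshold: internal alarm, failover
```

In this sketch the selector agent is the single point that swaps managers, so an external alarm (`selector.on_alarm("external")`) takes the same path as an internal one, mirroring the "at least one of the internal alarm or the external alarm" language of claim 8.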

Prosecution Timeline

Dec 28, 2023: Application Filed
Aug 13, 2025: Non-Final Rejection — §101, §103
Nov 17, 2025: Response Filed
Feb 04, 2026: Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574412: METHOD AND SYSTEM FOR PROCESSING AUTHENTICATION REQUESTS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12339956: ENDPOINT ISOLATION AND INCIDENT RESPONSE FROM A SECURE ENCLAVE (granted Jun 24, 2025; 2y 5m to grant)
Patent 12225029: AUTOMATIC IDENTIFICATION OF ALGORITHMICALLY GENERATED DOMAIN FAMILIES (granted Feb 11, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
With Interview: 99% (+71.4%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate

Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
