Prosecution Insights
Last updated: April 19, 2026
Application No. 17/532,452

RECOMMENDING CONFIGURATION CHANGES IN SOFTWARE-DEFINED NETWORKS USING MACHINE LEARNING

Status: Non-Final Office Action (§103)
Filed: Nov 22, 2021
Examiner: DU, ZONGHUA A
Art Unit: 2444
Tech Center: 2400 (Computer Networks)
Assignee: Cisco Technology Inc.
OA Round: 7 (Non-Final)

Grant Probability: 60% (Moderate)
Projected OA Rounds: 7-8
Projected Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% (47 granted / 78 resolved; +2.3% vs Tech Center average)
Interview Lift: +45.9% higher allowance rate among resolved cases with an interview
Typical Timeline: 2y 8m average prosecution; 22 applications currently pending
Career History: 100 total applications across all art units

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 60.9% (+20.9% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)

Comparisons use estimated Tech Center averages; based on career data from 78 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the communication filed on 01/21/2026. Claims 1-5, 7, 9-15, 17 and 19-24 are pending in this application.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/21/2026 has been entered.

Response to Amendment

The claim rejections under 35 U.S.C. 112(b) to claims 1-5, 7, 9-15, 17 and 19-20 are now withdrawn in view of the claim amendments. Applicant’s arguments with respect to claims 1-5, 7, 9-15, 17 and 19-24 have been considered but are moot based on the new grounds of rejection necessitated by Applicant’s amendments. Specifically, the arguments present that the combination of the cited arts fails to provide for the amended language, where the rejection below now relies on Wohlert to teach this subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-5, 7, 9-11, 14-15, 17, 19-20 and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Smith (US 20190036776 A1, published 01/31/2019; hereinafter Smith), in view of Vasseur et al.
(US 20190306023 A1, published 10/03/2019; hereinafter Vasseur), in view of Tan et al. (US 20190205749 A1, published 07/04/2019; hereinafter Tan), and in further view of Wohlert et al. (US 20170195171 A1, published 07/06/2017; hereinafter Wohlert).

For Claim 1, Smith teaches a method comprising (Smith ¶ 0023 “… The method may further include receiving one or more proposed changes to the SDN via an interface … The method may further include generating a change index based on a comparison of the first set of values with the second set of values …”): associating, by a device (Smith ¶ 0047 a control device), application performance of an online application (Smith ¶ 0047 application’s traffic flow or accessibility) with network configuration changes (Smith ¶ 0047 configuration changes to the SDN) implemented across one or more software-defined networks (Smith FIG. 1; ¶ 0047 “… the control device 120 may compare historical data of the current state of the SDN with the potential state of the SDN. For example, in these and other embodiments, the proposed changes may impact applications which are used by the edge network devices 110 of the SDN. For example, in some embodiments, the proposed changes may affect traffic flow related to a social media application, which may be considered a low priority application. Alternatively or additionally, in some embodiments, the proposed changes may affect traffic flow or accessibility of a productivity application, which may be considered a high priority application. The control device 120 may determine which types of applications are affected by the proposed changes to the SDN …”);

Smith does not explicitly teach, but Vasseur teaches … training, by the device and based on differences in configuration of different portions of the one or more software-defined networks (Vasseur ¶ 0057), … a second model (Vasseur, one of the machine learning models in the machine learning-based analyzer as exemplified in FIG. 5) … that predicts effects of the possible configuration changes on the application performance for any given portion of the one or more software-defined networks (Vasseur FIG. 3, FIG. 5; ¶ 0048 “… Machine learning-based analyzer 312 may include any number of machine learning models to perform the techniques herein, such as for cognitive analytics, predictive analysis, and/or trending analytics …”; ¶ 0057 “… Configuration changes to networking devices, particularly those involving software updates, are prone to introducing unexpected device behaviors into the network and sometimes even leading to catastrophic device or network failure. However, identifying such behavioral changes and their impact on the network are very challenging. First, the configuration change may react differently at different types of devices … Further, the traffic conditions themselves may dramatically affect the overall behavior of a network, meaning that identical networking device configurations in different networks can still result in different network behaviors …”; ¶ 0061 “… a network assurance service that monitors one or more networks receives data indicative of networking device configuration changes in the one or more networks. The service also receives one or more performance indicators for the one or more networks. The service trains a machine learning model based on the received data indicative of the networking device configuration changes and on the received one or more performance indicators for the one or more networks. The service predicts, using the machine learning model, a change in the one or more performance indicators that would result from a particular networking device configuration change …”; also see Vasseur FIG. 7, ¶ 0119 and ¶ 0120); …; …;

generating, … by the device …, a recommended configuration change (Vasseur ¶ 0073 a proposed configuration change) for a particular portion of the one or more software-defined networks, wherein the recommended configuration change … to improve performance of the particular portion of the one or more software-defined networks (Vasseur teaches providing the proposed configuration change based on the evaluation of various state changes associating with various configuration changes from a plurality of networks; FIG. 5; ¶ 0073 “… Architecture 500 may also include CIP 508 (i.e. change impact predictor) that is configured to predict the impact of a configuration change on one or more of the performance indicators. For example, CIP 508 may offer an interactive application program interface (API) via output and visualization interface 318 that allows the user of the UI to evaluate the impact of a proposed configuration change on a given performance indicator. To make such predictions, CIP 508 may leverage the models or distributions of CDE 506 (i.e. change detector engine) to simulate a change from state S to the state S' that would result from enacting the proposed configuration change. From such a simulation, CIP 508 may determine whether the proposed change would be detrimental to the performance indicator(s). Note that CIP 508 may require, in some cases, data from a plurality of monitored networks, so that more information can be collected by the service about the different possible states …”; ¶ 0074 “… CIP 508 may evaluate various ‘popular’ changes S->S', S->S", etc. in order to suggest improvements to the customer. For instance, it may propose an upgrade to a new AP (i.e. access point) release to a given customer by letting him know that they could have 20% less throughput issues because they are running an older release …”; ¶ 0096 “… Note that degradation analyzer 510 can also be used to provide feedback to change impact predictor (CIP) 508, so as to improve the model(s) of CIP 508. For example, by assessing the performance impact of a particular software release on one network, a predictive model of CIP 508 can be updated to better predict the impact of that release on another network …”; also see Vasseur FIG. 7, ¶ 0121); and

causing, by the device, the recommended configuration change to be implemented in the particular portion of the one or more software-defined networks (Vasseur FIG. 5; ¶ 0075 “… Architecture 500 may also include a degradation analyzer 510 which is configured to determine whether there was any performance degradation of the networking device (or in the network) after the configuration changes were applied …”).

Smith and Vasseur are analogous art because they are both related to analyzing the effect of a network configuration change. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the machine learning based analysis techniques of Vasseur with the system of Smith to cause the “networking device configuration change to be made in the network based on the predicted performance indicators” (Vasseur ¶ 0011).

Smith-Vasseur does not explicitly teach, but Tan teaches jointly training a first model comprising a generative network that generates possible configuration changes that includes routing or policy changes (Tan teaches that a network generative model is trained to generate fabricated network packet attributes corresponding to various network states (e.g.
for various network configurations), and the various fabricated network packet attributes are considered to correspond to various network configurations based on the network devices states/configuration changes, the different network configurations are evaluated to adjust the quality of service settings for better optimizing a network environment) and a second model comprising an evaluation network that predicts effects of the possible configuration changes on the application performance (Tan also teaches that various application models (i.e. classical neural network models) are trained to use the fabricated network packet attributes to generate predictive experience metrics to measure the application performance, FIGS 1-4, ¶ 0014 “… With reference to FIG. 2, illustrated is neural network processing system 100 that is configured to predict application performance of a client device application program in order to optimize WLAN performance. As illustrated in FIG. 2, the system 100 may include a network generative model 110 and one or more application models 120(1)-120(L) …”; ¶ 0015 “… The system 100 may be configured to observe or collect various network states (e.g., real attributes of actual traffic or network packets) of a network device (e.g., routers 30(1)-30(N), APs 40(1)-40(K)) as a training dataset 130 for use by the network generative model 110 and/or the application models 120(1)-120(L) … After observing and collecting real attributes of actual network packets for a period of time, the training dataset 130 is utilized by the network generative model 110 to perform machine learning to generate one or more fabricated network packet attributes 140 for a set of artificial traffic that mimics attributes of actual traffic processed by the observed network device …”; ¶ 0016 “… As further illustrated in FIG. 2, the network generative model 110 may send the fabricated network packet attributes 140 to one or more of the application models 120(1)-120(L). Each application model 120(1)-120(L), as further detailed below, may be a classical neural network model. Each application model 120(1)-120(L) may be modeled after a specific application program running on a client device 50(1)-50(M) that is capable of connecting to an AP. With each application model 120(1)-120(L) being modeled on a specific application program, each application model 120(1)-120(L) is configured to generate a predictive experience metric 150(1)-150(L) that is unique for its respective application program … The generated predictive experience metrics 150(1)-150(L) represent a calculated metric of a predicted performance of a respective application program when given a set of network packet attribute inputs …”; ¶ 0032 “… the system 100 may be trained to generate predictive experience metrics 150 for applications, where the predictive experience metrics 150 are conditioned on different network configurations (e.g., treating the traffic flow as prioritized classes). Consequently, this information can be used to guide the server 60 or APs 40(1)-40(K) to dynamically and automatically adjust the quality of service (QoS) configurations to best maximize the expected performance of all network traffic flows sharing the same AP 40(1)-40(K) …”);

generating, by the device (Tan, the server as exemplified in FIG. 1) and by using the first model and the second model, a recommended configuration change, wherein the recommended configuration change is generated by the first model and comprises a change to a routing policy, further wherein the recommended configuration change is predicted by the second model to improve performance of the particular portion of the one or more software-defined networks (Tan, FIGS 1-4, ¶ 0028 “… Once the application model 120 generates the predictive experience metric distribution 150, the system 100 may alter one or more configurations of the network environment 10 to better optimize the network environment 10. In one embodiment, where the client devices 50(1)-50(M) are mobile wireless devices having multiple modes of connectivity, such as, but not limited to, WiFi® and broadband cellular network connectivity (4G, 5G, etc.), the system 100 may be configured to recommend to each one of the client devices 50(1)-50(M) connected to an AP 40(1)-40(K) the optimal type of network connection when operating an associated application program …”; ¶ 0032 “… the system 100 may be trained to generate predictive experience metrics 150 for applications, where the predictive experience metrics 150 are conditioned on different network configurations (e.g., treating the traffic flow as prioritized classes). Consequently, this information can be used to guide the server 60 or APs 40(1)-40(K) to dynamically and automatically adjust the quality of service (QoS) configurations to best maximize the expected performance of all network traffic flows sharing the same AP 40(1)-40(K) …”).

Tan and Smith-Vasseur are analogous art because they are both related to analyzing the effect of a network configuration change. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the machine learning training techniques of Tan with the system of Smith-Vasseur to facilitate the accurate characterization of application performance over various network configurations (Tan ¶ 0002).

Smith-Vasseur-Tan does not explicitly teach, but Wohlert teaches identifying, by the device (Wohlert exemplifies a configuration selector 105 in FIG. 1) and by using the first model (incorporating Tan’s network generative model), a first subgraph (Wohlert teaches a current network configuration represented by a graph form of the topology information) of the one or more software-defined networks that is similar to a second subgraph (Wohlert teaches a proposed network configuration representing an incremental change to the current network configuration) of the one or more software-defined networks (Examiner notes that the instant claim does not provide details concerning the limitation “subgraph,” such as the difference between a graph and a subgraph, or the scope that the subgraph represents, etc.; FIG. 1, FIG. 7; ¶ 0017 “… The present disclosure broadly describes a method, a computer-readable storage device, and an apparatus for optimizing a software defined network configuration using a policy-based network performance metric …”; ¶ 0020 “… the configuration selector 105 includes an example graph database 125 to store network topology information representing the arrangement(s) of the network components 110 in the example network 100 … the nodes in the network components 110 can be represented in the graph database 125 with graph nodes and associated properties, the links in the network components 110 can be represented in the graph database 125 with graph edges and associated properties, and the interconnections of the nodes and links of the network components 110 can be represented in the graph database 125 using pointers between the appropriate graph nodes and graph edges …”; ¶ 0022 “… the OCDE 140 (i.e. optimal composition determination engine) utilizes the topology information stored in the example graph database 125 and policies specifying rules to be obeyed when evaluating different proposed configurations …”; ¶ 0067 “… proceeds to block 710, at which the example OCDE 140 of the configuration selector 105 identifies a proposed network configuration. In one example, the proposed network configuration is identified using the configuration identifier 230 of the example OCDE 140 and represents an incremental change to the current network configuration …”);

determining, based on comparing the first subgraph to the second subgraph, difference data (Wohlert teaches determining the difference of network relative performance (i.e. NRP) parameters between the graph represented current network configuration and the graph represented proposed network configuration; FIG. 1, FIG. 2, FIG. 7; ¶ 0024 “… the OCDE 140 may determine, for a given network configuration, a performance parameter … for a given one of the network components 110 by processing the performance measurements stored in the graph database 125 for the given one of the network components 110 based on a weighting profile stored in the policy storage 145 for that service …”; ¶ 0028 “… for a given network component (e.g., node, link, etc.), the CRP determiner 210 determines a respective CRP (i.e. component relative performance) parameter for each service for which traffic (e.g., packets, flows, etc.) may be routed via the network component …”; ¶ 0072 “… At block 750, the OCDE 140 determines (e.g., according to Equation 4, as described above) network relative performance parameters for the proposed network configuration …”);

generating, based on the difference data between the first subgraph and the second subgraph, a recommended configuration change (Wohlert teaches selecting the optimal configuration based on the difference of the NRP parameters; FIG. 1, FIG. 7; ¶ 0024 “… Finally, the example OCDE 140 may select a network configuration (e.g., a current network configuration or a proposed network configuration) based on comparing the performance parameters (e.g., the network relative performance parameters) determined for different network configurations …”; ¶ 0074 “… At block 755, the example OCDE 140 selects, based on the NRP parameters determined at block 750, the optimal configuration for the network 100. In one example, the optimal configuration is whichever of the current configuration and the proposed configuration has the higher-value NRP parameters …”).

Wohlert and Smith-Vasseur-Tan are analogous art because they are both related to analyzing the effect of a network configuration change. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the SDN configuration optimization techniques of Wohlert with the system of Smith-Vasseur-Tan to facilitate “identify[ing] potential incremental changes to its configuration and estimating the effects of the potential incremental changes on overall network performance” (Wohlert ¶ 0017).

For Claim 4, Smith-Vasseur-Tan-Wohlert teaches the method as in claim 1, wherein the one or more software-defined networks comprise at least two networks operated by different entities (Vasseur discloses networks from different Service Providers; FIG. 1; ¶ 0015; ¶ 0017 “… a router (or a set of routers) may be connected to a private network (e.g., dedicated leased lines, an optical network, etc.) or a virtual private network (VPN), such as an MPLS VPN thanks to a carrier network, via one or more links exhibiting very different network and service level agreement characteristics … Site Type B: a site connected to the network using two MPLS VPN links (e.g., from different Service Providers) …”). See motivation to combine for claim 1.
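The Wohlert portions of the mapping above reduce configuration selection to a graph comparison: each configuration is a graph of components with per-service performance measurements, a weighted component relative performance (CRP) value is computed per component, the values are aggregated into a network relative performance (NRP) parameter per configuration, and the higher-scoring configuration is kept. The following minimal sketch illustrates that idea only; the component names, service weights, and scores are illustrative toys, not data or code from Wohlert:

```python
# Each configuration is modeled as a graph: component name -> per-service
# performance measurements. The weighting profile plays the role of Wohlert's
# policy. All names and numbers below are illustrative assumptions.

def component_relative_performance(measurements, weights):
    """Weighted blend of one component's per-service scores (the CRP idea)."""
    return sum(weights[service] * value for service, value in measurements.items())

def network_relative_performance(graph, weights):
    """Aggregate CRP over all components to score a configuration (the NRP idea)."""
    return sum(component_relative_performance(m, weights) for m in graph.values())

def select_configuration(current, proposed, weights):
    """Keep whichever configuration has the higher NRP parameter."""
    nrp_current = network_relative_performance(current, weights)
    nrp_proposed = network_relative_performance(proposed, weights)
    if nrp_proposed > nrp_current:
        return "proposed", nrp_proposed
    return "current", nrp_current

weights = {"voice": 0.7, "bulk": 0.3}   # policy: prioritize voice traffic
current = {                              # current configuration graph
    "node-a": {"voice": 0.60, "bulk": 0.90},
    "link-ab": {"voice": 0.50, "bulk": 0.80},
}
proposed = {                             # incremental change applied to link-ab
    "node-a": {"voice": 0.60, "bulk": 0.90},
    "link-ab": {"voice": 0.85, "bulk": 0.70},
}

choice, score = select_configuration(current, proposed, weights)
print(choice, round(score, 3))
```

Here the proposed configuration wins because its weighted voice improvement on the changed link outweighs the small bulk regression, which mirrors Wohlert's block 755 selection step at a toy scale.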
For Claim 5, Smith-Vasseur-Tan-Wohlert teaches the method as in claim 1, wherein causing the recommended configuration change to be implemented in the particular portion of the one or more software-defined networks comprises: providing, by the device, the recommended configuration change for display (Vasseur FIG. 5; ¶ 0073 “… CIP 508 may offer an interactive application program interface (API) via output and visualization interface 318 that allows the user of the UI to evaluate the impact of a proposed configuration change on a given performance indicator …”). See motivation to combine for claim 1.

For Claim 7, Smith-Vasseur-Tan-Wohlert teaches the method as in claim 1, wherein determining that the first subgraph is similar to the second subgraph is based on one or more of: a geographic location, a device type, a software version, or a traffic pattern for the online application (Wohlert teaches that the graph represented proposed network configuration has incremental change from the graph represented current network configuration based on candidate paths that are related to a geographic location; FIG. 1; ¶ 0024 “… the OCDE 140 may determine respective performance parameters (e.g., corresponding to path relative performance parameters …) for respective candidate paths in the network 100 by combining the performance parameters (e.g., the component relative performance parameters) determined for those network components 110 included in the respective candidate paths …”). See motivation to combine for claim 1.

For Claim 9, Smith-Vasseur-Tan-Wohlert teaches the method as in claim 1, wherein the one or more software-defined networks comprise at least one software-defined wide area network (SD-WAN) (Smith FIG. 1; ¶ 0020 “… the SDN may include a software-defined wide area network (SD-WAN), software-defined local area network (LAN), software-defined metropolitan area network (MAN), or any other type of network …”).
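Claim 7, mapped above, recites concrete criteria for judging subgraph similarity: geographic location, device type, software version, and traffic pattern for the online application. One way such a check might work is to summarize each subgraph by those features and score agreement; this is a toy sketch, and the feature values and the threshold are illustrative assumptions, not drawn from the claims or the cited art:

```python
# Score how similar two subgraphs are on the claim-7 criteria. Each subgraph
# is summarized by a small feature dict; all names and values are illustrative.

FEATURES = ("geo_location", "device_type", "software_version", "traffic_pattern")

def similarity(sub_a, sub_b):
    """Fraction of the claim-7 features on which the two subgraphs agree."""
    matches = sum(1 for f in FEATURES if sub_a.get(f) == sub_b.get(f))
    return matches / len(FEATURES)

def is_similar(sub_a, sub_b, threshold=0.75):
    """Treat subgraphs as similar when most features match (threshold is a toy choice)."""
    return similarity(sub_a, sub_b) >= threshold

site_1 = {"geo_location": "us-west", "device_type": "edge-router",
          "software_version": "17.3", "traffic_pattern": "saas-heavy"}
site_2 = {"geo_location": "us-west", "device_type": "edge-router",
          "software_version": "17.3", "traffic_pattern": "bulk-heavy"}

print(similarity(site_1, site_2))   # 3 of 4 features match
print(is_similar(site_1, site_2))
```

A match on a similar, better-performing subgraph is what would let the recommendation learned from one portion of the network transfer to another, which is the role the claim-7 features play in the mapping above.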
For Claim 10, Smith-Vasseur-Tan-Wohlert teaches the method as in claim 1, wherein the online application is a software-as-a-service (SaaS) application (Smith FIG. 1; FIG. 2; ¶ 0049 “… the control device 120 may apply machine learning algorithms to the monitored data and the proposed changes in order to satisfy use-cases of network planning (e.g., forecasting), network operations (e.g., SLA policy recommendation, carrier selection), what-if analysis, and network security (anomaly detection) …”; ¶ 0063 “… The external resources 280 may include any computing service available for consumption by the system 200. For example, the external resources 280 may include a cloud-based service such as a software subscription or software as a service (SaaS) …”).

For Claim 11, the claim is substantially similar to claim 1 and therefore is rejected for the same reasoning set forth above. Additionally, Smith-Vasseur-Tan-Wohlert teaches an apparatus, comprising: one or more network interfaces; a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor (Smith FIG. 4; ¶ 0088 “… The computing system 400 may include a processor 410, a memory 420, a data storage 430, and a communication unit 440, which all may be communicatively coupled …”; ¶ 0089 “… the processor 410 may include any suitable special-purpose or general-purpose computer, computing entity, or processing device including various computer hardware or software modules and may be configured to execute instructions stored on any applicable computer-readable storage media …”).

For Claim 14, the claim is substantially similar to claim 4 and therefore is rejected for the same reasoning set forth above.

For Claim 15, the claim is substantially similar to claim 5 and therefore is rejected for the same reasoning set forth above.
For Claim 17, the claim is substantially similar to claim 7 and therefore is rejected for the same reasoning set forth above.

For Claim 19, the claim is substantially similar to claim 9 and therefore is rejected for the same reasoning set forth above.

For Claim 20, the claim is substantially similar to claim 1 and therefore is rejected for the same reasoning set forth above. Additionally, Smith-Vasseur-Tan-Wohlert teaches a tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process (Smith FIG. 4; ¶ 0092 “… The memory 420 and the data storage 430 may include computer-readable storage media or one or more computer-readable storage mediums for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable storage media may be any available media that may be accessed by a general-purpose or special-purpose computer, such as the processor 410 …”; ¶ 0093 “… such computer-readable storage media may include non-transitory computer-readable storage media …”).

For Claim 23, the claim is substantially similar to claim 4 and therefore is rejected for the same reasoning set forth above.

For Claim 24, the claim is substantially similar to claim 5 and therefore is rejected for the same reasoning set forth above.

Claim Rejections - 35 USC § 103

Claims 2, 12 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Smith (US 20190036776 A1, published 01/31/2019; hereinafter Smith), in view of Vasseur et al. (US 20190306023 A1, published 10/03/2019; hereinafter Vasseur), in view of Tan et al. (US 20190205749 A1, published 07/04/2019; hereinafter Tan), in view of Wohlert et al. (US 20170195171 A1, published 07/06/2017; hereinafter Wohlert), and in further view of Vasseur et al. (US 20150333953 A1, published 11/19/2015; hereinafter Vasseur2).

For Claim 2, Smith-Vasseur-Tan-Wohlert teaches the method as in claim 1. Smith-Vasseur-Tan-Wohlert does not explicitly teach, but Vasseur2 teaches wherein the application performance of the online application is quantified based on service level agreement violations by network paths that are used to convey traffic associated with the online application (Vasseur2 discloses quantifying application performance based on determining whether the predicted performance of network paths satisfies an SLA; FIG. 9; ¶ 0081 “… Example metrics that may be used as input to the learning machine and/or may be predicted by the learning machine may include, but are not limited to, path delays, available bandwidth, jitter, packet loss, combinations thereof, or any other value that may be used to quantify the performance of the network path …”; ¶ 0082 “… At step 920, a decision is made as to whether or not the predicted performance of the primary path from step 915 satisfies an SLA, as highlighted above. In some cases, the SLA may be on a per-application basis …”).

Vasseur2 and Smith-Vasseur-Tan-Wohlert are analogous art because they are both related to network analysis. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the application performance evaluation techniques of Vasseur2 with the system of Smith-Vasseur-Tan-Wohlert to help to predict performance of a primary path to satisfy the SLA of the application (Vasseur2, ¶ 0083).

For Claim 12, the claim is substantially similar to claim 2 and therefore is rejected for the same reasoning set forth above.

For Claim 21, the claim is substantially similar to claim 2 and therefore is rejected for the same reasoning set forth above.

Claim Rejections - 35 USC § 103

Claims 3, 13 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Smith (US 20190036776 A1, published 01/31/2019; hereinafter Smith), in view of Vasseur et al. (US 20190306023 A1, published 10/03/2019; hereinafter Vasseur), in view of Tan et al. (US 20190205749 A1, published 07/04/2019; hereinafter Tan), in view of Wohlert et al. (US 20170195171 A1, published 07/06/2017; hereinafter Wohlert), and in further view of Fedor et al. (US 20130290525 A1, published 10/31/2013; hereinafter Fedor).

For Claim 3, Smith-Vasseur-Tan-Wohlert teaches the method as in claim 1. Smith-Vasseur-Tan-Wohlert does not explicitly teach, but Fedor teaches wherein the application performance of the online application is quantified based on feedback provided by users of the online application (Fedor discloses calculating a service performance measure associated with users’ feedback; ¶ 0029 “… the system automatically updates mapping between individual service measures (i.e. S-KPIs) and overall service performance measure (i.e. QoSS) so that the calculated QoSS could be as close to the users’ subjective measure of service quality (expressed by the so-called UR-QoSS classes) as possible. To do that the system collects user feedback about service performance (UR-QoSS) from a sample of service users …”).

Fedor and Smith-Vasseur-Tan-Wohlert are analogous art because they are both related to network analysis. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the application performance evaluation techniques of Fedor with the system of Smith-Vasseur-Tan-Wohlert to help to evaluate the Quality of Experience of a service that is provided to users (Fedor, ¶ 0003).

For Claim 13, the claim is substantially similar to claim 3 and therefore is rejected for the same reasoning set forth above.

For Claim 22, the claim is substantially similar to claim 3 and therefore is rejected for the same reasoning set forth above.

Citation of Pertinent Prior Art

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is listed below:

i. Higgins et al.
(US 7120680 B1) teaches that mechanisms and techniques operate in a computerized device to provide a network analyzer that identifies a useable network configuration in an existing network configuration. The network analyzer receives a preferred network configuration defining a preferred network topology and analyzes an existing network configuration to produce an existing network topology. The network analyzer then compares the preferred network topology to the existing network topology, for example using a graph matching technique, to identify a useable network configuration within the existing network configuration that most closely supports operation of the preferred network configuration (Higgins, Abstract).

ii. Simaria et al. (US 10382272 B1) teaches that an example network device includes a memory configured to store existing configuration information formatted according to a high-level structured input format for the network device, and a processor comprising digital logic circuitry and configured to receive data defining new configuration information formatted according to the high-level structured input format, determine one or more differences between the new configuration information and the existing configuration information, translate the one or more differences into one or more sets of data defining device-level configuration changes for the network device without translating the entire new configuration information, and configure the network device to update existing device-level configuration for the network device according to the sets of data defining the device-level configuration changes (Simaria, Abstract).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZONGHUA DU, whose telephone number is (408) 918-7596. The examiner can normally be reached Monday - Friday, 8 AM - 5 PM PST.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Follansbee, can be reached at (571) 272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Z.D./
Examiner, Art Unit 2444

/SCOTT B CHRISTENSEN/
Primary Examiner, Art Unit 2444
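The SLA-based path evaluation that the cited Vasseur2 passages describe (¶¶ 0081-0082) can be illustrated with a minimal sketch: a predictive model estimates per-path metrics (delay, jitter, loss, bandwidth), and the prediction is checked against a per-application SLA. The metric names, threshold values, and the trivial moving-average "predictor" below are hypothetical stand-ins for the learning machine the reference contemplates; none of it is taken from a cited patent.

```python
# Hypothetical sketch of the Vasseur2-style check (¶¶ 0081-0082):
# predicted path metrics are compared against a per-application SLA.
from dataclasses import dataclass

@dataclass
class PathMetrics:
    delay_ms: float
    jitter_ms: float
    loss_pct: float
    bandwidth_mbps: float

# Per-application SLA thresholds (hypothetical values).
SLAS = {
    "voice": PathMetrics(delay_ms=150, jitter_ms=30, loss_pct=1.0, bandwidth_mbps=0.1),
    "video": PathMetrics(delay_ms=300, jitter_ms=50, loss_pct=2.0, bandwidth_mbps=5.0),
}

def predict_metrics(history: list) -> PathMetrics:
    """Stand-in for the learning machine: a simple mean of recent
    observations; the reference contemplates an actual trained model."""
    n = len(history)
    return PathMetrics(
        delay_ms=sum(m.delay_ms for m in history) / n,
        jitter_ms=sum(m.jitter_ms for m in history) / n,
        loss_pct=sum(m.loss_pct for m in history) / n,
        bandwidth_mbps=sum(m.bandwidth_mbps for m in history) / n,
    )

def satisfies_sla(predicted: PathMetrics, app: str) -> bool:
    """Analogue of step 920: does the predicted primary-path performance
    meet the application's SLA? Bandwidth is a floor; the rest are ceilings."""
    sla = SLAS[app]
    return (predicted.delay_ms <= sla.delay_ms
            and predicted.jitter_ms <= sla.jitter_ms
            and predicted.loss_pct <= sla.loss_pct
            and predicted.bandwidth_mbps >= sla.bandwidth_mbps)

history = [PathMetrics(120, 20, 0.5, 8.0), PathMetrics(140, 25, 0.8, 7.0)]
print(satisfies_sla(predict_metrics(history), "voice"))  # prints True
```

If the check fails, the reference's scheme would fall through to evaluating alternate paths against the same per-application SLA rather than rerouting blindly.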

Prosecution Timeline

Nov 22, 2021
Application Filed
Apr 19, 2023
Non-Final Rejection — §103
Jul 10, 2023
Interview Requested
Jul 25, 2023
Examiner Interview Summary
Jul 25, 2023
Response Filed
Jul 25, 2023
Applicant Interview (Telephonic)
Aug 31, 2023
Final Rejection — §103
Oct 10, 2023
Interview Requested
Oct 25, 2023
Examiner Interview Summary
Oct 25, 2023
Applicant Interview (Telephonic)
Dec 08, 2023
Request for Continued Examination
Dec 11, 2023
Response after Non-Final Action
Dec 22, 2023
Non-Final Rejection — §103
Feb 23, 2024
Interview Requested
Mar 11, 2024
Examiner Interview Summary
Mar 11, 2024
Applicant Interview (Telephonic)
Apr 08, 2024
Response Filed
Jun 22, 2024
Final Rejection — §103
Dec 17, 2024
Interview Requested
Dec 27, 2024
Applicant Interview (Telephonic)
Dec 27, 2024
Examiner Interview Summary
Dec 30, 2024
Request for Continued Examination
Jan 13, 2025
Response after Non-Final Action
Jan 25, 2025
Non-Final Rejection — §103
Apr 17, 2025
Interview Requested
Apr 23, 2025
Applicant Interview (Telephonic)
Apr 23, 2025
Examiner Interview Summary
Apr 30, 2025
Response Filed
Jul 22, 2025
Final Rejection — §103
Jan 15, 2026
Applicant Interview (Telephonic)
Jan 15, 2026
Examiner Interview Summary
Jan 21, 2026
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Feb 27, 2026
Examiner Interview (Telephonic)
Mar 03, 2026
Non-Final Rejection — §103
Apr 13, 2026
Examiner Interview Summary
Apr 13, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603929
Metrics Collection And Reporting In 5G Media Streaming
2y 5m to grant Granted Apr 14, 2026
Patent 12592861
ADAPTIVE BATCH PROCESSING METHOD AND SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12562961
OPERATING AN AUTOMATION SYSTEM OF A MACHINE OR AN INSTALLATION
2y 5m to grant Granted Feb 24, 2026
Patent 12476892
METHOD AND SYSTEM FOR SELECTING DATA CENTERS BASED ON NETWORK METERING
2y 5m to grant Granted Nov 18, 2025
Patent 12469289
VIDEO GENERATION USING A HEADLESS BROWSER
2y 5m to grant Granted Nov 11, 2025
Based on the 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+45.9%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
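Since the note above says the grant probability is derived from the career allow rate, the headline figure can be reproduced directly from the stated career data (47 granted of 78 resolved). The cohort split behind the +45.9% interview lift is not shown on this page, so only the allow rate is checked here:

```python
# Career allow rate from the stated data: 47 granted / 78 resolved cases.
granted, resolved = 47, 78
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints 60.3%, displayed above rounded to 60%
```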
