Prosecution Insights
Last updated: April 19, 2026
Application No. 18/049,159

MULTI-DOMAIN NETWORK DATA FLOW MODELING

Status: Final Rejection (§101, §103)
Filed: Oct 24, 2022
Examiner: DINH, DUNG C
Art Unit: 6214
Tech Center: 6200
Assignee: Qualcomm Incorporated
OA Round: 2 (Final)

Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 100% (above average; 1 granted / 1 resolved; +40.0% vs TC avg)
Interview Lift: +100.0% (allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 1m avg prosecution (3 currently pending)
Career History: 4 total applications across all art units

Statute-Specific Performance

§101: 6.7% (-33.3% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 1 resolved case.
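The "vs TC avg" deltas above are plain differences between the examiner's per-statute rate and the Tech Center average estimate. A minimal sketch that recomputes them (the 40.0% baseline is inferred from the displayed figures, and all names are illustrative):

```python
# Sketch: recompute the "vs TC avg" deltas shown above.
# Rates are the examiner's per-statute figures; the Tech Center
# average estimate (40.0%) is inferred from the displayed deltas.
TC_AVG = 40.0  # percent, estimate

examiner_rates = {"101": 6.7, "103": 46.7, "102": 6.7, "112": 13.3}

deltas = {s: round(r - TC_AVG, 1) for s, r in examiner_rates.items()}
print(deltas)  # {'101': -33.3, '103': 6.7, '102': -33.3, '112': -26.7}
```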

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 2, 4-11, and 13-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 10, 19, and 25 recite the limitations of generating a data flow model associated with a network entity of a multi-domain network, obtaining the set of flow rates that are calculated based at least in part on the data flow model, and then selectively updating the flow model based at least in part on the accuracy of the data flow model, with accuracy determined based at least in part on measured and expected flow rates. The claims recite these limitations, as drafted, as a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic network computer components. That is, other than reciting “by the network entity/non-transitory computer-readable medium/apparatus,” nothing in the claim element precludes the step from practically being performed in the mind. For example, for the “by the network entity/non-transitory computer-readable medium/apparatus” language, the claims may encompass a user simply comparing measured and expected flow rates for their accuracy to a predetermined threshold (e.g., a known expected rate) in their mind, and then updating the network model using the given flow values according to an algorithm. 
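The claimed loop, as the rejection characterizes it (compare measured against expected flow rates, judge accuracy against a threshold, and update the model only when it is inaccurate), can be sketched as follows. This is an illustrative reconstruction of the claim language, not code from the application; the mean-relative-error accuracy metric and all names are assumptions.

```python
# Illustrative sketch of the claimed loop as characterized in the
# rejection: accuracy is judged from measured vs. expected flow rates,
# and the model is updated only when it is insufficiently accurate.
# The mean-relative-error metric and all names are assumptions.

def model_accuracy(measured, expected):
    """1.0 means a perfect match between measured and expected rates."""
    errors = [abs(m - e) / e for m, e in zip(measured, expected) if e > 0]
    return 1.0 - sum(errors) / len(errors)

def selectively_update(model, measured, expected, threshold=0.9):
    if model_accuracy(measured, expected) < threshold:
        # Placeholder for the update step (e.g., refit link parameters).
        model = dict(model, revision=model.get("revision", 0) + 1)
    return model

model = {"revision": 0}
model = selectively_update(model, measured=[9.0, 4.0], expected=[10.0, 5.0])
print(model["revision"])  # 1 (accuracy 0.85 < 0.9, so the model was updated)
```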
Obtaining measured and expected flow rates amounts to no more than mere data gathering. See MPEP 2106.04(a)(2)(III)(C). The mere nominal recitation of network components does not take the claim limitations out of the abstract idea grouping of mental processes. As per the interpretation of claim 25 under 35 USC § 112(f), the specification includes the performance of the claimed functions using a general purpose computer and an algorithm, which does not overcome this analysis. This judicial exception is not integrated into a practical application because the generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. In essence, the claims are performed at a high level of generality, and amount to generic computer network components performing analysis at a high level. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they do not impose any meaningful limits on performing the abstract idea. In other words, the functions performed by generic computer components amount to well-understood, routine, conventional computer functions. See MPEP § 2106.05(d). In addition, the claims recite an abstract idea without significantly more than the abstract idea itself. As noted with the abstract idea analysis, the claims amount to the mental processes of generating a data model, obtaining measured and expected flow rates, and updating the model based on the accuracy of the determined flow rates. But for generic computer components, nothing is recited in the claims to transform the abstract idea into a practical application or beyond what is considered well-understood, routine, and conventional. See MPEP §§ 2106.05(a), 2106.05(d).

Claims 2 and 11 additionally recite the feature of performing data flow management based on the data flow model. 
The limitation itself is recited at a high level of generality, without specific detail outlining the data flow management that would preclude instruction from the human mind to perform the flow management. In addition, the limitation fails to go beyond the principle of merely applying an application and does not recite a practical application or amount to significantly more. See MPEP 2106.04(d)(I).

Claims 4, 13, 20, and 26 additionally recite the feature of generating an updated data flow model based at least in part on iteratively updating the data flow model. The limitation itself is recited at a high level of generality, without specific detail outlining the particulars of improving the model itself on an iterative basis. In addition, the limitation fails to go beyond the principle of merely applying an application and does not recite a practical application or amount to significantly more. See MPEP 2106.04(d)(I). As per the interpretation of claim 26 under 35 USC § 112(f), the specification includes the performance of the claimed functions using a general purpose computer and an algorithm, which does not overcome this analysis.

Claims 5, 14, 21, and 27 additionally recite iteratively updating the flow model until a condition (i.e., an accuracy threshold based on the difference of measured and expected flow rates) is satisfied. Adding a mere conditional constraint based on the difference between two variables compared to a threshold does not limit the claim outside of the abstract idea category of mental processes. In addition, the limitation fails to go beyond the principle of merely applying an application and does not recite a practical application or amount to significantly more. See MPEP 2106.04(d)(I). As per the interpretation of claim 27 under 35 USC § 112(f), the specification includes the performance of the claimed functions using a general purpose computer and an algorithm, which does not overcome this analysis. 
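Claims 5, 14, 21, and 27, as characterized above, add only a termination condition: repeat the update until the measured/expected difference satisfies a threshold. A hypothetical sketch of such a conditional loop (the update rule, the names, and the toy convergence behavior are assumptions, not the application's algorithm):

```python
# Hypothetical sketch: iterate a model update until the maximum relative
# gap between measured and expected flow rates meets a threshold.
def iterate_until_accurate(expected, measure, update, threshold=0.05, max_iters=100):
    """Return the number of updates needed to satisfy the threshold."""
    for i in range(max_iters):
        measured = measure()
        gap = max(abs(m - e) / e for m, e in zip(measured, expected))
        if gap <= threshold:
            return i  # condition satisfied after i updates
        expected = update(expected, measured)  # bring the model's rates closer
    return max_iters

# Toy run: each update moves the expected rates halfway toward the measured ones.
updates_needed = iterate_until_accurate(
    expected=[6.0, 3.0],
    measure=lambda: [10.0, 5.0],
    update=lambda exp, meas: [(e + m) / 2 for e, m in zip(exp, meas)],
)
print(updates_needed)  # 4
```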
Claims 6, 15, 22, and 28 additionally recite iteratively updating the flow model based on the presence of a bottleneck link outside of the domain. Updating the model based on the detection of the well-understood idea of a network bottleneck does not limit the claim outside of the abstract idea category of mental processes. In addition, the limitation fails to go beyond the principle of merely applying an application and does not recite a practical application or amount to significantly more. See MPEP 2106.04(d)(I).

Claims 7 and 16 additionally recite iteratively using the data model to indicate the presence of bottleneck and non-bottleneck links. Using a high-level network model to identify network bottleneck links does not limit the claim outside of the abstract idea category of mental processes. In addition, the limitation fails to go beyond the principle of merely applying an application and does not recite a practical application or amount to significantly more. See MPEP 2106.04(d)(I).

Claims 8, 17, 23, and 29 additionally recite calculating expected and measured flow rates using capacity of links and observed transmission rate through the links. Calculating measured and expected flow rates through the use of standard network statistics does not limit the claim outside of the abstract idea category of mental processes. In addition, the limitation fails to go beyond the principle of merely applying an application and does not recite a practical application or amount to significantly more. See MPEP 2106.04(d)(I).

Claims 9, 18, 24, and 30 additionally recite identifying a data flow path with an expected flow rate greater than the measured flow rate and adding a virtual link associated with the data flow path to the model. Comparing expected flow rates to measured flow rates and adding virtual links to alleviate congestion does not limit the claim outside of the abstract idea category of mental processes. 
In addition, the limitation fails to go beyond the principle of merely applying an application and does not recite a practical application or amount to significantly more. See MPEP 2106.04(d)(I). As per the interpretation of claim 25 under 35 USC § 112(f), the specification includes the performance of the claimed functions using a general purpose computer and an algorithm, which does not overcome this analysis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. 
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 3, 4, 7, 8, 10, 11, 12, 13, 16, 17, 19, 20, 23, 25, 26, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Liron et al. (US 5598532 A), hereinafter “Liron”, in view of Yadav et al. (US 20180359172 A1), hereinafter “Yadav”, in view of Ganesh et al. (US 20170288991 A1), hereinafter “Ganesh”.

Regarding claim 1, Liron teaches a method performed by a network entity, comprising: generating a data flow model for a domain associated with the network entity ([Column 4, Lines 47-62] "Next, optimization method 100 builds 103 an overall network model 27 from local data collected by segment collectors 17, using consolidator 25. In the preferred embodiment, the input to consolidator 25 comes either directly from segment collectors 17 or indirectly from segment collectors 17 via network management console 23. In either case consolidator 25 produces a network model 27 of the portions of network 11 for which data was collected, including its network topology, the clients 15, switching devices 19, links 41, LAN segments 13, the network protocols employed, and the traffic flows. Alternatively, network model 27 can be based on data acquired from both segment collectors 27 and network management console 23. In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory." FIG. 
2 step 103 states a network model is made according to collected network data that includes traffic flows.), obtaining a set of measured flow rates and a set of expected flow rates that are calculated based at least in part on the data flow model ([Column 4, Lines 36-46] "Referring to FIG. 2, optimization method 100 collects 101 data describing network 11, including its topology and traffic flow patterns. Data collection is accomplished in the preferred embodiment using segment collectors 17 on the LAN segments 13 of interest; information about other LAN segments 13 can be ignored if desired. These segment collectors 17 collect data local to their attached LAN segment 13. Some conventional networks 11 use segment collectors 17 (e.g. SNMP, RMON, network analyzers, etc.) which may relay the collected information to a network management console 23." The collection of data describing a network includes collecting traffic flow patterns. [Column 6, Lines 12-22] "Optimization process 200 determines 209 the amount of network traffic known to flow between the given clients 15 and R over a given time t. These traffic flow amounts are stored in TRAFFIC. TRAFFIC has C elements, C being the number of clients 15 in CLIENTS. The j-th entry in TRAFFIC comprises TRAFFIC[j, client_r] for the unidirectional traffic flow (for example in bytes) from CLIENT[j] to R, and TRAFFIC[j, r_client] for the traffic from R to CLIENT[j]. TRAFFIC is formed from network model 27 using actual measurements, or through estimation, for example, using simulation by simulator 37." Known flow is interpreted by the examiner as expected flow. [Column 11, Lines 11-24] "which assigns a score dependent on the values of the traffic variables A, B, C. For example, Eq. (9) can define a function for scoring hub(p) based on the delays incurred when going through a switching element 19 between the partitions on LAN segment 13 at hub(p). 
The delays in and between partitions PA(p) and PB(p) are themselves functions of the traffic values A, B, and C. The scoring function can assign a score which is equal to the average byte delay depending on which of PA(p), PB(p) or the switching element 19 participate in transmitting the byte from its source node 16 to the destination node 16. The functions can be based on either actual network traffic and delay data acquired by segment collectors 17, or through simulation by simulator 39." Actual network data is interpreted by the examiner as the measured flow rates.). However, Liron does not teach selectively updating the data flow model based at least in part on an accuracy of the data flow model, with the accuracy determined based at least in part on the set of measured flow rates and the set of expected flow rates, nor that the domain that the model represents is part of a multi-domain network. Yadav, in the same field of endeavor, teaches updating the data flow model based at least in part on an accuracy of the data flow model, with the accuracy determined based at least in part on the set of measured flow rates and the set of expected flow rates ([0004] "A non-transitory computer-readable medium storing instructions, the instructions comprising one or more instructions that, when executed by one or more processors of a network administration device, cause the one or more processors to receive first operational information regarding a first set of network devices; receive first flow information relating to a first set of traffic flows associated with the first set of network devices; generate a model, based on a machine learning technique, to identify predicted performance of the first set of network devices with regard to the first set of traffic flows; receive or obtain second operational information and/or second flow information regarding the first set of network devices or a second set of network devices; determine path information for the 
first set of traffic flows or a second set of traffic flows using the model and based on the second operational information and/or the second flow information; configure the first set of network devices or the second set of network devices to implement the path information; and/or update the model based on a machine learning technique and based on observations after the path information is implemented." Yadav teaches a method and device that receives information relating to the traffic flow between devices, and then generates a model using a machine learning technique. The examiner interprets this model based on traffic flow to be a network data flow model. [0018] "FIGS. 1A-1D are diagrams of an overview of example implementations 100 described herein. As shown in FIG. 1A, and by reference number 102, a network administration device (shown as NAD) may receive a training set of flow information, network topology information, and operational information regarding a plurality of network devices of a network. The network administration device may receive the flow information, the network topology information, and the operational information to generate a model for determining traffic flow paths in the network or another network." Paragraph 0018 further solidifies the examiner’s interpretation that the model is one representative of a network topology and the data flows within, i.e. a network data flow model. [0034] "As shown by reference number 138, the network administration device may compare the updated operational information and/or the updated flow information to the predicted performance information outputted by the path selection model. As shown by reference number 140, the network administration device may update the path selection model using machine learning and based on the comparison of the updated operational information and/or the updated flow information to the predicted performance information. 
For example, machine learning may provide a mechanism for dynamically or iteratively improving the path selection model in view of results of using the path selection model. When observed results deviate from predicted results, the network administration device may adjust the path selection model using a machine learning algorithm to improve accuracy of the predicted results to better match the observed results." [0035] "In this way, the network administration device generates a predictive model using machine learning to determine updated path information for a network of network devices, which permits dynamic improvement and updating of the predictive model, and which may be simpler to implement than a complicated and/or static routing protocol. Furthermore, using a machine learning technique may generate a more efficient routing protocol than using a human-based technique. Further, the network administration device may avoid or limit traffic drops due to black-holing of traffic by misconfigured or malfunctioning network devices. This may be particularly advantageous for failure modes that are not detected by traditional routing protocols, such as network device degradation, black-holing, and/or the like. Also, the network administration device may continuously gather useful telemetry and/or performance information over time, which may permit analysis of network information over time" Yadav teaches that it can then update the generated model based on dynamically changing traffic information. The machine learning technique can iteratively update the model to improve the accuracy of the model and the data it represents.) However, Yadav does not teach that the domain that the model represents is part of a multi-domain network. 
Ganesh, in the same field of endeavor, teaches the domain being part of a multi-domain network ([0019] "It is another object of the present invention to provide system and method for monitoring multi-domain network, which generates visualization of different domains in a telecommunication network across layers and identifies the exact root cause of network element using the layered approach based on configuration, performance and alarm data collected across multiple domains."). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention to apply the teachings of Liron, with a method to generate a data flow model of a network/domain, and obtain the measured flow rates and expected flow rates pertaining to the connections of said generated model, with the teaching of Yadav to further update the model according to the rates obtained to be sure the model is accurate, and further with the teachings of Ganesh wherein the method is applied to a domain being part of a multi-domain network. One of ordinary skill in the art need only apply the model generation method taught by Liron, with the updating of a traffic model from Yadav, and further to apply it to a multidomain environment as taught by Ganesh to further increase the accuracy of the generated data flow model, even if a given domain does not have an oracle view of other neighboring domains. All of this allows one to generate an accurate network data flow model that may more efficiently show where bottlenecks occur.

Regarding claim 2, Liron, Yadav, and Ganesh teach the method of claim 1 (discussed above). Liron further teaches performing data flow management based at least in part on the data flow model ([Column 4, Lines 47-62] "Next, optimization method 100 builds 103 an overall network model 27 from local data collected by segment collectors 17, using consolidator 25. 
In the preferred embodiment, the input to consolidator 25 comes either directly from segment collectors 17 or indirectly from segment collectors 17 via network management console 23. In either case consolidator 25 produces a network model 27 of the portions of network 11 for which data was collected, including its network topology, the clients 15, switching devices 19, links 41, LAN segments 13, the network protocols employed, and the traffic flows. Alternatively, network model 27 can be based on data acquired from both segment collectors 27 and network management console 23. In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory." The data collected for the optimization of the model includes traffic flows, and is therefore interpreted by the examiner as a form of data flow management based on the generated model taught by Liron and showcased in FIG. 2.). Regarding claim 3, Liron, Yadav, and Ganesh teach the method of claim 2 (discussed above). Liron further teaches wherein performing data flow management comprises performing network modeling ([Column 4, Lines 47-62] "Next, optimization method 100 builds 103 an overall network model 27 from local data collected by segment collectors 17, using consolidator 25. In the preferred embodiment, the input to consolidator 25 comes either directly from segment collectors 17 or indirectly from segment collectors 17 via network management console 23. In either case consolidator 25 produces a network model 27 of the portions of network 11 for which data was collected, including its network topology, the clients 15, switching devices 19, links 41, LAN segments 13, the network protocols employed, and the traffic flows. Alternatively, network model 27 can be based on data acquired from both segment collectors 27 and network management console 23. In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory." 
The optimization and adjustments made to the network model based on those optimizations are interpreted by the examiner as a form of network modeling.). Regarding claim 4, Liron, Yadav, and Ganesh teach the method of claim 1 (discussed above). However, Liron does not explicitly teach generating an updated data flow model based at least in part on iteratively updating the data flow model. Yadav, in the same field of endeavor, teaches generating an updated data flow model based at least in part on iteratively updating the data flow model ( [0015] "Furthermore, some implementations described herein may update the model using the machine learning technique and based on observations regarding efficacy of the configuration of the path information. In this way, the model may adapt to changing network conditions and topology (e.g., in real time as the network conditions and/or the topology change), which may require human intervention for a predefined routing policy. Thus, network throughput, reliability, and conformance with SLAs is improved. Further, some implementations described herein may use a rigorous, well-defined approach to path selection, which may reduce uncertainty, subjectivity, and inefficiency that may be introduced by a human actor attempting to define a routing policy based on observations regarding network performance." [0016] "Also, some implementations described herein may identify the best paths for traffic associated with different SLAs. Since these best paths may iteratively change based on traffic load and node behavior/faults, the machine learning component of implementations described herein may regularly reprogram the paths for particular traffic flows across the network domain. This reprogramming may be based on dynamic prediction of the traffic flows, traffic drops and delays. 
Thus, implementations described herein may improve adaptability and versatility of path computation for the network domain in comparison to a rigidly defined routing protocol." [0034] "As shown by reference number 138, the network administration device may compare the updated operational information and/or the updated flow information to the predicted performance information outputted by the path selection model. As shown by reference number 140, the network administration device may update the path selection model using machine learning and based on the comparison of the updated operational information and/or the updated flow information to the predicted performance information. For example, machine learning may provide a mechanism for dynamically or iteratively improving the path selection model in view of results of using the path selection model. When observed results deviate from predicted results, the network administration device may adjust the path selection model using a machine learning algorithm to improve accuracy of the predicted results to better match the observed results" Based on the above, Yadav teaches that flow information changes, and is therefore dynamic. The best paths in the model may “iteratively change” and the machine learning component can update the model based on this “updated flow information”. In order to keep the current model accurate, Yadav may then “dynamically or iteratively improving the path selection model in view of results of using the path selection model”. The examiner interprets this as generating an updated flow model based on iteratively updating the flow model. ). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention to apply the combination teachings of Liron, Yadav, and Ganesh, with the further teaching of Yadav wherein the model is iteratively updated. 
The motivation to do so would be to ensure an accurate data flow model is available. Data flows can be dynamic, and bottlenecks may occur that slow the flow. Having an updated model that reflects these changes would allow one to better allocate flow within a network/domain.

Regarding claim 7, Liron, Yadav, and Ganesh teach the method of claim 1 (discussed above). Liron further teaches wherein the data flow model indicates one or more of bottleneck links or non-bottleneck links of one or more data flows or paths within the domain ([Column 4, Lines 47-62] "Next, optimization method 100 builds 103 an overall network model 27 from local data collected by segment collectors 17, using consolidator 25. In the preferred embodiment, the input to consolidator 25 comes either directly from segment collectors 17 or indirectly from segment collectors 17 via network management console 23. In either case consolidator 25 produces a network model 27 of the portions of network 11 for which data was collected, including its network topology, the clients 15, switching devices 19, links 41, LAN segments 13, the network protocols employed, and the traffic flows. Alternatively, network model 27 can be based on data acquired from both segment collectors 27 and network management console 23. In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory." The generated network model taught in Liron includes topology, traffic flows, links, etc. The examiner interprets these links to include "non-bottleneck links".).

Regarding claim 8, Liron, Yadav, and Ganesh teach the method of claim 1 (discussed above). 
Liron further teaches wherein the set of expected flow rates is based at least in part on calculated capacities of links traversed by data flows or paths within the domain ([Column 13, Lines 27-43] "Tables 1 and 2 below illustrate the results of a hub tree optimization process 500 applied to an actual LAN segment 13 called "<134.177.120.9>" whose traffic flow is shown in FIG. 6. The optimization goal 31 was set to "minimize the maximum load". The LAN segment 13 started off with a load of 61%. Optimization process 500 resulted in two segments partitioned at a hub 402 called "<134.177.120.9_hub9>." The resulting partitions now operate at 21% and 42% of maximum segment capacity. That the sum of the two loads is above the original LAN segment load of 61% indicates that some (very little) traffic is crossing between the two new segments. Decreasing a segment load from 61% to 42% typically reduces the delay by a factor of about 10 because at a load of 61% the segment operates much above the "knee" of the non-linear delay vs. load curve." Liron's FIG. 6 showcases the bidirectional traffic flow from node to node in a network. Liron teaches that there is an expected flow rate between these nodes, further represented in Tables 1 and 2, which collect the flow information and organize the nodes by expected capacity (Table 1 with an expected load of 21%, and Table 2 with nodes that have an expected load of 42%).), and wherein the set of measured flow rates is based at least in part on an observed transmission rate of data flow through the links ([Column 6, Lines 12-22] "Optimization process 200 determines 209 the amount of network traffic known to flow between the given clients 15 and R over a given time t. These traffic flow amounts are stored in TRAFFIC. TRAFFIC has C elements, C being the number of clients 15 in CLIENTS. 
The j-th entry in TRAFFIC comprises TRAFFIC[j, client_r] for the unidirectional traffic flow (for example in bytes) from CLIENT[j] to R, and TRAFFIC[j, r_client] for the traffic from R to CLIENT[j]. TRAFFIC is formed from network model 27 using actual measurements, or through estimation, for example, using simulation by simulator 37." The "actual measurements" as taught by Liron are interpreted by the examiner as a measured flow rate of the observed traffic/transmission rates between nodes in the generated network model. This information is then stored in a list labeled TRAFFIC and used to generate the network model.). Regarding claim 10, Liron teaches a network entity for wireless communication, comprising: a memory (FIG. 1, label 43); and one or more processors, coupled to the memory ([Column 4, Lines 60-62] "In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory."), configured to: generate a data flow model for a domain associated with the network entity ([Column 4, Lines 47-62] "Next, optimization method 100 builds 103 an overall network model 27 from local data collected by segment collectors 17, using consolidator 25. In the preferred embodiment, the input to consolidator 25 comes either directly from segment collectors 17 or indirectly from segment collectors 17 via network management console 23. In either case consolidator 25 produces a network model 27 of the portions of network 11 for which data was collected, including its network topology, the clients 15, switching devices 19, links 41, LAN segments 13, the network protocols employed, and the traffic flows. Alternatively, network model 27 can be based on data acquired from both segment collectors 27 and network management console 23. In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory." FIG. 
2 step 103 states a network model is made according to collected network data, that includes traffic flows.), obtaining a set of measured flow rates and a set of expected flow rates that are calculated based at least in part on the data flow model ([Column 4, Line 36] "Referring to FIG. 2, optimization method 100 collects 101 data describing network 11, including its topology and traffic flow patterns. Data collection is accomplished in the preferred embodiment using segment collectors 17 on the LAN segments 13 of interest; information about other LAN segments 13 can be ignored if desired. These segment collectors 17 collect data local to their attached LAN segment 13. Some conventional networks 11 use segment collectors 17 (e.g. SNMP, RMON, network analyzers, etc.) which may relay the collected information to a network management console 23." The collection of data describing a network includes collecting traffic flow patterns. [Column 6, Lines 12-22] "Optimization process 200 determines 209 the amount of network traffic known to flow between the given clients 15 and R over a given time t. These traffic flow amounts are stored in TRAFFIC. TRAFFIC has C elements, C being the number of clients 15 in CLIENTS. The j-th entry in TRAFFIC comprises TRAFFIC[j, client.sub.-- r] for the unidirectional traffic flow (for example in bytes) from CLIENT[j] to R, and TRAFFIC[j, r.sub.-- client] for the traffic from R to CLIENT[j]. TRAFFIC is formed from network model 27 using actual measurements, or through estimation, for example, using simulation by simulator 37.” Known flow is interpreted by the examiner as expected flow. [Column 11, Lines 11-24] "which assigns a score dependent on the values of the traffic variables A, B, C. For example, Eq. (9) can define a function for scoring hub(p) based on the delays incurred when going through a switching element 19 between the partitions on LAN segment 13 at hub(p). 
The delays in and between partitions PA(p) and PB(p) are themselves functions of the traffic values A, B, and C. The scoring function can assign a score which is equal to the average byte delay depending on which of PA(p), PB(p) or the switching element 19 participate in transmitting the byte from its source node 16 to the destination node 16. The functions can be based on either actual network traffic and delay data acquired by segment collectors 17, or through simulation by simulator 39." Actual network data is interpreted by the examiner as the measured flow rates.). However, Liron does not teach selectively updating the data flow model based at least in part on an accuracy of the data flow model, with the accuracy determined based at least in part on the set of measured flow rates and the set of expected flow rates, nor that the domain represented by the model is part of a multi-domain network. Yadav, in the same field of endeavor, teaches updating the data flow model based at least in part on an accuracy of the data flow model, with the accuracy determined based at least in part on the set of measured flow rates and the set of expected flow rates ([0004] "A non-transitory computer-readable medium storing instructions, the instructions comprising one or more instructions that, when executed by one or more processors of a network administration device, cause the one or more processors to receive first operational information regarding a first set of network devices; receive first flow information relating to a first set of traffic flows associated with the first set of network devices; generate a model, based on a machine learning technique, to identify predicted performance of the first set of network devices with regard to the first set of traffic flows; receive or obtain second operational information and/or second flow information regarding the first set of network devices or a second set of network devices; determine path information for the 
first set of traffic flows or a second set of traffic flows using the model and based on the second operational information and/or the second flow information; configure the first set of network devices or the second set of network devices to implement the path information; and/or update the model based on a machine learning technique and based on observations after the path information is implemented." Yadav teaches a method and device that receives information relating to the traffic flow between devices, and then generates a model using a machine learning technique. The examiner interprets this model based on traffic flow to be a network data flow model. [0018] "FIGS. 1A-1D are diagrams of an overview of example implementations 100 described herein. As shown in FIG. 1A, and by reference number 102, a network administration device (shown as NAD) may receive a training set of flow information, network topology information, and operational information regarding a plurality of network devices of a network. The network administration device may receive the flow information, the network topology information, and the operational information to generate a model for determining traffic flow paths in the network or another network." Paragraph 0018 further solidifies the examiner’s interpretation that the model is one representative of a network topology and the data flows within, i.e. a network data flow model. [0034] "As shown by reference number 138, the network administration device may compare the updated operational information and/or the updated flow information to the predicted performance information outputted by the path selection model. As shown by reference number 140, the network administration device may update the path selection model using machine learning and based on the comparison of the updated operational information and/or the updated flow information to the predicted performance information. 
For example, machine learning may provide a mechanism for dynamically or iteratively improving the path selection model in view of results of using the path selection model. When observed results deviate from predicted results, the network administration device may adjust the path selection model using a machine learning algorithm to improve accuracy of the predicted results to better match the observed results." [0035] "In this way, the network administration device generates a predictive model using machine learning to determine updated path information for a network of network devices, which permits dynamic improvement and updating of the predictive model, and which may be simpler to implement than a complicated and/or static routing protocol. Furthermore, using a machine learning technique may generate a more efficient routing protocol than using a human-based technique. Further, the network administration device may avoid or limit traffic drops due to black-holing of traffic by misconfigured or malfunctioning network devices. This may be particularly advantageous for failure modes that are not detected by traditional routing protocols, such as network device degradation, black-holing, and/or the like. Also, the network administration device may continuously gather useful telemetry and/or performance information over time, which may permit analysis of network information over time" Yadav teaches that the network administration device can then update the generated model based on dynamically changing traffic information. The machine learning technique can iteratively update the model to improve the accuracy of the model and the data it represents.). However, Yadav does not teach that the domain represented by the model is part of a multi-domain network. 
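The claimed selective update, as mapped above, amounts to scoring the model's accuracy from expected versus measured flow rates and refreshing the model only when that score falls below a threshold. A minimal sketch, assuming a mean-relative-error metric and a hypothetical rebuild routine (none of these names come from the cited references):

```python
# Hypothetical sketch of selectively updating a data flow model based on
# its accuracy, where accuracy is derived from expected vs. measured
# per-link flow rates. The metric, threshold, and rebuild routine are
# all illustrative assumptions.

def model_accuracy(expected, measured):
    """1 minus the mean relative error between expected and measured rates."""
    errs = [abs(e - m) / max(abs(e), 1e-9) for e, m in zip(expected, measured)]
    return 1.0 - sum(errs) / len(errs)

def rebuild_flow_model(measured):
    # Placeholder for regenerating the model from fresh measurements.
    return {"rates": list(measured)}

def maybe_update(model, expected, measured, threshold=0.9):
    """Refresh the model only when its predictions have drifted too far."""
    if model_accuracy(expected, measured) < threshold:
        return rebuild_flow_model(measured)
    return model  # still accurate enough; keep the current model

model = {"rates": [10.0, 20.0]}
same = maybe_update(model, [10.0, 20.0], [10.0, 19.0])  # small drift: kept
new = maybe_update(model, [10.0, 20.0], [10.0, 12.0])   # large drift: rebuilt
```

The "selective" character lies in the threshold check: an accurate model is left untouched, and only significant drift between expected and measured rates triggers a rebuild.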
Ganesh, in the same field of endeavor, teaches the domain being part of a multi-domain network ([0019] "It is another object of the present invention to provide system and method for monitoring multi-domain network, which generates visualization of different domains in a telecommunication network across layers and identifies the exact root cause of network element using the layered approach based on configuration, performance and alarm data collected across multiple domains."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Liron, which provide a method to generate a data flow model of a network/domain and to obtain the measured and expected flow rates pertaining to the connections of the generated model, with the teaching of Yadav to update the model according to the obtained rates so that the model remains accurate, and further with the teachings of Ganesh, wherein the method is applied to a domain that is part of a multi-domain network. One of ordinary skill in the art need only apply the model generation method taught by Liron, with the traffic-model updating of Yadav, to a multi-domain environment as taught by Ganesh to further increase the accuracy of the generated data flow model, even if a given domain does not have an oracle view of neighboring domains, so as to generate an accurate network data flow model that may more efficiently show where bottlenecks occur. Regarding claim 11, Liron, Yadav, and Ganesh teach the entity of claim 10 (discussed above). 
Liron further teaches wherein the one or more processors are further configured to: perform data flow management based at least in part on the data flow model ([Column 4, Lines 47-62] "Next, optimization method 100 builds 103 an overall network model 27 from local data collected by segment collectors 17, using consolidator 25. In the preferred embodiment, the input to consolidator 25 comes either directly from segment collectors 17 or indirectly from segment collectors 17 via network management console 23. In either case consolidator 25 produces a network model 27 of the portions of network 11 for which data was collected, including its network topology, the clients 15, switching devices 19, links 41, LAN segments 13, the network protocols employed, and the traffic flows. Alternatively, network model 27 can be based on data acquired from both segment collectors 27 and network management console 23. In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory." The data collected for the optimization of the model includes traffic flows, and is therefore interpreted by the examiner as a form of data flow management based on the generated model taught by Liron and showcased in FIG. 2.). Regarding claim 12, Liron, Yadav, and Ganesh teach the entity of claim 11 (discussed above). Liron further teaches wherein the one or more processors, to perform data flow management, are configured to perform network modeling ([Column 4, Lines 47-62] "Next, optimization method 100 builds 103 an overall network model 27 from local data collected by segment collectors 17, using consolidator 25. In the preferred embodiment, the input to consolidator 25 comes either directly from segment collectors 17 or indirectly from segment collectors 17 via network management console 23. 
In either case consolidator 25 produces a network model 27 of the portions of network 11 for which data was collected, including its network topology, the clients 15, switching devices 19, links 41, LAN segments 13, the network protocols employed, and the traffic flows. Alternatively, network model 27 can be based on data acquired from both segment collectors 27 and network management console 23. In the preferred embodiment, consolidator 25 is implemented as a processor executing software routines stored in memory." The optimization and adjustments made to the network model based on those optimizations are interpreted by the examiner as a form of network modeling.). Regarding claim 13, Liron, Yadav, and Ganesh teach the entity of claim 10 (discussed above). However, Liron does not explicitly teach wherein the one or more processors are further configured to: generating an updated data flow model based at least in part on iteratively updating the data flow model. Yadav, in the same field of endeavor, teaches generating an updated data flow model based at least in part on iteratively updating the data flow model ( [0015] "Furthermore, some implementations described herein may update the model using the machine learning technique and based on observations regarding efficacy of the configuration of the path information. In this way, the model may adapt to changing network conditions and topology (e.g., in real time as the network conditions and/or the topology change), which may require human intervention for a predefined routing policy. Thus, network throughput, reliability, and conformance with SLAs is improved. Further, some implementations described herein may use a rigorous, well-defined approach to path selection, which may reduce uncertainty, subjectivity, and inefficiency that may be introduced by a human actor attempting to define a routing policy based on observations regarding network performance." 
[0016] "Also, some implementations described herein may identify the best paths for traffic associated with different SLAs. Since these best paths may iteratively change based on traffic load and node behavior/faults, the machine learning component of implementations described herein may regularly reprogram the paths for particular traffic flows across the network domain. This reprogramming may be based on dynamic prediction of the traffic flows, traffic drops and delays. Thus, implementations described herein may improve adaptability and versatility of path computation for the network domain in comparison to a rigidly defined routing protocol." [0034] "As shown by reference number 138, the network administration device may compare the updated operational information and/or the updated flow

Prosecution Timeline

Oct 24, 2022
Application Filed
Oct 18, 2023
Response after Non-Final Action
May 01, 2025
Non-Final Rejection — §101, §103
Jun 13, 2025
Interview Requested
Jul 16, 2025
Response Filed
Mar 24, 2026
Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+100.0%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
