DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the remarks filed 10/30/2025.
Claims 1, 2, 4-6, 8-10, 12-14, 16-18, and 20-25 are pending and presented for examination. Claims 7 and 15 are cancelled. Claims 1, 2, 4, 6, 8-10, 12-14, 16-18, and 20-23 are amended. Claims 24 and 25 are added.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 2, 9, 10, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (S.C. Lin, I. F. Akyildiz, P. Wang and M. Luo, "QoS-Aware Adaptive Routing in Multi-layer Hierarchical Software Defined Networks: A Reinforcement Learning Approach," 2016 IEEE International Conference on Services Computing (SCC), San Francisco, CA, USA, 2016, pp. 25-33, doi: 10.1109/SCC.2016.12, hereinafter “Lin”), in view of Zhang et al. (US 20160218917 A1, hereinafter “Zhang”), in view of Klinker et al. (US 20020145981 A1, hereinafter “Klinker”), in view of Dechene et al. (US 20220247643 A1, hereinafter “Dechene”), and in view of Koral et al. (US 20200396163 A1, hereinafter “Koral”).
RE claim 1, Lin discloses:
A computer-implemented method comprising:
obtaining, at a super controller and from a plurality of software-defined network (SDN) controllers that are in a plurality of time zones and are controlled by the super controller (Multi-layer hierarchical control architecture in an SDN with a Super Controller and Domain (Master) Controllers. Pg. 26, Ln 97-102; Note that, the interaction between super controller, domain controllers, and slave controllers fulfills global flow setup and responds to every control action, including actions for switches’ or controllers’ failures, migrations, load-balancing, etc. Pg. 27, Ln. 6-9, Fig. 2a; Towards this, the logically centralized control plane with global visibility is established by a physically distributed system. Pg. 27, Ln. 13-14; Each OF switch needs to send the control purpose traffic, such as the route setup requests for new flows and real-time network congestion status to the SDN controller. Pg. 26, Ln. 34-44; Examiner interpreted congestion flow status identified by IP address with wildcard usage as hot-prefixes. Deployment example in a real backbone system of Spring GIP network in Fig. 2b. Red spots are slave controllers with a group of switches underneath. Blue devices are domain controllers serving more than one slave controller. Green devices are Super Controllers to supervise the entire system. Pg. 27, Ln. 17-23, Fig. 2b; Examiner interpreted domain controller as a SDN controller and an SDN with physical distribution across multiple time zones.)
data associated with hot-prefixes in the plurality of time zones, the data associated with the hot-prefixes being obtained at each of the SDN controllers from one or more network devices in a time zone corresponding to the SDN controller (OpenFlow enabled Switches identify traffic flows in terms of IP addresses and further enhanced by wildcard usage. Each OF switch needs to send the control purpose traffic, such as the route setup requests for new flows and real-time network congestion status to the SDN controller. Pg. 26, Ln. 34-44; Examiner interpreted congestion flow status identified by IP address with wildcard usage as hot-prefixes. The domain (or master) controllers, having full accesses to switches, receive asynchronous messages (e.g., Packet-IN [4]) for flow-setup requests and are capable to modify switches’ states by sending control messages. Pg. 26-27, Ln. 107-004; Note that, the interaction between super controller and domain controllers fulfills global flow setup and responds to every control action, including actions for switches’ or controllers’ failures, migrations, load-balancing, etc. Pg. 27, Ln. 6-9; Examiner interpreted domain controller as a SDN controller per limitation.),
the hot-prefixes being requested during a plurality of time intervals in the plurality of time zones, the hot-prefixes being associated with network addresses that are frequently requested during the plurality of time intervals (Each OF switch needs to send the control purpose traffic, such as the route setup requests for new flows and real-time network congestion status to the SDN controller. Pg. 26, Ln. 34-44; Examiner interpreted congestion flow status identified by IP address with wildcard usage as hot-prefixes.; Deployment example in a real backbone system of Spring GIP network in Fig. 2b. Red spots are slave controllers with a group of switches underneath. Blue devices are domain controllers serving more than one slave controller. Green devices are Super Controllers to supervise the entire system. Pg. 27, Ln. 17-23, Fig. 2b; Examiner interpreted domain controller as a SDN controller and an SDN with physical distribution across multiple time zones.);
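For illustration only, the claimed notion of hot-prefixes, i.e., network addresses frequently requested during a time interval, can be sketched as follows. This is an examiner illustration, not any reference's actual implementation; the function name, data shapes, and threshold are hypothetical.

```python
from collections import Counter
from ipaddress import ip_network

def hot_prefixes(requests, threshold):
    """requests: iterable of (interval, prefix) pairs reported by network
    devices; returns {interval: sorted list of prefixes requested at least
    `threshold` times during that interval}."""
    counts = {}
    for interval, prefix in requests:
        # Normalize the textual prefix (e.g. "203.0.113.0/24") and tally it.
        counts.setdefault(interval, Counter())[ip_network(prefix)] += 1
    return {
        interval: sorted(str(p) for p, n in c.items() if n >= threshold)
        for interval, c in counts.items()
    }

requests = [
    ("09:00-10:00", "203.0.113.0/24"),
    ("09:00-10:00", "203.0.113.0/24"),
    ("09:00-10:00", "198.51.100.0/24"),
    ("10:00-11:00", "198.51.100.0/24"),
    ("10:00-11:00", "198.51.100.0/24"),
]
print(hot_prefixes(requests, threshold=2))
```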
Lin does not explicitly disclose, however Zhang discloses:
combining, by the super controller, the data associated with the hot-prefixes obtained from the plurality of SDN controllers in the plurality of time zones (Control plane device, super controller, performs joint inter-domain and intra-domain traffic engineering with location varying objectives. Embodiments can maximize link utilization of peering links in the interdomain and minimize link utilization internally to avoid congestion on hot prefix links. ¶¶0006, 0036, 0099; TE optimization process is a spatial multi-objective optimization, different locations such as time zones. ¶0046; A detailed traffic matrix can be estimated based on past traffic patterns. ¶0053; Virtual nodes with all address prefixes with similar QoS vector with continuous performance measurement of QoS metrics. ¶0060, Fig. 4C; Once the joint representation of the interdomain and intradomain traffic demand has been computed, then the process of determining a set of candidate paths for all source-destination (SD) pairs can be determined in the network using this joint representation (Block 603). ¶0124)
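The combining step mapped above can be sketched as a super controller merging per-time-zone hot-prefix reports from its SDN controllers into one global view, summing counts for prefixes reported in more than one zone. This is a minimal illustrative aid; the report format and names are assumptions, not Zhang's actual data structures.

```python
from collections import Counter

def combine_reports(reports):
    """reports: {time_zone: {prefix: request_count}} from each SDN
    controller; returns a single global Counter keyed by prefix."""
    combined = Counter()
    for zone_counts in reports.values():
        # Counter.update adds counts, so overlapping prefixes accumulate.
        combined.update(zone_counts)
    return combined

reports = {
    "UTC-8": {"203.0.113.0/24": 120, "198.51.100.0/24": 30},
    "UTC-5": {"203.0.113.0/24": 80},
}
combined = combine_reports(reports)
print(combined["203.0.113.0/24"])  # counts from both zones are summed
```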
Lin and Zhang do not explicitly disclose, however Klinker discloses:
predicting, by the super controller and based on the combined data associated with the hot-prefixes (Correlator, Fig. 6, generates aggregate service level statistics for each group of flows which go to the same destination in the network or Internet, prefixes. Traffic flow characteristics (or traffic profiles) are then used for future statistical manipulation and flow prediction. ¶0094, Fig. 6; Controller of Fig. 2 and Fig. 8 includes Fig. 20, a super controller, which receives information of all routing updates in the network. Event scheduler shares info with Episode Manager to communicate routing changes to local route server. An episode occurs when the routing in place cannot achieve minimum service level to a given prefix. ¶¶0141-0142, Fig. 20;), prefixes that will become hot-prefixes during a future second time interval in a second time zone of the plurality of time zones to determine predicted hot-prefixes (If the prediction algorithms, such as neural networks, determine that a particular path in use will have poor performance over an upcoming period, the network control element (i.e., controller) can take proactive action to change the path before the upcoming service degradation. ¶0063; Embodiment for maintaining traffic service level over at least two networks where traffic flows through an interconnection point. A first regional network and second regional network with a central route server that provides modified routing tables to servers in each regional network. ¶0017; Parent central route service coupled to one or more child central route servers coupled to one or more regions. ¶0123, Fig. 13; Controller of Fig. 2 and Fig. 8 includes Fig. 20, which receives information of all routing updates in the network. Event scheduler shares info with Episode Manager to communicate routing changes to local route server. An episode occurs when the routing in place cannot achieve minimum service level to a given prefix. ¶¶0141-0142, Fig. 20;)
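The prediction step can be sketched as follows. Klinker contemplates prediction algorithms such as neural networks; this illustrative sketch substitutes a naive per-prefix moving average over past intervals, with prefixes whose forecast exceeds a hypothetical threshold becoming the predicted hot-prefixes for the next interval. All names are assumptions.

```python
def predict_hot(history, threshold):
    """history: {prefix: [request counts over past intervals]};
    returns the sorted list of prefixes predicted to be hot next interval."""
    predicted = []
    for prefix, counts in history.items():
        # Naive forecast: average of past interval counts. A real system
        # could use a trained model (e.g., the neural networks Klinker names).
        forecast = sum(counts) / len(counts)
        if forecast >= threshold:
            predicted.append(prefix)
    return sorted(predicted)

history = {
    "203.0.113.0/24": [90, 110, 100],
    "198.51.100.0/24": [5, 10, 0],
}
print(predict_hot(history, threshold=50))
```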
Lin, Zhang, and Klinker do not explicitly disclose, however Dechene discloses:
transmitting, by the super controller and to a SDN controller of the plurality of SDN controllers in the time zone, an indication of the predicted hot-prefixes and an indication of the future time interval (For example, two data centers in the western US may be their own independent first-order clusters of nodes and links. A second-order hierarchical cluster can include both of those western data center objects, while there is a different cluster for data centers from the eastern US, different time zones. ¶0104; Fig. 9; In a number of embodiments, decentralized network control system deployment model 1330 can utilize a central SDN controller 1331 and local SDN controllers 1332-1334. Central SDN controller 1331 can include a central agent and a monitor. Local SDN controllers 1332-1334 can be used locally within each domain of SDN nodes, such as local SDN controller 1332 for nodes 1335, local SDN controller 1333 for nodes 1337, and local SDN controller 1334 for nodes 1336. Central SDN controller 1331 still behaves as a central authority. ¶0141, Fig. 13; In several embodiments, routing within the SDN control service is performed based on flows defined by a source address, a destination address, and a datagram classification tuple. ¶0208; Proactive flow programming determines the entire path of nodes for the flow for the SDN control service. Based on previous route lookups recorded by the routing agent and historical traffic data captured by the monitor service, a prediction can be made as to which flows are most relevant to a node for a given time period. Network control system 315 (FIG. 3) can then program the predicted flow entries before they are requested. The predictive model also can be built from training system 320 (FIG. 3) using synthetic traffic flow data on the digital twin. Predictive entries can be selected from all available candidate entries based on their modeled frequency and criticality of use for a given time of operation. ¶¶0148-0149, Fig. 3);
transmitting, by the SDN controller in the time zone, the indication of the predicted hot- prefixes and the indication of the future time interval to a plurality of network devices configured to provide networking services in the time zone prior to a start of the future time interval in the time zone (For example, two data centers in the western US may be their own independent first-order clusters of nodes and links. A second-order hierarchical cluster can include both of those western data center objects, while there is a different cluster for data centers from the eastern US, different time zones. ¶0104; Fig. 9; In a number of embodiments, decentralized network control system deployment model 1330 can utilize a central SDN controller 1331 and local SDN controllers 1332-1334. Central SDN controller 1331 can include a central agent and a monitor. Local SDN controllers 1332-1334 can be used locally within each domain of SDN nodes, such as local SDN controller 1332 for nodes 1335, local SDN controller 1333 for nodes 1337, and local SDN controller 1334 for nodes 1336. Central SDN controller 1331 still behaves as a central authority. ¶0141, Fig. 13; In several embodiments, routing within the SDN control service is performed based on flows defined by a source address, a destination address, and a datagram classification tuple. ¶0208; Proactive flow programming determines the entire path of nodes for the flow for the SDN control service. Based on previous route lookups recorded by the routing agent and historical traffic data captured by the monitor service, a prediction can be made as to which flows are most relevant to a node for a given time period. Network control system 315 (FIG. 3) can then program the predicted flow entries before they are requested. The predictive model also can be built from training system 320 (FIG. 3) using synthetic traffic flow data on the digital twin. 
Predictive entries can be selected from all available candidate entries based on their modeled frequency and criticality of use for a given time of operation. ¶¶0148-0149, Fig. 3); and
Lin, Zhang, Klinker and Dechene do not explicitly disclose, however Koral discloses:
preloading, based on the indication of the predicted hot-prefixes and the indication of the future time interval (SDN controller manages and deploys subsets of prefixes, hot prefixes based on prefix and associated volume of traffic ¶¶0008-0010, to the network devices, routers. ¶0029; Best prefixes based on usage. Different routers will have different prefixes depending on density and traffic which vary over time. Prefix analyzer, part of SDN controller, determines best prefixes, predictions by machine learning, and controls loading into the router prefix table, each network device, router, is distinctly controlled depending on the time interval of the subset of prefixes updates. ¶¶0029-0030; Fig. 2, 3), the predicted hot-prefixes into protected hardware accelerated portions of a plurality of routers in the second zone prior to the start of the future time interval (Two or more levels of separated memory in a router, faster and slower. ¶0004; Subset of prefixes, hot prefixes, at any given time are loaded by the SDN controller into various prefix tables of various routers containing a smaller memory space. ¶0030; Fast memories, e.g. TCAMs, ¶0026, with accelerated memory and speed, of the network device, router, are loaded with a subset of prefixes, predicted hot prefixes, into the forwarding table. A plurality of network device controllers and a plurality of network device forwarders, routers, exist in the network. ¶¶0005, 0028; Prefix list built on past traffic matrix based on time of day or week. In addition, the construction of the prefix list employs router locations for machine learning. SDN controller assigns a prefix set to network devices, routers, in different locations, time zone based on physical location. ¶0031), wherein the hot-prefixes in the protected hardware accelerated portions are protected from catastrophic network events (Controller memory of network device, router, contains routing information base, RIB, also known as the IP routing table stored in memory. Prefix table, forwarding information base FIB, stored in memory, e.g. SRAM or DRAM. ¶0027, Fig. 1: 105, 108; Network device forwarder equipped with fast, accelerated, memory such as TCAM. ¶0028, Fig. 1: 103; Network device stores the subset of prefixes, hot prefixes, in the routing table. ¶0037, Fig. 3; Network outages may cause issues in configuring single entire IP prefix forwarding list due to over allocation of memory resources. ¶0006; Two or more levels of memory in a router, faster and slower. ¶0004; Using a subset of prefixes, hot prefixes, is a smaller list saving on memory requirements while using fast memory, smaller and accelerated, prefix list. ¶¶0039-0040), and wherein traffic associated with the hot-prefixes in the protected hardware accelerated portions is routed in an accelerated manner and with a greater bandwidth than traffic associated with prefixes in other portions of the plurality of routers (Two or more levels of separated memory in a router, faster and slower. ¶0004; Subset of prefixes, hot prefixes, at any given time are loaded by the SDN controller into various prefix tables of various routers containing a smaller memory space. ¶0030; Smaller memory space to store subset of prefixes, hot prefixes, into an accelerated, TCAM, faster memory. ¶¶0028-0029; Faster memory prefix list is based on top most used prefixes, hot prefixes associated with the most traffic, greater bandwidth. ¶0039, Fig. 4; The non-hot prefixes, with lower traffic than hot prefix attributes, are stored in the slower memory given lower traffic usage, lower bandwidth, and not in the subset of prefixes, hot prefixes. ¶¶0038, 0040, Fig. 3, 5).
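Koral's two-level memory scheme as mapped above, i.e., top predicted hot-prefixes preloaded into a small fast table (e.g., TCAM) with the remainder in the larger, slower table, can be sketched as follows. This is an illustrative aid under assumed names and capacities, not Koral's actual algorithm.

```python
def partition(prefix_traffic, fast_capacity):
    """prefix_traffic: {prefix: predicted traffic volume};
    returns (fast, slow): the `fast_capacity` highest-traffic prefixes
    destined for the fast (hardware accelerated) table, and the rest
    destined for the slower, larger table."""
    # Rank prefixes by predicted traffic volume, highest first.
    ranked = sorted(prefix_traffic, key=prefix_traffic.get, reverse=True)
    return ranked[:fast_capacity], ranked[fast_capacity:]

traffic = {
    "203.0.113.0/24": 900,
    "198.51.100.0/24": 40,
    "192.0.2.0/24": 300,
}
fast, slow = partition(traffic, fast_capacity=2)
print(fast, slow)
```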
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions in different time zones that identifies congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, and with the teachings of Koral, preloading predicted prefixes into different levels of memory based on time-varying prefix usage attributes at each router in the network.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions, assigning prefixes according to high traffic usage and predicting future traffic loads. Under the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040)
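Dechene's proactive timing, i.e., pushing predicted entries to network devices before the future interval begins, can be sketched with simple epoch arithmetic. The lead time and function names here are hypothetical illustrations, not taken from any reference.

```python
def push_time(interval_start, lead_seconds=300):
    """Return the epoch time at which predicted hot-prefix entries
    should be transmitted, `lead_seconds` before the interval starts."""
    return interval_start - lead_seconds

def should_push(now, interval_start, lead_seconds=300):
    """True once the current time reaches the transmission point but
    still precedes the interval start by at most `lead_seconds`."""
    return now >= push_time(interval_start, lead_seconds)

interval_start = 1_000_000  # hypothetical epoch seconds
print(should_push(interval_start - 200, interval_start))  # within lead window
print(should_push(interval_start - 400, interval_start))  # too early
```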
RE claim 2, Lin, Zhang, Klinker and Dechene do not explicitly disclose:
The computer-implemented method, wherein the SDN controller preloads the predicted hot-prefixes into the protected hardware accelerated portions of the plurality of routers prior to the start of the future time interval.
However, Koral discloses:
The computer-implemented method, wherein the SDN controller preloads the predicted hot-prefixes into the protected hardware accelerated portions of the plurality of routers prior to the start of the future time interval (SDN controller manages and deploys subsets of prefixes, hot prefixes based on prefix and associated volume of traffic ¶¶0008-0010, to the network devices, routers. ¶0029; Best prefixes based on usage. Different routers will have different prefixes depending on density and traffic which vary over time. Prefix analyzer, part of SDN controller, determines best prefixes, predictions by machine learning, and controls loading into the router prefix table, each network device, router, is distinctly controlled depending on the time interval of the subset of prefixes updates. ¶¶0029-0030; Fig. 2, 3; Two or more levels of separated memory in a router, faster and slower. ¶0004; Subset of prefixes, hot prefixes, at any given time are loaded by the SDN controller into various prefix tables of various routers containing a smaller memory space. ¶0030; Fast memories, e.g. TCAMs, ¶0026, with accelerated memory and speed, of the network device, router, are loaded with a subset of prefixes, predicted hot prefixes, into the forwarding table. A plurality of network device controllers and a plurality of network device forwarders, routers, exist in the network. ¶¶0005, 0028; SDN controller assigns a prefix set to network devices, routers, in different locations, time zone based on physical location. ¶0031).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions in different time zones that identifies congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, and with the teachings of Koral, preloading predicted prefixes into different levels of memory based on time-varying prefix usage attributes at each router in the network.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions, assigning prefixes according to high traffic usage and predicting future traffic loads. Under the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040)
RE claims 9 and 17, Lin discloses an apparatus or computer-readable media:
obtaining, at a super controller and from a plurality of software-defined network (SDN) controllers that are in a plurality of time zones and are controlled by the super controller (Multi-layer hierarchical control architecture in an SDN with a Super Controller and Domain (Master) Controllers. Pg. 26, Ln 97-102; Note that, the interaction between super controller, domain controllers, and slave controllers fulfills global flow setup and responds to every control action, including actions for switches’ or controllers’ failures, migrations, load-balancing, etc. Pg. 27, Ln. 6-9, Fig. 2a; Towards this, the logically centralized control plane with global visibility is established by a physically distributed system. Pg. 27, Ln. 13-14; Each OF switch needs to send the control purpose traffic, such as the route setup requests for new flows and real-time network congestion status to the SDN controller. Pg. 26, Ln. 34-44; Examiner interpreted congestion flow status identified by IP address with wildcard usage as hot-prefixes. Deployment example in a real backbone system of Spring GIP network in Fig. 2b. Red spots are slave controllers with a group of switches underneath. Blue devices are domain controllers serving more than one slave controller. Green devices are Super Controllers to supervise the entire system. Pg. 27, Ln. 17-23, Fig. 2b; Examiner interpreted domain controller as a SDN controller and an SDN with physical distribution across multiple time zones.)
data associated with hot-prefixes in the plurality of time zones, the data associated with the hot-prefixes being obtained at each of the SDN controllers from one or more network devices in a time zone corresponding to the SDN controller (OpenFlow enabled Switches identify traffic flows in terms of IP addresses and further enhanced by wildcard usage. Each OF switch needs to send the control purpose traffic, such as the route setup requests for new flows and real-time network congestion status to the SDN controller. Pg. 26, Ln. 34-44; Examiner interpreted congestion flow status identified by IP address with wildcard usage as hot-prefixes. The domain (or master) controllers, having full accesses to switches, receive asynchronous messages (e.g., Packet-IN [4]) for flow-setup requests and are capable to modify switches’ states by sending control messages. Pg. 26-27, Ln. 107-004; Note that, the interaction between super controller and domain controllers fulfills global flow setup and responds to every control action, including actions for switches’ or controllers’ failures, migrations, load-balancing, etc. Pg. 27, Ln. 6-9; Examiner interpreted domain controller as a SDN controller per limitation.),
the hot-prefixes being requested during a plurality of time intervals in the plurality of time zones, the hot-prefixes being associated with network addresses that are frequently requested during the plurality of time intervals (Each OF switch needs to send the control purpose traffic, such as the route setup requests for new flows and real-time network congestion status to the SDN controller. Pg. 26, Ln. 34-44; Examiner interpreted congestion flow status identified by IP address with wildcard usage as hot-prefixes.; Deployment example in a real backbone system of Spring GIP network in Fig. 2b. Red spots are slave controllers with a group of switches underneath. Blue devices are domain controllers serving more than one slave controller. Green devices are Super Controllers to supervise the entire system. Pg. 27, Ln. 17-23, Fig. 2b; Examiner interpreted domain controller as a SDN controller and an SDN with physical distribution across multiple time zones.);
Lin does not explicitly disclose, however Zhang discloses:
a memory (¶0125);
a network interface configured to enable network communication (network interface cards, ¶0155, Fig. 7D: 776); and
a plurality of processors, wherein the plurality of processors are configured to perform operations comprising (one or more processors. ¶0155, Fig. 7D:776):
combining, by the super controller, the data associated with the hot-prefixes obtained from the plurality of SDN controllers in the plurality of time zones (Control plane device, super controller, performs joint inter-domain and intra-domain traffic engineering with location varying objectives. Embodiments can maximize link utilization of peering links in the interdomain and minimize link utilization internally to avoid congestion on hot prefix links. ¶¶0006, 0036, 0099; TE optimization process is a spatial multi-objective optimization, different locations such as time zones. ¶0046; A detailed traffic matrix can be estimated based on past traffic patterns. ¶0053; Virtual nodes with all address prefixes with similar QoS vector with continuous performance measurement of QoS metrics. ¶0060, Fig. 4C; Once the joint representation of the interdomain and intradomain traffic demand has been computed, then the process of determining a set of candidate paths for all source-destination (SD) pairs can be determined in the network using this joint representation (Block 603). ¶0124)
Lin and Zhang do not explicitly disclose, however Klinker discloses:
predicting, by the super controller and based on the combined data associated with the hot-prefixes (Correlator, Fig. 6, generates aggregate service level statistics for each group of flows which go to the same destination in the network or Internet, prefixes. Traffic flow characteristics (or traffic profiles) are then used for future statistical manipulation and flow prediction. ¶0094, Fig. 6; Controller of Fig. 2 and Fig. 8 includes Fig. 20, a super controller, which receives information of all routing updates in the network. Event scheduler shares info with Episode Manager to communicate routing changes to local route server. An episode occurs when the routing in place cannot achieve minimum service level to a given prefix. ¶¶0141-0142, Fig. 20;), prefixes that will become hot-prefixes during a future second time interval in a second time zone of the plurality of time zones to determine predicted hot-prefixes (If the prediction algorithms, such as neural networks, determine that a particular path in use will have poor performance over an upcoming period, the network control element (i.e., controller) can take proactive action to change the path before the upcoming service degradation. ¶0063; Embodiment for maintaining traffic service level over at least two networks where traffic flows through an interconnection point. A first regional network and second regional network with a central route server that provides modified routing tables to servers in each regional network. ¶0017; Parent central route service coupled to one or more child central route servers coupled to one or more regions. ¶0123, Fig. 13; Controller of Fig. 2 and Fig. 8 includes Fig. 20, which receives information of all routing updates in the network. Event scheduler shares info with Episode Manager to communicate routing changes to local route server. An episode occurs when the routing in place cannot achieve minimum service level to a given prefix. ¶¶0141-0142, Fig. 20;)
Lin, Zhang, and Klinker do not explicitly disclose, however Dechene discloses:
transmitting, by the super controller and to a SDN controller of the plurality of SDN controllers in the time zone, an indication of the predicted hot-prefixes and an indication of the future time interval (For example, two data centers in the western US may be their own independent first-order clusters of nodes and links. A second-order hierarchical cluster can include both of those western data center objects, while there is a different cluster for data centers from the eastern US, different time zones. ¶0104; Fig. 9; In a number of embodiments, decentralized network control system deployment model 1330 can utilize a central SDN controller 1331 and local SDN controllers 1332-1334. Central SDN controller 1331 can include a central agent and a monitor. Local SDN controllers 1332-1334 can be used locally within each domain of SDN nodes, such as local SDN controller 1332 for nodes 1335, local SDN controller 1333 for nodes 1337, and local SDN controller 1334 for nodes 1336. Central SDN controller 1331 still behaves as a central authority. ¶0141, Fig. 13; In several embodiments, routing within the SDN control service is performed based on flows defined by a source address, a destination address, and a datagram classification tuple. ¶0208; Proactive flow programming determines the entire path of nodes for the flow for the SDN control service. Based on previous route lookups recorded by the routing agent and historical traffic data captured by the monitor service, a prediction can be made as to which flows are most relevant to a node for a given time period. Network control system 315 (FIG. 3) can then program the predicted flow entries before they are requested. The predictive model also can be built from training system 320 (FIG. 3) using synthetic traffic flow data on the digital twin. Predictive entries can be selected from all available candidate entries based on their modeled frequency and criticality of use for a given time of operation. ¶¶0148-0149, Fig. 3);
transmitting, by the SDN controller in the time zone, the indication of the predicted hot-prefixes and the indication of the future time interval to a plurality of network devices configured to provide networking services in the time zone prior to a start of the future time interval in the time zone (For example, two data centers in the western US may be their own independent first-order clusters of nodes and links. A second-order hierarchical cluster can include both of those western data center objects, while there is a different cluster for data centers from the eastern US, different time zones. ¶0104; Fig. 9; In a number of embodiments, decentralized network control system deployment model 1330 can utilize a central SDN controller 1331 and local SDN controllers 1332-1334. Central SDN controller 1331 can include a central agent and a monitor. Local SDN controllers 1332-1334 can be used locally within each domain of SDN nodes, such as local SDN controller 1332 for nodes 1335, local SDN controller 1333 for nodes 1337, and local SDN controller 1334 for nodes 1336. Central SDN controller 1331 still behaves as a central authority. ¶0141, Fig. 13; In several embodiments, routing within the SDN control service is performed based on flows defined by a source address, a destination address, and a datagram classification tuple. ¶0208; Proactive flow programming determines the entire path of nodes for the flow for the SDN control service. Based on previous route lookups recorded by the routing agent and historical traffic data captured by the monitor service, a prediction can be made as to which flows are most relevant to a node for a given time period. Network control system 315 (FIG. 3) can then program the predicted flow entries before they are requested. The predictive model also can be built from training system 320 (FIG. 3) using synthetic traffic flow data on the digital twin. 
Predictive entries can be selected from all available candidate entries based on their modeled frequency and criticality of use for a given time of operation. ¶¶0148-0149, Fig. 3); and
Lin, Zhang, Klinker and Dechene do not explicitly disclose, however Koral discloses:
preloading, based on the indication of the predicted hot-prefixes and the indication of the future time interval (SDN controller manages and deploys subsets of prefixes, hot prefixes based on prefix and associated volume of traffic ¶¶0008-0010, to the network devices, routers. ¶0029; Best prefixes based on usage. Different routers will have different prefixes depending on density and traffic, which vary over time. Prefix analyzer, part of SDN controller, determines best prefixes, predictions by machine learning, and controls loading into the router prefix table; each network device, router, is distinctly controlled depending on the time interval of the subset of prefixes updates. ¶¶0029-0030; Fig. 2, 3), the predicted hot-prefixes into protected hardware accelerated portions of a plurality of routers in the second zone prior to the start of the future time interval (Two or more levels of separated memory in a router, faster and slower. ¶0004; Subset of prefixes, hot prefixes, at any given time are loaded by the SDN controller into various prefix tables of various routers containing a smaller memory space. ¶0030; Fast memories, e.g. TCAMs, ¶0026, with accelerated memory and speed, of the network device, router, are loaded with a subset of prefixes, predicted hot prefixes, into the forwarding table. A plurality of network device controllers and a plurality of network device forwarders, routers, exist in the network. ¶¶0005, 0028; Prefix list built on past traffic matrix based on time of day or week. In addition, the construction of the prefix list employs router locations for machine learning. SDN controller assigns a prefix set to network devices, routers, in different locations, time zone based on physical location. 
¶0031), wherein the hot-prefixes in the protected hardware accelerated portions are protected from catastrophic network events (Controller memory of network device, router, contains routing information base, RIB, also known as the IP routing table, stored in memory. Prefix table, forwarding information base, FIB, stored in memory, e.g. SRAM or DRAM. ¶0027, Fig. 1: 105, 108; Network device forwarder equipped with fast, accelerated, memory such as TCAM. ¶0028, Fig. 1: 103; Network device stores the subset of prefixes, hot prefixes, in the routing table. ¶0037, Fig. 3; Network outages may cause issues in configuring a single entire IP prefix forwarding list due to over-allocation of memory resources. ¶0006; Two or more levels of memory in a router, faster and slower. ¶0004; Using a subset of prefixes, hot prefixes, is a smaller list, saving on memory requirements while using a fast memory, smaller and accelerated, prefix list. ¶¶0039-0040), and wherein traffic associated with the hot-prefixes in the protected hardware accelerated portions is routed in an accelerated manner and with a greater bandwidth than traffic associated with prefixes in other portions of the plurality of routers (Two or more levels of separated memory in a router, faster and slower. ¶0004; Subset of prefixes, hot prefixes, at any given time are loaded by the SDN controller into various prefix tables of various routers containing a smaller memory space. ¶0030; Smaller memory space to store subset of prefixes, hot prefixes, in an accelerated, TCAM, faster memory. ¶¶0028-0029; Faster memory prefix list is based on top most-used prefixes, hot prefixes associated with the most traffic, greater bandwidth. ¶0039, Fig. 4; The non-hot prefixes, with lower traffic than hot-prefix attributes, are stored in the slower memory given lower traffic usage, lower bandwidth, and are not in the subset of prefixes, hot prefixes. ¶¶0038, 0040, Fig. 3, 5).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, and with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040)
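For illustration only, and not part of the record: the two-tier arrangement attributed to Koral above, in which a small fast tier (modeling TCAM) is preloaded with a subset of hot prefixes while the full table resides in slower memory, can be sketched as follows. All class, prefix, and interface names here are hypothetical.

```python
# Hypothetical sketch of a two-tier forwarding table: a small fast tier
# (modeling TCAM) preloaded with predicted hot prefixes, backed by a full
# prefix table in slower memory. Names and values are illustrative only.

class TwoTierFib:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}   # small, accelerated tier (TCAM-like)
        self.slow = {}   # full prefix table in slower memory

    def install(self, prefix, next_hop):
        """Install a prefix in the full (slow) table."""
        self.slow[prefix] = next_hop

    def preload_hot(self, hot_prefixes):
        """Preload predicted hot prefixes into the fast tier ahead of
        the future time interval, bounded by the fast tier's capacity."""
        self.fast = {p: self.slow[p]
                     for p in hot_prefixes[: self.fast_capacity]
                     if p in self.slow}

    def lookup(self, prefix):
        """Consult the fast tier first; fall back to the slow table."""
        return self.fast.get(prefix) or self.slow.get(prefix)

fib = TwoTierFib(fast_capacity=2)
fib.install("10.0.0.0/8", "eth0")
fib.install("192.168.0.0/16", "eth1")
fib.install("172.16.0.0/12", "eth2")
fib.preload_hot(["10.0.0.0/8", "172.16.0.0/12"])
print(fib.lookup("10.0.0.0/8"))  # served from the fast tier → eth0
```

Under this sketch, traffic for the preloaded subset is answered from the fast tier, while all other prefixes fall back to the slower full table.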
RE Claim 10, Lin, Zhang, Klinker and Dechene do not explicitly disclose, however Koral discloses:
The apparatus, wherein the SDN controller preloads the predicted hot-prefixes into the protected hardware accelerated portions of the plurality of routers prior to the start of the future time interval (SDN controller manages and deploys subsets of prefixes, hot prefixes based on prefix and associated volume of traffic ¶¶0008-0010, to the network devices, routers. ¶0029; Best prefixes based on usage. Different routers will have different prefixes depending on density and traffic, which vary over time. Prefix analyzer, part of SDN controller, determines best prefixes, predictions by machine learning, and controls loading into the router prefix table; each network device, router, is distinctly controlled depending on the time interval of the subset of prefixes updates. ¶¶0029-0030; Fig. 2, 3; Two or more levels of separated memory in a router, faster and slower. ¶0004; Subset of prefixes, hot prefixes, at any given time are loaded by the SDN controller into various prefix tables of various routers containing a smaller memory space. ¶0030; Fast memories, e.g. TCAMs, ¶0026, with accelerated memory and speed, of the network device, router, are loaded with a subset of prefixes, predicted hot prefixes, into the forwarding table. A plurality of network device controllers and a plurality of network device forwarders, routers, exist in the network. ¶¶0005, 0028; SDN controller assigns a prefix set to network devices, routers, in different locations, time zone based on physical location. ¶0031).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, and with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040).
RE Claim 18, Lin, Zhang, Klinker and Dechene do not explicitly disclose, however Koral discloses:
The one or more non-transitory computer readable storage media, wherein the SDN controller preloads the predicted hot-prefixes into the protected hardware accelerated portions of the plurality of routers prior to the start of the future time interval.
However, Koral discloses:
The one or more non-transitory computer readable storage media, wherein the SDN controller preloads the predicted hot-prefixes into the protected hardware accelerated portions of the plurality of routers prior to the start of the future time interval (SDN controller manages and deploys subsets of prefixes, hot prefixes based on prefix and associated volume of traffic ¶¶0008-0010, to the network devices, routers. ¶0029; Best prefixes based on usage. Different routers will have different prefixes depending on density and traffic, which vary over time. Prefix analyzer, part of SDN controller, determines best prefixes, predictions by machine learning, and controls loading into the router prefix table; each network device, router, is distinctly controlled depending on the time interval of the subset of prefixes updates. ¶¶0029-0030; Fig. 2, 3; Two or more levels of separated memory in a router, faster and slower. ¶0004; Subset of prefixes, hot prefixes, at any given time are loaded by the SDN controller into various prefix tables of various routers containing a smaller memory space. ¶0030; Fast memories, e.g. TCAMs, ¶0026, with accelerated memory and speed, of the network device, router, are loaded with a subset of prefixes, predicted hot prefixes, into the forwarding table. A plurality of network device controllers and a plurality of network device forwarders, routers, exist in the network. ¶¶0005, 0028; SDN controller assigns a prefix set to network devices, routers, in different locations, time zone based on physical location. ¶0031).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, and with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040).
Claims 4-6, 8, 12-14, 16, and 20-25 are rejected under 35 U.S.C. 103 as being unpatentable over Lin, in view of Zhang, in view of Klinker, in view of Dechene, in view of Koral, in view of Zilberman et al. (US 20210377131 A1, hereinafter “Zilberman”).
RE Claims 4, 12, Lin, Zhang, Klinker, Dechene and Koral do not explicitly disclose, however Zilberman discloses a method or apparatus:
wherein the hot-prefixes are identified by the one or more network devices as hot-prefixes that are at maximum usage (¶94, extract max overflow data) during the first time interval (¶32, data collected in multivariate time series) in the first time zone (¶12, “networks at multiple locations (spatially distributed interconnected content sources)” as time zones are spatially separated).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network, and with the teachings of Zilberman, determining hot-prefixes at maximum usage.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040; Zilberman: Abstract, ¶¶0007-0008, 0014, 0016, 0019)
RE Claim 20, Lin, Zhang, Klinker, Dechene and Koral do not explicitly disclose, however Zilberman discloses:
The one or more non-transitory computer readable storage media, wherein the hot-prefixes are identified by the one or more network devices as hot-prefixes that are at maximum usage (¶94, extract max overflow data) during a particular interval (¶32, data collected in multivariate time series; ¶12, “networks at multiple locations (spatially distributed interconnected content sources)” as time zones are spatially separated).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network, and with the teachings of Zilberman, determining hot-prefixes at maximum usage.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040; Zilberman: Abstract, ¶¶0007-0008, 0014, 0016, 0019)
RE Claims 5, 13, Lin, Zhang, Klinker, Dechene and Koral do not explicitly disclose, however Zilberman discloses a method or apparatus:
wherein the plurality of processors (¶40, processors executing learning models) are further configured to perform operations comprising:
analyzing data patterns associated with network communications with the hot-prefixes (¶30, monitoring volume of data and the traffic paths),
wherein predicting the prefixes that will become hot-prefixes further comprises predicting the prefixes that will become hot-prefixes based on analyzing the data patterns (¶34, overflow prediction by deep learning models).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network, and with the teachings of Zilberman, determining predicted hot-prefixes by monitoring traffic volume and an associated path over time for input to learning models.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040; Zilberman: Abstract, ¶¶0007-0008, 0014, 0016, 0019)
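For illustration only, and not part of the record: prediction of hot-prefixes from historical data patterns, as mapped above, can be sketched with a simple moving average standing in for the learning models the references describe. The function name, history values, and threshold below are hypothetical.

```python
# Hypothetical sketch: predict which prefixes will become hot in the next
# interval from per-interval historical traffic, using a moving average as
# a stand-in for a learned model. All values are illustrative only.

def predict_hot(history, threshold, window=3):
    """history: {prefix: [traffic volume per past interval, ...]}.
    A prefix is predicted hot if its recent average exceeds threshold."""
    predicted = []
    for prefix, series in history.items():
        recent = series[-window:]          # most recent intervals
        if sum(recent) / len(recent) > threshold:
            predicted.append(prefix)
    return predicted

history = {"10.0.0.0/8": [90, 95, 100], "192.168.0.0/16": [10, 12, 9]}
print(predict_hot(history, threshold=50))  # → ['10.0.0.0/8']
```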
RE Claims 6, 14, Lin, Zhang, Klinker, Dechene and Koral do not explicitly disclose, however Zilberman discloses a method or apparatus:
wherein the plurality of processors (¶40, processors executing learning models) are further configured to perform operations comprising:
adjusting configurations of a network associated with a network device of the one or more network devices based on analyzing the data patterns (¶¶78-79, “react to an overflow event before performance degrades”; ¶55, “give an alert to the ISPs sufficiently in advance about a situation of the Data Over Flow and the alternative channels that should be operated in this situation, thereby enabling the ISPs to select the content provider, through the channels of which to transmit the information.”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network, and with the teachings of Zilberman, determining predicted hot-prefixes by monitoring traffic volume and an associated path over time for input to learning models.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040; Zilberman: Abstract, ¶¶0007-0008, 0014, 0016, 0019)
RE Claims 8, 16, Lin, Zhang, Klinker, Dechene and Koral do not explicitly disclose, however Zilberman discloses a method or apparatus:
wherein the future time interval in the time zone corresponds to a time interval in another time zone during which the predicted hot-prefixes were hot-prefixes (¶14, high traffic at certain hours of day, ¶90, “At time T, given hourly sampled traffic overflow volume”, “Per handover prediction: The over-flow prone traffic amount at time T+h for a certain handover”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network, and with the teachings of Zilberman, determining hot-prefixes at maximum usage.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040; Zilberman: Abstract, ¶¶0007-0008, 0014, 0016, 0019)
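For illustration only, and not part of the record: the limitation of claims 8 and 16, where a future interval in one time zone corresponds to the interval during which the prefixes were hot in another zone, can be sketched as a wall-clock shift between zone offsets. The function name, dates, and offsets are hypothetical.

```python
# Hypothetical sketch: map a UTC interval that covered certain local hours
# in a source time zone to the later UTC interval covering the same local
# hours in a destination zone. Offsets and times are illustrative only.
from datetime import datetime, timedelta, timezone

def corresponding_interval(start_utc, duration, src_offset_h, dst_offset_h):
    """Shift a UTC interval so it spans the same local wall-clock hours in
    the destination zone as it did in the source zone."""
    shift = timedelta(hours=src_offset_h - dst_offset_h)
    return start_utc + shift, start_utc + shift + duration

# 18:00-20:00 local time in UTC-5 starts at 23:00 UTC; the same local
# hours in UTC-8 begin 3 hours later in UTC time.
start = datetime(2025, 10, 30, 23, 0, tzinfo=timezone.utc)
s, e = corresponding_interval(start, timedelta(hours=2), -5, -8)
print(s, e)
```

The shift is simply the difference between the two zone offsets, so prefixes observed hot during evening hours in one zone are predicted hot for the corresponding (later) UTC interval in a zone further west.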
RE Claims 21, 22, 23, Lin, Zhang, Klinker, Dechene and Koral do not explicitly disclose, however Zilberman discloses a method, apparatus or computer media:
wherein the data associated with the hot-prefixes include an indication (¶0035) of a time zone (¶12, “networks at multiple locations (spatially distributed interconnected content sources)” as time zones are spatially separated) and a time interval (¶32, data collected in multivariate time series) during which the hot-prefixes are hot-prefixes (¶14, high traffic at certain hours of day, ¶¶87, 90, “At time T, given hourly sampled traffic overflow volume”, “Per handover prediction: The over-flow prone traffic amount at time T+h for a certain handover”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network, and with the teachings of Zilberman, determining hot-prefixes at maximum usage.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040; Zilberman: Abstract, ¶¶0007-0008, 0014, 0016, 0019)
RE Claims 24, 25, Lin, Zhang, Klinker, Dechene and Koral do not explicitly disclose, however Zilberman discloses a method or apparatus:
wherein the hot-prefixes carry above a threshold percentage of traffic in a routing table (In order to obtain actionable insights from the traffic volume forecasting results, the problem of traffic overflow prediction was defined. A threshold was defined for each overflow-prone series, for which traffic volume above this threshold is considered an “overflow”, a hot prefix, and traffic volume below this threshold is not. ¶0133).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lin, an SDN comprising a Super Controller and SDN controllers across regions, different time zones, and identifying congestion in the network, with the teachings of Zhang, combining network traffic data for spatial multi-objective optimization, with the teachings of Klinker, predicting that a particular path will have poor performance in an upcoming time period, with the teachings of Dechene, proactively distributing the predicted flow programming before the next time interval, with the teachings of Koral, preloading predicted prefixes into different levels of memory, based on prefix usage attributes depending on time, to each router in a network, and with the teachings of Zilberman, determining hot-prefixes at maximum usage and establishing a dynamic or static threshold to define an overflow state.
The motivation for doing so would be to support an SDN built upon a master SDN controller managing multiple child SDN controllers to improve network traffic load balancing across regions by assigning prefixes based on high traffic usage for prediction of future traffic loads. In the prediction, high traffic loads are routed through higher-bandwidth routers to mitigate overflow conditions in the network. Separating prefixes into high- and low-usage lists and storing hot prefixes in faster but smaller memory, i.e., hardware-accelerated routers, further saves on recovery time, memory resources, and costs. (Lin: Abstract, Pg. 26 Ln. 90 to Pg. 27 Ln. 23, Fig. 2a, 2b; Zhang: Abstract, ¶¶0001-0004, 0005-0007, 0029-0031; Klinker: Abstract, ¶¶0001-0005, 0009-0012, 0013-0014; Dechene: Abstract, ¶¶0002-0004, 0059, 0137-0142; Koral: Abstract, ¶¶0004-0012, 0028-0030, 0038-0040; Zilberman: Abstract, ¶¶0007-0008, 0014, 0016, 0019)
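For illustration only, and not part of the record: the threshold-based definition mapped to claims 24 and 25, where a prefix is "hot" when it carries above a threshold percentage of the traffic in a routing table, can be sketched as follows. The function name, prefixes, volumes, and threshold are hypothetical.

```python
# Hypothetical sketch: classify a prefix as "hot" when its share of total
# routed traffic exceeds a threshold percentage. Values are illustrative.

def hot_prefixes(traffic_by_prefix, threshold_pct):
    """Return prefixes whose traffic share exceeds threshold_pct percent
    of the total traffic volume across the routing table."""
    total = sum(traffic_by_prefix.values())
    if total == 0:
        return []
    return [p for p, vol in traffic_by_prefix.items()
            if 100.0 * vol / total > threshold_pct]

traffic = {"10.0.0.0/8": 700, "192.168.0.0/16": 250, "172.16.0.0/12": 50}
print(hot_prefixes(traffic, threshold_pct=20.0))
# → ['10.0.0.0/8', '192.168.0.0/16'] (70% and 25% shares exceed 20%)
```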
Response to Arguments
Applicant’s arguments with respect to claims 1, 9, and 17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
E. Rapaport, I. Poese, P. Zilberman, O. Holschke and R. Puzis, "Spillover Today? Predicting Traffic Overflows on Private Peering of Major Content Providers," in IEEE Transactions on Network and Service Management, vol. 18, no. 4, pp. 4169-4182, Dec. 2021, doi: 10.1109/TNSM.2021.3126643. (Year: 2021) Downloaded 02/09/2026 via: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9609013
Intelligent Efficiency For Data Centres & Wide Area Networks, Report Prepared for IEA-4E EDNA, May 2019, retrieved 02/09/2026 via https://www.iea-4e.org/wp-content/uploads/publications/2019/05/A1b_-_DC_WAN_V1.0.pdf (Year: 2019)
Advait Abhay Dixit, Fang Hao, Sarit Mukherjee, T.V. Lakshman, and Ramana Kompella. 2014. ElastiCon: an elastic distributed SDN controller. In Proceedings of the tenth ACM/IEEE symposium on Architectures for networking and communications systems (ANCS '14). Association for Computing Machinery, New York (Year: 2014)
US 20150188837 A1 Djukic et al.
US 20150281066 A1 Koley et al.
US 20210314235 A1 Chandrashekar et al.
WO 2015154483 A1 Yu et al.
US 20150244617 A1 Nakil et al.
US 20150188767 A1 Li et al.
US 20150200859 A1 Li et al.
US 20200067851 A1 Yigit et al.
The above references disclose various aspects of SDNs and methods addressing intra-domain and inter-domain flow problems and solutions.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL A. LANGER whose telephone number is (703)756-1780. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm, Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nishant B. Divecha, can be reached at 1 (571) 270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAUL A. LANGER/Examiner, Art Unit 2419
/Nishant Divecha/Supervisory Patent Examiner, Art Unit 2419