DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to application number 18/940829, filed on November 7, 2024.
Claims 1–20 are pending.
Authorization for Internet Communications
The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03):
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Please note that the above statement can only be submitted via Central Fax (not Examiner's Fax), Regular postal mail, or EFS Web using PTO/SB/439.
Priority
The instant application claims the benefit of U.S. Provisional Application Number 63/596,950, filed on November 7, 2023, in accordance with 35 U.S.C. 111(b). The benefit claim to the provisional application is in accordance with 37 C.F.R. 1.78, such that the applicant is entitled to a priority date of November 7, 2023.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on November 7, 2024 was filed on the filing date of the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Double Patenting Analysis
The applicant has filed application 18/591960 (US 2024/0205094), which is co-pending with the instant application, names the assignee in common, and is directed to similar subject matter as the instant application. At this stage of examination, the instant application appears to claim only subject matter directed to an invention that is independent and distinct from that claimed in the co-pending application. Therefore, no non-statutory Double Patenting rejections have been applied. The applicant is required to maintain a clear line of demarcation between the applications during prosecution, as the Double Patenting analysis can be revisited if the claims of the instant application and the co-pending application converge toward claiming the same subject matter. The applicant may wish to proactively file a terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) to overcome possible future Double Patenting rejections.
35 USC § 101 Analysis – Judicial Exception
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The claimed invention is directed to statutory subject matter and is not rejected under 35 USC 101 on the basis of a judicial exception. The claimed subject matter is integrated into a practical application under Prong Two of the Step 2A analysis described in MPEP 2106.04(d). The claims are directed to non-abstract improvements in computer-related technology. A claim is non-statutory when it is directed to a judicial exception (e.g., one of mathematical concepts, mental processes, or certain methods of organizing human activity) without significantly more. The claimed invention is not directed to a judicial exception. Instead, the claimed invention is directed to a technological improvement for performing network management operations for traffic communicated with an application, based on application configurations or state received from an application orchestration system that collects application configuration and state information and uses that information to determine network actions or operations. The claimed invention recites executing a plurality of watchers that are configured to obtain different types of application configuration or state data: the steps include obtaining, using a first watcher, a first type of application configuration or state data from the application, and obtaining, using a second watcher, a second type of application configuration or state data from the application. The steps further include determining, using the first type of application configuration or state data, a first network operation to perform in the network, and determining, using the second type of application configuration or state data, a second network operation to perform in the network. Finally, the steps include causing the first network operation and the second network operation to be performed in the network such that a configuration and/or state of the network is modified.
The ordered combination of the elements and limitations binds the claimed invention to a specific and useful improvement in network management and the efficient use of resources during operations. Therefore, the claimed invention is statutory under 35 USC 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–20 are rejected under 35 U.S.C. 103 as being unpatentable over Rolia et al. (U.S. 2018/0097876 A1; herein referred to as Rolia) in view of Smith et al. (U.S. 2024/0205165 A1; herein referred to as Smith).
In regard to claim 1, Rolia teaches A network orchestrator (e.g. heavy node) that manages a network and executes an application watcher system (see Fig. 1, ¶ [0002] “ . . . FIG. 1 illustrates a schematic diagram of an example overlay network system including a heavy node with an application orchestrator and applications components with overlay network managers in accordance with an aspect of this disclosure. . . .”), the network orchestrator comprising:
one or more processors (see Fig. 7, ¶ [0051] “ . . . The processor platform 700 of the illustrated example of FIG. 7 includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by at least one integrated circuit, logic circuit, microprocessor or controller from any desired family or manufacturer. . . .”); and
one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors (see Fig. 7, ¶¶ [0057-0058] “ . . . [T]he processor platform 700 of the illustrated example also includes at least one mass storage device 728 for storing executable instructions (e.g., software) and/or data. Examples of such mass storage device(s) 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives. The coded instructions 732 of FIGS. 5 and/or 6 may be stored in the mass storage device 728, in the local memory 713 in the volatile memory 714, in the non-volatile memory 716, and/or on a removable tangible machine readable storage medium such as a CD or DVD. . . .”), cause the one or more processors to perform operations comprising:
executing a plurality of watchers (e.g. light nodes 130 as shown in Fig. 1), wherein each of the plurality of watchers are configured to obtain different types of application configuration (see Fig. 1 ¶ [0016] “ . . . the heavy node 102 is a computing device (e.g., a server, a computer, etc.) that hosts the example application components 110 (e.g., application services, such as, communication services, location services, financial services, database services, analytics services, etc.). In some examples, the heavy node 102 may be located on a vehicle (e.g., a car, a boat, an airplane, etc.). The heavy node 102 is in communication with a plurality of light nodes 130. The example light nodes 130 may be computing devices that utilize and/or provide data/information for the application components 110. The example light nodes 130 may be Internet of Things (IoT) devices, such as sensors, measurement devices, appliances, thermostats, security systems, etc. that may have limited processing power and/or resources relative to the heavy node 102 but are capable of communicating with the heavy node 102 and utilizing the application components 110 and/or providing information/data for the application components 110. The example light nodes 130 may be from a same or different entity (e.g., a vendor that provides the application service to a consumer, such as a user of the heavy node 102 and/or at least one of the light nodes 130 . . .”) from an application managed by an application orchestration system (e.g. Fig. 1 application orchestrator 120) (see ¶¶ [0017-0018] “ . . . the application orchestrator 120 and the application overlay network managers 112 of the application components 110 coordinate to enable the application components 110 to establish, join, and/or adjust overlay networks that enable sharing of data between the application components 110 and/or with application components of other heavy nodes of the heavy node network 104. 
Accordingly, an application overlay network may be dynamically controlled via the application orchestrator 120 and the overlay network managers 112. For example, in response to characteristics of an overlay network or changes in the overlay network (e.g., an application component joining or leaving, load or resources for heavy nodes and/or application components changing, etc.), or in response to characteristics of the heavy node network (e.g., a heavy or light node joining or leaving), the topology of an overlay network may be adjusted accordingly (e.g., by opening or closing communication tunnels between heavy nodes and/or light nodes and/or the application components 110 of the overlay network system 100). . . . in connection with FIGS. 2 and 3, the application orchestrator 120 and the overlay network managers 112 of the application components 110 manage configuration, adjustment, and shutdown of application overlay networks in the overlay network system 100 of FIG. 1. The example overlay networks may be used to share information among the application components based on the configuration and management of the overlay network by the application orchestrator 120 and the application overlay network managers 112 of the application components 110. . . .”) ;
obtaining, using a first watcher (one of the light nodes 130 from Fig. 1), a first type of application configuration (see ¶ [0036] “ . . . The application component 410A may then identify the application components 410B-410G and send an invite to the application components 410B-410G to join the overlay network of the application component 410A. In response to receiving the invite, each of the overlay network manager 412B-412F may determine, via the network decision manager 320, whether to join the overlay network of 410A. In the illustrated example of FIG. 4, the application components 410B, 410D, 410F accepted the invitation to join the network while the application components 410C, 410E, 410G declined the invitation to join the overlay network. Such decisions may be based on the characteristics of the requesting application component 410 (e.g., type, owner, etc.) or configuration of the overlay network in the request (e.g., performance requirements, communication requirements, number of members, environmental, etc.). Accordingly, upon accepting the invitations, the application components 410A, 410B, 410D, 410F may communicate directly with one another via the overlay network 402. For example, the application components 410A, 410B, 410D, 410F may provide data received from light nodes (e.g., the light nodes 130 of FIG. 1) utilizing or providing information to the application components 410A, 4106, 410D, 410F. As such, the application components 410A, 410B, 410D, 410F may have faster access to more data without necessarily referring to application components located in a cloud network or other distant network. The application components 410A, 410B, 410D, 410F may store, access, and utilize application data closer to the edge relative to a cloud network or cloud application service. The application component 410A may then terminate the application overlay network when the purpose for the interactions has been satisfied. 
The application component overlay network managers 412A, 412B, 412D, 412F then terminate the tunnels used for the communications. In this way application overlay networks are dynamic, secure, and application driven. . . .”) ;
obtaining, using a second watcher, a second type of application configuration (see Fig. 1, Fig. 4 and ¶¶ [0034-0036], which describe how components of the application orchestrator resident in the heavy node process information received from the light nodes that utilize different applications in the network);
determining, using the first type of application configuration (see Fig. 2. Fig. 5 ¶ [0043] “ . . . In some examples of the process 500 of FIG. 5, the policy manager 230 may identify characteristics of the overlay network (e.g., a service to be provided, a type of data to be shared, performance requirements, communication requirements etc.), and the policy manager may select the candidate application components based on the ability of the candidate application components to share data according to the characteristics of the overlay network. In some examples, the configuration information of block 530 indicates a topology of the overlay network and indicates the application components of the set of application components. The example changes of the environment of the overlay network in block 540 may comprise an application component of the set of application components shutting down (e.g., in response to the onboarding manager 220 determining that the application component is to be shutdown based on characteristics of the environment or the overlay network). In some examples, the changes of the environment of the overlay network in block 540 may include an application component of the set of application components being unable to meet performance requirements of the overlay network as specified by the characteristics of the overlay network in the request. In some examples of the process 500, the policy manager 230 may determine that a new application component is capable of joining the network based on characteristics of the overlay network in the request after the overlay network is created and the onboarding manager indicates to the requesting application component that the new application component is available to be added to the overlay network. 
In some such examples, the onboarding manager 220 and/or the policy manager 230 may receive an indication of whether the new application component is added to the overlay network (e.g., so that a registry for the overlay network managed by the policy manager 230 can correspondingly be updated). Additionally or alternatively, in the example process 500 of FIG. 5, the policy manager 230 may receive changes to a configuration of the overlay network in the configuration information, and the policy manager 330 may adjust policies for the overlay network based on the changes . . .”) ;
determining, using the second type of application configuration (see Fig. 2. Fig. 5 ¶ [0043] as shown above and also ¶¶ [0023-0024] “ . . . The example policy manager 230 of FIG. 2 manages policies for application components 110 of FIG. 1. For example, the policy manager 230 may include a policy administration point (PAP), policy decision point (PDP) and a policy information point (PIP). The example PAP may be used to define policies for the application components 110 and/or overlay networks of the application components 110. Different vendors and/or a DMC owner may specify polices for application components under their administration. The PDP may be used for overlay network management whenever an application overlay network event takes place in an overlay network. For example, when an application component 110 attempts to create or join an overlay network, a policy rule of the PDP may be used to verify or determine that the application component 110 is able to create or join the overlay network, respectively. Further, policies may include obligations. The example policy manager 230 supports attribute-based and role-based access control. Accordingly, situational attributes that may affect policy (e.g., performance, load, capacity, topology, etc.) may be used to control and configure an application overlay network. Furthermore, interfaces (instructions for interacting with the overlay network, such as, how often to send/receive data, what data to provide/retrieve, etc.) may be assigned or selected by application components and may be an attribute of policy. The PDP of the policy manager 230 may also be used for overlay network management when a related system event takes place. For example, when a light node 130 joins or departs a heavy node 102 a request may be made by an onboarding manager 220 to evaluate a policy to determine what obligations are associated with the policy associated with that event. 
In examples herein, when an event for management of an application overlay network takes place or a related system event takes place, the policy manager 230 evaluates a policy and may provide a number of obligations (e.g., performance objectives/requirements, communication objectives/protocols, etc.). The example obligations may include instructions for launching or shutting down new application services and/or changing configurations (e.g., topology, communication paths, communication protocol, etc.) of overlay networks . . .”) ; and
causing the first network operation and the second network operation to be performed in the network such that a configuration (see Fig. 5 ¶ [0042] “ . . . At block 530 the environment monitor 210 receives information on the environment of the overlay network from the requesting application component (e.g., the owner of the overlay network). The overlay network of block 530 includes the requesting application component and a set application components from list of the candidate application components. At block 540, the onboarding manager 540 instructs the requesting application component to change a configuration of the overlay network in response to changes in the environment of the overlay network or changes in characteristics of at least one of the set of application components that conflict with policies of any of the set of application components. After block 540, the example process 540 ends. . . .”).
Rolia fails to explicitly teach,
However, Smith teaches state data (e.g. state information) (e.g. the ECN of Smith corresponds to the light nodes described in Rolia) (see Smith ¶ [0163] “ . . . the processor may coordinate the offloading of the application or function from the original ECN to the identified alternative ECN in the external group. For example, the processor may initiate a series of automated steps to facilitate a smooth transition. This may include sending configuration details and specific requirements of the application to the alternative ECN, ensuring it is prepared to take over the application. The processor may also manage the synchronization of data and state information between the original and new ECNs to maintain continuity and minimize downtime during the transition. In addition, the processor may establish network routing changes to redirect traffic to the new ECN and update any relevant network policies or settings to support the newly offloaded application. This coordination may be done in a manner to ensure a seamless handover with minimal impact on the end-user experience and overall network performance. . . .”).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the applicant’s application to incorporate systems, devices, and methods for creating a versatile elastic edge compute system utilizing edge computing nodes (ECNs) where the ECNs may be configured to identify ECNs in the vEEC and their capabilities, determine resource requirements for one or more software applications or tasks within the vEEC system, dynamically scale network resources based on the determined requirements, resolve network congestion by redistributing tasks among ECNs based on network traffic analysis, implement failover to cloud resources for ECNs that face resource limitations, offload computational tasks from edge devices, monitor network performance and resource utilization for adjustments, and refine resource allocation models and system configurations based on feedback and performance metrics, as taught by Smith, into systems, devices, and methods for managing an application overlay network and configuring a platform using nodes where policies may be implemented, managed, and adjusted to govern access to data, participation in distributed analytics, and access to control interfaces, such that policies may be situationally dependent and adjustable based on conditions of an environment of the application components and/or the platform, as taught by Rolia. Such incorporation provides further application state details for dynamically optimizing the network based on current application needs.
In regard to claim 2, the combination of Rolia and Smith teaches wherein:
the first watcher is an ingress watcher that obtains an ingress traffic definition associated with ingress traffic of the application (see Smith Fig. 3C ¶ [0097] “ . . . FIG. 3C illustrates a network configuration in which multiple edge computing nodes (ECNs), specifically ECNs 306b and 306c, are interconnected through an ECN 306a, which operates as the master in the vEEC system. The master ECN 306a may connect with a variable number of ECNs, contingent upon its capabilities and the resources it can offer. The resources provided by the master ECN 306a may include Wide Area Network (WAN) connectivity, as well as policy routing for both ingress and egress traffic. In addition, the master ECN 306a may facilitate policy routing among the ECNs 306 and may have access to local images and applications, which other ECNs 306 may utilize through the vEEC agent . . .”);
the first type of application configuration or state data is an ingress traffic definition obtained by the ingress watcher and is associated with ingress traffic of the application (see Smith Fig. 3D ¶ [0098] “ . . . FIG. 3D illustrates a more distributed architecture vEEC architecture in which ECNs 306b, 306c and 306d each have one or more edge devices connected to them. This configuration exemplifies a distributed approach in the vEEC system. In the examples illustrated in FIG. 3D the ECNs 306 form a star network configuration in which each ECN 306b, 306c, 306d operating as a vEEC agent is connected directly to the vEEC master 306a. However other network topology configurations with ECNs in FIG. 3D are possible. The vEEC agents in FIG. 3D may operate as standalone nodes without direct connectivity to other ECN nodes, except for the master ECN 306a, which serves as the vEEC master. The vEEC master 306a may facilitate communication between ECN nodes 306b, 306c, 306d based on various factors such as policy, application, resiliency scheme, or other considerations. In addition, each ECN 306b, 306c, 306d operating as a vEEC agent may also be connected to a WAN or a cloud environment featuring a vEEC master 308 for orchestration. . . .”); and
causing the first network operation to be performed in the network includes causing the ingress traffic to be sent to the application via a networking path of the network that is optimized for sending the ingress traffic to the application (see Smith ¶ [0131] “ . . . the vEEC orchestrator may be configured to fully leverage the advantages of a distributed edge environment. This may include managing a large number of sites, edge devices, and enterprise-specific applications concurrently. It may also include providing visibility into the status and connectivity of devices, assigning network assets, deploying and configuring applications, and enforcing Quality of Service (QOS) and policies. The orchestrator may also be configured to dynamically adjust to changing network conditions, ensure failover and recovery resiliency, manage network and security configurations of edge devices, and support various network types such 4G/5G/6G/Wi-Fi. In addition, the vEEC orchestrator may be configured to optimize traffic delivery and support bring your own (BYO) certified applications and network hardware. . . .”).
The motivation to combine Smith with Rolia is described for the rejection of claim 1 and is incorporated herein. Additionally, Smith provides information regarding incoming traffic to a node influenced by application concerns.
In regard to claim 3, the combination of Rolia and Smith teaches wherein the ingress traffic definition includes at least one of a destination internet protocol (IP) address associated with the application (see Smith ¶ [0343] “ . . . the processor may scan the network to detect connected devices and ECNs. In some embodiments, scanning the network may include using network protocols including SNMP, ARP, ICMP, and others to identify devices across different network segments. For example, the processor may execute a series of commands to send requests to devices on the network, using network protocols (e.g., SNMP, ARP, ICMP, etc.) to gather data such as IP addresses, device types, and connectivity status. . . .”), a destination port (e.g. MAC address) associated with the application (see Smith ¶ [0344] “ . . . the processor may identify each detected device and ECN by retrieving their identification information including IP addresses, MAC addresses, device type, and other metadata. For example, the processor may parse the responses received from the network scan to extract and categorize the information. . . .”), a hostname associated with the application, or a uniform resource locator (URL) associated with the application (e.g. network topology) (see Smith ¶ [0345] “ . . . the processor may create and update a map of the network topology to reflect the interconnections between detected devices and ECNs. In some embodiments, the processor may use software tools to generate a map information structure that identifies how each device is connected and the pathways data takes across the network. In some embodiments, the processor may generate a network topology map that provides a visualization of how devices and ECNs are interconnected within the network. . . .”).
The motivation to combine Smith with Rolia is described for the rejection of claim 1 and is incorporated herein. Additionally, Smith collects further information of incoming traffic that can be used to update characteristics and capabilities of the network.
In regard to claim 4, the combination of Rolia and Smith teaches wherein:
the first watcher is an encryption watcher (see Smith ¶ [0204] “ . . . the vEEC Agent may initiate security parameters for the container. For example, the processor may configure security settings, such as authentication and encryption, within the container environment . . .”) that determines whether traffic communicated with the application requires encryption (see Smith ¶ [0271] “ . . . the processor may establish communication protocols between the vEEC server and the vEEC agents. For example, the processor may implement secure MQTT protocols for IoT devices communicating with the vEEC server, ensuring encrypted data transmission and authentication. In some embodiments, establishing communication protocols may include configuring network routes, encryption standards, and/or authentication mechanisms . . .”);
the first type of application configuration or state data is an encryption policy for the application indicating that the traffic requires encryption (see Smith ¶ [0271] “ . . . the processor may configure network routes to facilitate direct and efficient data transmission paths between the vEEC server and its associated agents, which may in turn reduce latency and ensure quick response (especially in applications that rely on real-time data processing). The processor may also implement robust encryption standards to secure the communication channels, set up authentication mechanisms to verify the identity of the vEEC agents communicating with the server, and tailor the communication protocols to accommodate the specific requirements of different edge devices and applications. For example, in a scenario involving a large number of IoT devices with varying data transmission needs, the processor may establish a combination of communication protocols (e.g., MQTT for devices requiring minimal bandwidth and HTTP for those engaged in more complex interactions). By establishing these communication protocols, the processor may help ensure that the vEEC server and its agents may communicate, collaborate, and operate in an efficient, secure, and harmonized manner. . . .”); and
causing the first network operation to be performed in the network includes: determining that traffic being communicated to the application is not encrypted (see Smith ¶ [0342] “ . . . the processor may also implement security measures to protect the data collected by the network discovery module and ensure compliance with privacy standards. For example, the processor may encrypt the data collected during the discovery process and implement access controls . . .”); and based on the traffic not being encrypted and on the encryption policy (see Smith ¶ [0371] “ . . . establishing the communication protocols between the vEEC server and the vEEC agents may include configuring network routes, encryption standards, and authentication mechanisms. . . .”), causing a network device in the network to encrypt the traffic (see Smith ¶ [0313] “ . . . the processor may enforce security measures for MQTT communication, including encryption and client authentication. For example, the processor may enforce security in MQTT communication by implementing SSL/TLS encryption for data transmission and requiring client authentication for access control. . . .”).
The motivation to combine Smith with Rolia is described for the rejection of claim 1 and is incorporated herein. Additionally, Smith deploys encryption services for application traffic if the application requires it.
In regard to claim 5, the combination of Rolia and Smith teaches wherein:
the application orchestration system manages different instances of the application at a first site and a second site that are remote from each other (see Smith ¶ [0062] “ . . . the methods may further include configuring the vEEC orchestrator to support compatibility with a range of networks and devices from different vendors to avoid vendor lock-in, using network-as-a-service (NaaS) technologies for deploying and configuring applications across geographically distributed edge devices, providing visibility into the status and connectivity of devices, and dynamically adjusting to changing network conditions . . .” see Smith ¶ [0091] “ . . . Multiple devices (e.g., ECNs 306, user devices 102, etc.) may be related to a particular edge application deployment. Some embodiments may establish “trusted domains” or “groupings” to manage these devices and the application they are running. For example, some embodiments may group all the devices that are related to a particular edge application deployment into the same group or trusted domain. The embodiments may allow all devices that are grouped into the same trusted domain to communicate and share data with each other securely and efficiently, without cumbersome verification or authentication procedures. These groups may allow an edge application to operate in a distributed manner at the network edge on the devices that are best equipped or best suited for the specific tasks to which they are assigned. . . .”) ;
the first watcher is a capacity watcher that obtains capacity data that indicates available amounts of capacity at the first site and the second site (see Smith ¶ [0058] “ . . . orchestrating network operations in a virtual Edge Enhanced Computing (vEEC) system. In some embodiments, the methods may include providing a vEEC server or vEEC Master (which may be located in the cloud, on-premise, or outside the cloud), enabling the vEEC server to function as the master server based on service policies, associating several vEEC agents with edge computing nodes (ECNs) and edge devices (EDs), facilitating the seamless transition of master server role among ECNs in the event of connectivity loss (e.g., based on an algorithm that evaluates the ECNs' resources and connectivity status, etc.), managing network resources within an integrated system (including computing, cloud services, storage, networking, and security), and dynamically scaling resources according to specific requirements (e.g., capacity, bandwidth, and latency). . . .”);
the first type of application configuration or state data is the capacity data (see Smith ¶ [0063] “ . . . ensuring failover and recovery resiliency, optimizing data and services in terms of capacity, latency, and delivery, and supporting adaptive monitoring, network and security configurations, traffic delivery optimization, and BYO-certified applications and network hardware. . . .”) ;
determining the first network operation to perform in the network includes determining to route traffic to the first site based on the first site having more available capacity as compared to the second site (see Smith ¶ [0065] “ . . . The processor may repeatedly or continuously monitor network performance and resource utilization, detect and address network congestion by redistributing resources among ECNs, implement failover protocols to transfer tasks to cloud resources in case of resource limitations at the ECN level, enhance application capabilities on edge devices with limited resources through compute distribution schemes, and offload tasks from edge devices to more powerful servers when needed. . . .”) ; and
causing the first network operation to be performed in the network includes causing the traffic to be sent to an instance of the application running at the first site (see Smith ¶¶ [0116 -0118] “ . . . the vEEC orchestrator may be configured to address congestion challenges in ECNs, which often arise from high demand and limited resources. Traditional edge computing methods struggle with adjusting computing resources or wireless capacity at the edge in real-time. The vEEC orchestrator allows for resource redistribution among ECNs in a heterogeneous environment, balancing loads across multiple nodes to mitigate individual node constraints . . . the vEEC orchestrator may be configured to enhance the application capabilities of edge devices constrained by limited resources. This is achieved by implementing a compute distribution scheme that allows the ECNs to surpass their inherent limitations and improve the diversity and quality of applications and user experiences . . . “; see Smith ¶ [0131] “ . . . the vEEC orchestrator may be configured to fully leverage the advantages of a distributed edge environment. This may include managing a large number of sites, edge devices, and enterprise-specific applications concurrently. It may also include providing visibility into the status and connectivity of devices, assigning network assets, deploying and configuring applications, and enforcing Quality of Service (QOS) and policies. The orchestrator may also be configured to dynamically adjust to changing network conditions, ensure failover and recovery resiliency, manage network and security configurations of edge devices, and support various network types such 4G/5G/6G/Wi-Fi. In addition, the vEEC orchestrator may be configured to optimize traffic delivery and support bring your own (BYO) certified applications and network hardware. . . .”).
The motivation to combine Smith with Rolia is described for the rejection of claim 1 and is incorporated herein. Additionally, Smith monitors the capacities of nodes at different geographical locations and distributes traffic for the distributed application to optimize performance.
In regard to claim 6, the combination of Rolia and Smith teaches further comprising sending, by a network state propagator of the application watcher system, updated state data to the application orchestration system such that the application orchestration system is apprised of the state of the network being modified (see Rolia ¶ [0047] “ . . . the orchestrator interface 310 may provide information on the configuration of the overlay network to the application orchestrator 120. For example, the overlay network manager 112 may send status updates providing the information on the configuration of the overlay network to the application orchestrator 120 periodically, such as every minute, every hour, etc. or aperiodically (e.g., any time a communication is sent between members of the overlay network, or a member joins or leaves the network). In some examples, after a configuration change occurs in the overlay network, the orchestrator interface 310 may indicate a configuration change to the application orchestrator. In some examples, the orchestrator interface 310 may receive (e.g., from the application orchestrator 120) an indication that a new application component is able to join the overlay network and the network decision manager 320 may determine whether to invite the new application component to join the overlay network based on characteristics of the overlay network. The network decision manager 320 may implement a PDP or interact with the policy manager 230 of the application orchestrator 120 to resolve its decision, which may indicate that the new application component is to be invited and the message manager 330 may send an invite to the new application component and the orchestrator interface 310 may indicate to the application orchestrator 120 whether the new application component joined the network based on whether the new application component accepted the invitation to join the overlay network. . . .”).
In regard to claim 7, the combination of Rolia and Smith teaches the operations further comprising: allocating a first amount of bandwidth of a physical underlay of the network for data flows associated with the application (see Smith ¶ [0246] “ . . . the processor may dynamically scale network resources based on the determined requirements, which may include adjusting capacity, bandwidth, and/or latency thresholds (or otherwise setting or modifying specific limits or parameters for capacity, bandwidth, and latency) that act as reference points or benchmarks that dictate how resources are allocated and managed within the network. For example, the processor may increase the bandwidth allocation for an ECN that is managing a sudden surge in video conferencing traffic during peak business hours. Such an adjustment may help ensure that the video conferencing application receives enough bandwidth to maintain high-quality video and audio streams without lag. As another example, the processor may increase the processing capacity of the ECN to handle the increased data flow (e.g., during peak activity periods such as rush hour, etc.) in response to determining that the ECN is tasked with processing real-time data from an array of IoT devices. As yet another example, the processor may prioritize and reconfigure network paths to reduce latency for tasks that require low latency. These dynamic adjustments may help ensure that the network resources are optimally utilized, that the performance requirements of different applications are met, that efficient and uninterrupted service delivery is maintained across the vEEC system, etc. . . .”),
wherein: the first watcher is a replica watcher that obtains replica data (e.g. excess data) that indicates a change in an amount of computing resources that are allocated to host the application (see Smith ¶ [0243] “ . . . the processor may establish communication links with cloud resources in block 1902. For example, the processor may set up secure and efficient data transmission channels to a cloud-based server for additional computational support or data storage. As an example, consider a network of ECNs deployed in infrastructure for traffic management and environmental monitoring. As part of the initialization operations in block 1902, the processor in each ECN may establish a connection to cloud services to offload excess data for long-term storage and to leverage cloud computing power for intensive data analysis tasks that are beyond the local processing capabilities of the ECNs. This connection may be important during events that generate large amounts of data (e.g., city-wide festivals, emergencies, etc.) for which local ECN resources might be insufficient. By establishing these cloud links, the processor may help ensure that the vEEC system and/or ECN network remains scalable, flexible, and capable of handling varying workloads. As a result, the processor may improve the overall efficiency and reliability of the vEEC system . . .”);
determining the first network operation to perform in the network includes determining, based at least in part on the replica data, a second amount of bandwidth of the physical underlay to allocate for the data flows (see Smith ¶ [0245] “ . . . the processor may assess resource requirements for one or more software applications or tasks within the vEEC system. For example, the processor may evaluate a video streaming application's need for high bandwidth and low latency to ensure uninterrupted service. This may include analyzing data traffic patterns, video resolution demands, and expected user count to determine the necessary network bandwidth and processing power. The processor may allocate more resources in response to determining that the application is expected to handle high-definition streaming for a large number of users. As another example, the processor may evaluate the computational power required to quickly process a large amount of sensor data. The processor may consider factors such as data ingestion rates, processing speed needed for real-time analysis, and the storage required for accumulating historical data. These assessments may allow the processor to allocate resources dynamically and ensure that each application or task within the vEEC system has the necessary computational power, storage, and network capacity to function correctly . . .”); and allocating the second amount of bandwidth of the physical underlay of the network for the data flows associated with the application (see Smith ¶¶ [0247-0249] “ . . . dynamically scaling network resources in block 1908 may include updating the resource allocations in real-time. For example, the processor may detect an increase in demand for video streaming services in a residential area managed by the ECN (e.g., due to a popular event being broadcast, etc.). 
In response, the processor may immediately allocate additional bandwidth and processing power to the ECNs handling this area (or redistribute resources from less critical tasks or other ECNs, etc.) to help ensure that users do not experience any degradation in streaming quality. These adjustments may be performed in real-time, allowing the system to adapt swiftly to the changing demands. In block 1910, the processor may resolve network congestion by redistributing tasks among ECNs based on network traffic analysis. For example, the processor may detect a bottleneck in data flow within a segment of a surveillance system due to multiple high-definition video feeds being processed simultaneously. To alleviate this congestion, the processor may redistribute some of the video processing tasks to adjacent ECNs that are currently underutilized, thereby balancing the load across the network and ensuring smooth video analysis. As another example, the processor may reroute some data processing tasks to other ECNs with more available capacity in response to determining that a group or cluster of ECNs in a healthcare network is experiencing heavy traffic due to numerous connected medical devices simultaneously sending patient data. This redistribution may ease the burden on the congested ECNs and prevent potential delays in critical data processing (e.g., real-time monitoring of patient vitals, etc.). In some embodiments, resolving the network congestion in block 1910 may include adjusting routing paths to alleviate traffic load. For example, the processor may identify a bottleneck in the data flow within an ECN network that manages communication between various smart devices in a home automation system. In response, the processor may reroute some of the data traffic through less congested paths to distribute the load more evenly across the network. Such rerouting may include prioritizing critical data (e.g., security alerts, etc.) 
over less urgent data (e.g., routine temperature readings, etc.). As another example, the processor could detect a congestion point impacting the timely delivery of critical health data in a hospital network in which multiple devices continuously transmit patient data. In response, the processor may adjust the routing paths within the ECN network (e.g., by creating dedicated paths or tunnels for high-priority data, etc.) to help ensure that vital patient information is transmitted efficiently without delay . . .”).
The motivation to combine Smith with Rolia is described for the rejection of claim 1 and is incorporated herein. Additionally, Smith determines the reallocation of resources for the network based on traffic indications.
In regard to claim 8, Rolia teaches A computer-implemented method for a network orchestrator (e.g. heavy node) to manage a network and execute an application watcher system (see Fig. 1, ¶ [0002] as described for the rejection of claim 1 and is incorporated herein) , the method comprising:
executing a plurality of watchers (e.g. light nodes 130 as shown in Fig. 1) , wherein each of the plurality of watchers are configured to obtain different types of application configuration (see Fig. 1 ¶ [0016] as described for the rejection of claim 1 and is incorporated herein) from an application managed by an application orchestration system (e.g. Fig. 1 application orchestrator 120) (see ¶¶ [0017-0018] as described for the rejection of claim 1 and is incorporated herein) ;
obtaining, using a first watcher (one of the light nodes 130 from Fig. 1), a first type of application configuration (see ¶ [0036] as described for the rejection of claim 1 and is incorporated herein);
obtaining, using a second watcher, a second type of application configuration (see Fig. 1, Fig. 4 and ¶¶ [0034-0036], which describe how components of the application orchestrator resident in the heavy node process information received from the light nodes that utilize different applications in the network);
determining, using the first type of application configuration, a first network operation to perform in the network (see Fig. 2, Fig. 5 ¶ [0043] as described for the rejection of claim 1 and is incorporated herein);
determining, using the second type of application configuration, a second network operation to perform in the network (see Fig. 2, Fig. 5 ¶ [0043] as shown above and also ¶¶ [0023-0024] as described for the rejection of claim 1 and is incorporated herein); and
causing the first network operation and the second network operation to be performed in the network such that a configuration or state of the network is modified (see Fig. 5 ¶ [0042] as described for the rejection of claim 1 and is incorporated herein).
Rolia fails to explicitly teach state data.
However, Smith teaches state data (e.g. state information) (e.g. the ECN of Smith corresponds to the light nodes described in Rolia) (see Smith ¶ [0163] as described for the rejection of claim 1 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 1 and is incorporated herein.
In regard to claim 9, the combination of Rolia and Smith teaches wherein:
the first watcher is an ingress watcher that obtains an ingress traffic definition associated with ingress traffic of the application (see Smith Fig. 3C ¶ [0097] as described for the rejection of claim 2 and is incorporated herein) ;
the first type of application configuration or state data is an ingress traffic definition obtained by the ingress watcher and is associated with ingress traffic of the application (see Smith Fig. 3D ¶ [0098] as described for the rejection of claim 2 and is incorporated herein) ; and
causing the first network operation to be performed in the network includes causing the ingress traffic to be sent to the application via a networking path of the network that is optimized for sending the ingress traffic to the application (see Smith ¶ [0131] as described for the rejection of claim 2 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 2 and is incorporated herein.
In regard to claim 10, the combination of Rolia and Smith teaches wherein the ingress traffic definition includes at least one of a destination internet protocol (IP) address associated with the application (see Smith ¶ [0343] as described for the rejection of claim 3 and is incorporated herein), a destination port (e.g. MAC address) associated with the application (see Smith ¶ [0344] as described for the rejection of claim 3 and is incorporated herein), a hostname associated with the application, or a uniform resource locator (URL) associated with the application (e.g. network topology) (see Smith ¶ [0345] as described for the rejection of claim 3 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 3 and is incorporated herein.
In regard to claim 11, the combination of Rolia and Smith teaches wherein:
the first watcher is an encryption watcher (see Smith ¶ [0204] as described for the rejection of claim 4 and is incorporated herein) that determines whether traffic communicated with the application requires encryption (see Smith ¶ [0271] as described for the rejection of claim 4 and is incorporated herein) ;
the first type of application configuration or state data is an encryption policy for the application indicating that the traffic requires encryption (see Smith ¶ [0271] as described for the rejection of claim 4 and is incorporated herein); and
causing the first network operation to be performed in the network includes: determining that traffic being communicated to the application is not encrypted (see Smith ¶ [0342] as described for the rejection of claim 4 and is incorporated herein); and based on the traffic not being encrypted and on the encryption policy (see Smith ¶ [0371] as described for the rejection of claim 4 and is incorporated herein), causing a network device in the network to encrypt the traffic (see Smith ¶ [0313] as described for the rejection of claim 4 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 4 and is incorporated herein.
In regard to claim 12, the combination of Rolia and Smith teaches wherein:
the application orchestration system manages different instances of the application at a first site and a second site that are remote from each other (see Smith ¶ [0062], ¶ [0091] as described for the rejection of claim 5 and is incorporated herein) ;
the first watcher is a capacity watcher that obtains capacity data that indicates available amounts of capacity at the first site and the second site (see Smith ¶ [0058] as described for the rejection of claim 5 and is incorporated herein) ;
the first type of application configuration or state data is the capacity data (see Smith ¶ [0063] as described for the rejection of claim 5 and is incorporated herein);
determining the first network operation to perform in the network includes determining to route traffic to the first site based on the first site having more available capacity as compared to the second site (see Smith ¶ [0065] as described for the rejection of claim 5 and is incorporated herein) ; and
causing the first network operation to be performed in the network includes causing the traffic to be sent to an instance of the application running at the first site (see Smith ¶¶ [0116 -0118] ¶ [0131] as described for the rejection of claim 5 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 5 and is incorporated herein.
In regard to claim 13, the combination of Rolia and Smith teaches further comprising sending, by a network state propagator of the application watcher system, updated state data to the application orchestration system such that the application orchestration system is apprised of the state of the network being modified (see Rolia ¶ [0047] as described for the rejection of claim 6 and is incorporated herein).
In regard to claim 14, the combination of Rolia and Smith teaches the operations further comprising: allocating a first amount of bandwidth of a physical underlay of the network for data flows associated with the application (see Smith ¶ [0246] as described for the rejection of claim 7 and is incorporated herein),
wherein: the first watcher is a replica watcher that obtains replica data (e.g. excess data) that indicates a change in an amount of computing resources that are allocated to host the application (see Smith ¶ [0243] as described for the rejection of claim 7 and is incorporated herein) ;
determining the first network operation to perform in the network includes determining, based at least in part on the replica data, a second amount of bandwidth of the physical underlay to allocate for the data flows (see Smith ¶ [0245] as described for the rejection of claim 7 and is incorporated herein) ; and
allocating the second amount of bandwidth of the physical underlay of the network for the data flows associated with the application (see Smith ¶¶ [0247-0249] as described for the rejection of claim 7 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 7 and is incorporated herein.
In regard to claim 15, Rolia teaches A system associated with a network orchestrator (e.g. heavy node) that manages a network and executes an application watcher system (see Fig. 1, ¶ [0002] as described for the rejection of claim 1 and is incorporated herein), the system comprising:
one or more processors (see Fig. 7, ¶ [0051] as described for the rejection of claim 1 and is incorporated herein) ; and
one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors (see Fig. 7, ¶¶ [0057-0058] as described for the rejection of claim 1 and is incorporated herein), cause the one or more processors to perform operations comprising:
executing a plurality of watchers (e.g. light nodes 130 as shown in Fig. 1) of the application watcher system, wherein each of the plurality of watchers are configured to obtain different types of application configuration (see Fig. 1 ¶ [0016] as described for the rejection of claim 1 and is incorporated herein) from an application managed by an application orchestration system (e.g. Fig. 1 application orchestrator 120) (see ¶¶ [0017-0018] as described for the rejection of claim 1 and is incorporated herein) ;
obtaining, using a watcher (one of the light nodes 130 from Fig. 1) , a type of application configuration (see ¶ [0036] as described for the rejection of claim 1 and is incorporated herein) ;
determining, using the type of application configuration, a network operation to perform in the network (see Fig. 2, Fig. 5 ¶ [0043] as described for the rejection of claim 1 and is incorporated herein); and
causing the network operation to be performed in the network such that a configuration or state of the network is modified (see Fig. 5 ¶ [0042] as described for the rejection of claim 1 and is incorporated herein).
Rolia fails to explicitly teach state data.
However, Smith teaches state data (e.g. state information) (e.g. the ECN of Smith corresponds to the light nodes described in Rolia) (see Smith ¶ [0163] as described for the rejection of claim 1 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 1 and is incorporated herein.
In regard to claim 16, the combination of Rolia and Smith teaches wherein:
the watcher is an ingress watcher that obtains an ingress traffic definition associated with ingress traffic of the application (see Smith Fig. 3C ¶ [0097] as described for the rejection of claim 2 and is incorporated herein) ;
the type of application configuration or state data is an ingress traffic definition obtained by the ingress watcher and is associated with ingress traffic of the application (see Smith Fig. 3D ¶ [0098] as described for the rejection of claim 2 and is incorporated herein) ; and
causing the network operation to be performed in the network includes causing the ingress traffic to be sent to the application via a networking path of the network that is optimized for sending the ingress traffic to the application (see Smith ¶ [0131] as described for the rejection of claim 2 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 2 and is incorporated herein.
In regard to claim 17, the combination of Rolia and Smith teaches wherein the ingress traffic definition includes at least one of a destination internet protocol (IP) address associated with the application (see Smith ¶ [0343] as described for the rejection of claim 3 and is incorporated herein) , a destination port (e.g. MAC address) associated with the application (see Smith ¶ [0344] as described for the rejection of claim 3 and is incorporated herein), a hostname associated with the application, or a uniform resource locator (URL) associated with the application (e.g. network topology) (see Smith ¶ [0345] as described for the rejection of claim 3 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 3 and is incorporated herein.
In regard to claim 18, the combination of Rolia and Smith teaches wherein:
the watcher is an encryption watcher (see Smith ¶ [0204] as described for the rejection of claim 4 and is incorporated herein) that determines whether traffic communicated with the application requires encryption (see Smith ¶ [0271] as described for the rejection of claim 4 and is incorporated herein) ;
the type of application configuration or state data is an encryption policy for the application indicating that the traffic requires encryption (see Smith ¶ [0271] as described for the rejection of claim 4 and is incorporated herein); and
causing the network operation to be performed in the network includes: determining that traffic being communicated to the application is not encrypted (see Smith ¶ [0342] as described for the rejection of claim 4 and is incorporated herein); and based on the traffic not being encrypted and on the encryption policy (see Smith ¶ [0371] as described for the rejection of claim 4 and is incorporated herein), causing a network device in the network to encrypt the traffic (see Smith ¶ [0313] as described for the rejection of claim 4 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 4 and is incorporated herein.
In regard to claim 19, the combination of Rolia and Smith teaches wherein:
the application orchestration system manages different instances of the application at a first site and a second site that are remote from each other (see Smith ¶ [0062], ¶ [0091] as described for the rejection of claim 5 and is incorporated herein);
the watcher is a capacity watcher that obtains capacity data that indicates available amounts of capacity at the first site and the second site (see Smith ¶ [0058] as described for the rejection of claim 5 and is incorporated herein) ;
the type of application configuration or state data is the capacity data (see Smith ¶ [0063] as described for the rejection of claim 5 and is incorporated herein) ;
determining the network operation to perform in the network includes determining to route traffic to the first site based on the first site having more available capacity as compared to the second site (see Smith ¶ [0065] as described for the rejection of claim 5 and is incorporated herein) ; and
causing the network operation to be performed in the network includes causing the traffic to be sent to an instance of the application running at the first site (see Smith ¶¶ [0116 -0118] ¶ [0131] as described for the rejection of claim 5 and is incorporated herein).
The motivation to combine Smith with Rolia is described for the rejection of claim 5 and is incorporated herein.
In regard to claim 20, the combination of Rolia and Smith teaches further comprising sending, by a network state propagator of the application watcher system, updated state data to the application orchestration system such that the application orchestration system is apprised of the state of the network being modified (see Rolia ¶ [0047] as described for the rejection of claim 6 and is incorporated herein).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. It is listed on the PTO-892 accompanying this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES N FIORILLO whose telephone number is (571)272-9909. The examiner can normally be reached Mon - Fri, 7:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John A. Follansbee can be reached on 571-272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES N FIORILLO/Primary Examiner, Art Unit 2444