DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to claims filed on 05/17/2023.
Claims 1-20 are pending.
Claim Objections
Claim 1 is objected to because of the following informalities: “inventory collectors that each collects inventory and configuration information from an underlying management system and publishes the collected information” should read “inventory collectors that each collect inventory and configuration information from an underlying management system and publish the collected information”. Appropriate correction is required.
Claims 7 and 18 are objected to because of the following informalities: “complex relationships that each represents a relationships between a pair of managed entities” should read “complex relationships that each represent a relationship between a pair of managed entities”. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: The claim recites the following limitation twice: “providing/provide an event-stream-system-implemented central data bus accessed by one or more of the multiple component microservices and one or more of the multiple streams/batch-processing components;”. Appropriate correction is required.
Claim 16 is objected to because of the following informalities: “the ICMDB” should read “the CICMDB”. Appropriate correction is required.
Claims 17 and 18 are objected to because of the following informalities: The claims are dependent from claim 1 and appear to be substantial duplicates of claims 3 and 7 respectively which are also dependent from claim 1. However, claims 17 and 18 appear after independent claim 16. Examiner will interpret claims 17 and 18 to depend from claim 16. Appropriate correction is required.
Claims 2-15 and 17-19 depend, directly or indirectly, from objected-to claims and do not resolve the deficiencies thereof, and are therefore objected to for at least the same reasons.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 7 and 18 recite the limitations "wherein the complex nodes are each one of a first type of complex node" in lines 4-5 and "wherein the complex relationships are each one of a first type of complex relationship". It is unclear whether these limitations mean that, among the complex nodes and complex relationships, there exists at least one of each of the recited types, or that each complex node and each complex relationship belongs to one of the recited types. For the sake of compact prosecution, Examiner will interpret these limitations to mean that, of the complex nodes and complex relationships, there is at least one of each of the recited types.
Claim 8 recites the limitation "and a general set of properties, or a reference to a general set of labels" in line 8. This limitation is unclear in the context of the preceding limitation: “a general set of labels, or a reference to a general set of labels”, because it is unclear whether these limitations are meant to refer to different general sets of labels. For the sake of compact prosecution, Examiner will interpret this to mean “and a general set of properties, or a reference to a general set of properties”.
Claims 8-15 depend, directly or indirectly, from rejected claims and do not resolve the deficiencies thereof and are therefore rejected for at least the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Datar (US 2023/0102572 A1) in view of Krishnan (US 2023/0138971 A1) and further in view of Tola (US 2020/0328885 A1).
With regard to claim 1, Datar teaches:
A meta-level management system that aggregates information contained in, and functionalities provided by, multiple underlying management systems, “In a complex system with multiple layers (also referred to herein as domains) in an infrastructure stack, there is a need for end-to-end visibility into alerts and issues. However, the full infrastructure stack includes multiple disparate layers of physical, virtualized, and clustered components. Each layer in the stack may provide visibility about the configuration of entities managed by that layer (underlying management system), but the data regarding each such layer may not provide a view of the full range of the stack. As a result, it is difficult to ascertain and evaluate the status of the full environment” [Datar ¶ 13]. “Examples described herein may provide for cross domain configuration extraction, topology mapping, and topology representation to provide application insights for layers of an infrastructure stack” [Datar ¶ 14].
the meta-level management system comprising: an MMS (schema) API that supports stitching; “Examples described herein may provide for stitching together a topology across the multiple disparate layers of an infrastructure stack, and providing a representation of the full topology for a user. The end-to-end topology may be across some or all layers such as the application, operating system (OS), virtualization, compute, and storage layers” [Datar ¶ 14]. “FIGS. 5A-5C illustrate example approaches for topology stitching. One or more approaches may be applied in performing topology stitching in end-to-end topology processing, such as topology stitching 360 performed by a backend server 350 as illustrated in FIG. 3” [Datar ¶ 42]. “The process 800 further proceeds with transforming the extracted data according to a schema at block 815, wherein the schema may include a schema that is agreed between a computing system and a backend server, such as schema 340 illustrated in FIG. 3. In some implementations, blocks 810 and 815 may be performed by a computing system (e.g., 300)” [Datar ¶ 83].
multiple stream/batch-processing components; “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60]. “As shown in FIG. 6, a topology stitching pipeline may include receiving configuration data from a first set of datastores at stage 600, wherein configuration values for each batch for VM, OS, and application are processed to identify latest values, such as by evaluating the relevant timestamps, and as a second set of datastores at stage 605 to identify the latest configuration values at stage 610” [Datar ¶ 74, Fig. 6].
multiple collectors that collect information and events “For example, the computing system 300 may utilize available collectors to obtain configuration values for each of the domains for the layers of the stack. Collectors may be open source, provided by vendors, or otherwise available, and may include, e.g., APIs (application programming interfaces), SDKs (software development kits), or other tools such as Prometheus or Telegraf)” [Datar ¶ 29]. “Configuration changes are identified (which may be performed by, for example, using a SQL lag function) to obtain a last value of a column (in, for example, SQL or Spark SQL), and comparing this last value with a new value. Such a configuration change could be any of the events (a)-(e) identified above” [Datar ¶ 59].
and input the collected information to the central data bus, “The multiple configuration collectors may be independently executable, with minimal dependency between entities or objects reported by a single collector” [Datar ¶ 39].
including inventory collectors that each collects inventory “The multiple configuration collectors (inventory collectors) may be independently executable, with minimal dependency between entities or objects reported by a single collector” [Datar ¶ 39]. “The DSL can be used to translate and categorize an event that has occurred to one of the below events: (a) New entity; (b) Deleted entity; (c) New property for existing entity; (d) Deleted property for existing entity; or (e) Changed property for existing entity. On premise collectors may not be capable of deriving the delta, and thus the events will need to be derived at the backend server 350 for example” [Datar ¶ 51-57].
and configuration information from an underlying management system “For example, the computing system 300 may utilize available collectors to obtain configuration values for each of the domains for the layers of the stack. Collectors may be open source, provided by vendors, or otherwise available, and may include, e.g., APIs (application programming interfaces), SDKs (software development kits), or other tools such as Prometheus or Telegraf)” [Datar ¶ 29].
a comprehensive, graph-database-based inventory-and-configuration-management database ("CICMDB"), “In terms of a graph database representation, the topology is a natural graph. Reported entities are reported as nodes (vertices) of the graph, where each vertex has a set of properties that get reported. Most of the "edges" (relationships) of the graph are derived based on these properties by the topology stitching algorithm” [Datar ¶ 78]. “The configuration files are then directed to a topology stitcher job 550, and to a graph database connector 580” [Datar ¶ 47].
that stores inventory and configuration information aggregated from the multiple underlying management systems; “The topology stitching 360 is performed by matching like attributes or properties in the configuration data from the different layers of the infrastructure stack to determine the interconnections between the layers and generate a full end-to-end view of the infrastructure stack. In one specific example, topology may be stitched together based on the domain knowledge of layers encapsulated in stitching metadata to create a relationship between a virtualization layer and an operating system layer if, for example, VM BIOS UUID from a virtualization layer collector is the same as host uuid from an OS collector. In this manner, the matching of attributes results in identifying sets of entities (which may be referred to as nodes) for which a relationship (which may also be referred to as an edge) should be created” [Datar ¶ 33].
and an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus and uses the collected information to update the inventory and configuration information stored by the CICMDB. “The generated topology of a configuration stack may be maintained in the form of a view or "snapshot" that represents the complete topology at a particular time. In the first approach, the complete topology is stitched every time new snapshots are received, with the prior topology graph being deleted and replaced with the new topology graph. Alternatively, the nodes and edges of the topology snapshots may be overwritten and merged with the existing topology graph. In this manner, a static view of a full end-to-end stack is presented” [Datar ¶ 48]. “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60]. “However, overwriting all relationships and nodes with a graph database may be very expensive in terms of processing. Further, topology changes may be infrequent, and thus the dynamic portion of the topology may be a small fraction of nodes and relationships for a system … As alternatives to the overwrite/merge approach to topology stitching, the following approaches allow incremental topology stitching” [Datar ¶ 49-50].
Datar fails to teach an MMS API and multiple component microservices, each providing a microservice API; an event-stream-system-implemented central data bus accessed by one or more of the multiple component microservices and a comprehensive, graph-database-based inventory-and-configuration-management database ("CICMDB"), accessed by one or more of the multiple component microservices and the multiple stream/batch-processing components.
However, Krishnan teaches:
an MMS API “In embodiments, GraphQL may comprise a query language for an application programming interface (API) and/or a server-side runtime service for executing queries using a type system and/or the like that may be defined for content to be sought” [Krishnan ¶ 36]. “For example, API-side joins may combine all of the fields for a particular entity, even if spread across multiple subgraphs, so that a single integrated result may be returned to the client” [Krishnan ¶ 66].
multiple component microservices, each providing a microservice API; “Also, a federated GraphQL approach may not in at least some circumstances offer a durable and/or persistent store of data itself, but rather may be layered on top of underlying network services (e.g., GraphQL APIs, REST APIs, and/or microservices) that may in turn use a database or other data store” [Krishnan ¶ 53]. “A federated GraphQL approach may be agnostic to the underlying database or microservice technologies used and may be used to create a unified graph layer on top of multiple underlying microservices (e.g., REST APIs, gRPC, etc.) that may in turn each use different database technologies” [Krishnan ¶ 53].
an event-stream-system-implemented central data bus “Processor (e.g., processing device) 1120 and memory 1122, which may comprise primary memory 1124 and secondary memory 1126, may communicate by way of a communication bus 1115, for example” [Krishnan ¶ 156].
accessed by one or more of the multiple component microservices “Devices, such as IoT-type devices, for example, may include computing resources embedded into hardware so as to facilitate and/or support a device's ability to acquire, collect, process and/or transmit content over one or more communications networks” [Krishnan ¶ 2].
a comprehensive, graph-database-based inventory-and-configuration-management database ("CICMDB"), accessed by one or more of the multiple component microservices and the multiple stream/batch-processing components, “Also, a federated GraphQL approach may not in at least some circumstances offer a durable and/or persistent store of data itself, but rather may be layered on top of underlying network services (e.g., GraphQL APIs, REST APIs, and/or microservices) that may in turn use a database or other data store… In this way, one may spread the data for an entity across multiple database tables and/or may join them together using a SQL query that may then be processed by a database query planning engine to create a query plan, execute it by fetching data from the underlying database tables on disk and/or collate and return the results to the client. In a similar way, a federated approach may allow one to spread the implementation of entity types in a graph across multiple subgraphs where a GraphQL gateway can process a query and/or join entity fields together by dynamically creating a query plan at runtime to advantageously (e.g., optimally) fetch the entity fields from the respective subgraph API servers using entity keys” [Krishnan ¶ 53].
Krishnan is considered to be analogous to the claimed invention because it is in the same field of database structures for information retrieval. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Datar to incorporate the teachings of Krishnan and include: an MMS API and multiple component microservices, each providing a microservice API; an event-stream-system-implemented central data bus accessed by one or more of the multiple component microservices; and a comprehensive, graph-database-based inventory-and-configuration-management database ("CICMDB"), accessed by one or more of the multiple component microservices and the multiple stream/batch-processing components. Doing so would allow for further optimization of the retrieval of information from different underlying management systems. “In a similar way, a federated approach may allow one to spread the implementation of entity types in a graph across multiple subgraphs where a GraphQL gateway can process a query and/or join entity fields together by dynamically creating a query plan at runtime to advantageously (e.g., optimally) fetch the entity fields from the respective subgraph API servers using entity keys” [Krishnan ¶ 53].
Datar in view of Krishnan fails to teach an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; and input the collected information to the central data bus, and an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus and uses the collected information to update.
However, Tola teaches:
an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; “The system of FIG. 1 may be configured to operate in connection with an event bus or equivalent bus-based command and control system” [Tola ¶ 90]. “The virtual security kernel may also comprise one or more data modules, such as an event manager module for applying policies or to translate device events into an appropriate format for processing” [Tola ¶ 103].
and input the collected information to the central data bus, “The LDC may be comprised of one or more databases, such as a domain database, a device agent, and any management platform, portal, or console sufficient to enable data interactions” [Tola ¶ 78]. “While each device agent is configured to immediately establish a secure connection with the associated LDC, the domain database of the LDC may also communicate and send updates (for example, via BUS commands) to other domain databases located outside the LDC, as shown in FIG. 1” [Tola ¶ 88].
and an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus and uses the collected information to update “In embodiments, the event bus is configured to send and receive data between local systems, or alternatively between known devices in a local system. In embodiments, an event manager distributes messages (command and control) to separately update the LDC database” [Tola ¶ 90].
Tola is considered to be analogous to the claimed invention because it is in the same field of database structures for information retrieval. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Datar in view of Krishnan to incorporate the teachings of Tola and include: an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; and input the collected information to the central data bus, and an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus and uses the collected information to update. Doing so would allow for communication to be sent between the systems of the managed entities. “In embodiments, the event bus is configured to send and receive data between local systems, or alternatively between known devices in a local system. In embodiments, an event manager distributes messages (command and control) to separately update the LDC database” [Tola ¶ 90].
With regard to claim 2, Datar in view of Krishnan in view of Tola teaches the meta-level management system of claim 1 as referenced above. Datar further teaches wherein each inventory collector collects inventory and configuration information from only one underlying management system. “Such collectors may be leveraged for each of the layers of the stack for which there is interest” [Datar ¶ 29]. “Examples described herein may include multiple modular, stack-layer-specific configuration collectors. In this manner, each configuration collector may operate consistently with the protocols for an associated or corresponding layer or layers” [Datar ¶ 38]. “… for example, VM BIOS UUID from a virtualization layer collector is the same as host uuid from an OS collector. In this manner, the matching of attributes results in identifying sets of entities (which may be referred to as nodes) for which a relationship (which may also be referred to as an edge) should be created” [Datar ¶ 33].
With regard to claim 3, Datar in view of Krishnan in view of Tola teaches the meta-level management system of claim 1 as referenced above. Datar further teaches wherein an inventory collector associates information, collected from the underlying management system regarding a particular managed entity known to and/or managed by the underlying management system from which the inventory collector collects inventory and configuration information, with an entity ID that uniquely identifies the managed entity and the underlying management system. “The configuration data includes identification of entities that are within a layer and attributes of these identified entities” [Datar ¶ 21]. “The stitching metadata 365 may include information regarding entities of the multiple domains and attributes of such entities and rules based on which attributes can be matched to formulate relationships” [Datar ¶ 36]. “… for example, VM BIOS UUID from a virtualization layer collector is the same as host uuid from an OS collector. In this manner, the matching of attributes results in identifying sets of entities (which may be referred to as nodes) for which a relationship (which may also be referred to as an edge) should be created” [Datar ¶ 33].
With regard to claim 16, Datar teaches:
A method that efficiently stores inventory and configuration information within a meta-level management system that aggregates information contained in, and functionalities provided by, multiple underlying management systems, the method comprising: “In a complex system with multiple layers (also referred to herein as domains) in an infrastructure stack, there is a need for end-to-end visibility into alerts and issues. However, the full infrastructure stack includes multiple disparate layers of physical, virtualized, and clustered components. Each layer in the stack may provide visibility about the configuration of entities managed by that layer (underlying management system), but the data regarding each such layer may not provide a view of the full range of the stack. As a result, it is difficult to ascertain and evaluate the status of the full environment” [Datar ¶ 13]. “Examples described herein may provide for cross domain configuration extraction, topology mapping, and topology representation to provide application insights for layers of an infrastructure stack” [Datar ¶ 14].
providing a comprehensive, graph-database-based inventory-and-configuration-management database ("CICMDB"); “In terms of a graph database representation, the topology is a natural graph. Reported entities are reported as nodes (vertices) of the graph, where each vertex has a set of properties that get reported. Most of the "edges" (relationships) of the graph are derived based on these properties by the topology stitching algorithm” [Datar ¶ 78]. “The configuration files are then directed to a topology stitcher job 550, and to a graph database connector 580” [Datar ¶ 47].
providing multiple stream/batch-processing components; “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60]. “As shown in FIG. 6, a topology stitching pipeline may include receiving configuration data from a first set of datastores at stage 600, wherein configuration values for each batch for VM, OS, and application are processed to identify latest values, such as by evaluating the relevant timestamps, and as a second set of datastores at stage 605 to identify the latest configuration values at stage 610” [Datar ¶ 74, Fig. 6].
for each underlying management system, launching and initializing an inventory collector that collects inventory and configuration information from the underlying management system “For example, the computing system 300 may utilize available collectors to obtain configuration values for each of the domains for the layers of the stack. Collectors may be open source, provided by vendors, or otherwise available, and may include, e.g., APIs (application programming interfaces), SDKs (software development kits), or other tools such as Prometheus or Telegraf)” [Datar ¶ 29]. “Configuration changes are identified (which may be performed by, for example, using a SQL lag function) to obtain a last value of a column (in, for example, SQL or Spark SQL), and comparing this last value with a new value. Such a configuration change could be any of the events (a)-(e) identified above” [Datar ¶ 59]. “The multiple configuration collectors (inventory collectors) may be independently executable, with minimal dependency between entities or objects reported by a single collector” [Datar ¶ 39]. “The DSL can be used to translate and categorize an event that has occurred to one of the below events: (a) New entity; (b) Deleted entity; (c) New property for existing entity; (d) Deleted property for existing entity; or (e) Changed property for existing entity. On premise collectors may not be capable of deriving the delta, and thus the events will need to be derived at the backend server 350 for example” [Datar ¶ 51-57].
and launching and initializing an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information “Overwrite/Merge-In a first approach, FIG. 5A illustrates topology stitching including an overwrite or merge operation. As illustrated, configuration files are received (such as from a computing system 300) for an infrastructure stack including multiple domains, such as the illustrated application configuration 502, OS configuration 504, virtualization configuration 506, compute configuration 508, and storage configuration 509. The configuration files are then directed to a topology stitcher job 550, and to a graph database connector 580” [Datar ¶ 47]. “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60].
and uses the collected information to update the inventory and configuration information stored by the ICMDB. “The generated topology of a configuration stack may be maintained in the form of a view or "snapshot" that represents the complete topology at a particular time. In the first approach, the complete topology is stitched every time new snapshots are received, with the prior topology graph being deleted and replaced with the new topology graph. Alternatively, the nodes and edges of the topology snapshots may be overwritten and merged with the existing topology graph. In this manner, a static view of a full end-to-end stack is presented” [Datar ¶ 48]. “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60]. “However, overwriting all relationships and nodes with a graph database may be very expensive in terms of processing. Further, topology changes may be infrequent, and thus the dynamic portion of the topology may be a small fraction of nodes and relationships for a system … As alternatives to the overwrite/merge approach to topology stitching, the following approaches allow incremental topology stitching” [Datar ¶ 49-50].
Datar fails to teach providing multiple component microservices, each providing a microservice API; providing an event-stream-system-implemented central data bus accessed by one or more of the multiple component microservices.
However, Krishnan teaches:
providing multiple component microservices, each providing a microservice API; “Also, a federated GraphQL approach may not in at least some circumstances offer a durable and/or persistent store of data itself, but rather may be layered on top of underlying network services (e.g., GraphQL APIs, REST APIs, and/or microservices) that may in turn use a database or other data store” [Krishnan ¶ 53]. “A federated GraphQL approach may be agnostic to the underlying database or microservice technologies used and may be used to create a unified graph layer on top of multiple underlying microservices (e.g., REST APIs, gRPC, etc.) that may in turn each use different database technologies” [Krishnan ¶ 53].
providing an event-stream-system-implemented central data bus “Processor (e.g., processing device) 1120 and memory 1122, which may comprise primary memory 1124 and secondary memory 1126, may communicate by way of a communication bus 1115, for example” [Krishnan ¶ 156].
accessed by one or more of the multiple component microservices “Devices, such as IoT-type devices, for example, may include computing resources embedded into hardware so as to facilitate and/or support a device's ability to acquire, collect, process and/or transmit content over one or more communications networks” [Krishnan ¶ 2].
Krishnan is considered to be analogous to the claimed invention because it is in the same field of database structures for information retrieval. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Datar to incorporate the teachings of Krishnan and include: providing multiple component microservices, each providing a microservice API; and providing an event-stream-system-implemented central data bus accessed by one or more of the multiple component microservices. Doing so would allow for further optimization of the retrieval of information from different underlying management systems. “In a similar way, a federated approach may allow one to spread the implementation of entity types in a graph across multiple subgraphs where a GraphQL gateway can process a query and/or join entity fields together by dynamically creating a query plan at runtime to advantageously (e.g., optimally) fetch the entity fields from the respective subgraph API servers using entity keys” [Krishnan ¶ 53].
Datar in view of Krishnan fails to teach an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; for each underlying management system, launching and initializing an inventory collector … and publishes the collected information to the central data bus; and an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus.
However, Tola teaches:
an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; “The system of FIG. 1 may be configured to operate in connection with an event bus or equivalent bus-based command and control system” [Tola ¶ 90]. “The virtual security kernel may also comprise one or more data modules, such as an event manager module for applying policies or to translate device events into an appropriate format for processing” [Tola ¶ 103].
for each underlying management system, launching and initializing an inventory collector “Applicant's system preferably starts by placing a small software agent on every device and may be comprised of any combination of internal/external Operating System (OS) software as well as circuit board/hardware component features depending on the embodiment” [Tola ¶ 72]. “In one embodiment, the device agent is installed as a driver or otherwise as a no-UI application. This allows the device agent to acquire and process packets outside of an operating system, and preferably in a low-level application” [Tola ¶ 86].
and publishes the collected information to the central data bus; “The LDC may be comprised of one or more databases, such as a domain database, a device agent, and any management platform, portal, or console sufficient to enable data interactions” [Tola ¶ 78]. “While each device agent is configured to immediately establish a secure connection with the associated LDC, the domain database of the LDC may also communicate and send updates (for example, via BUS commands) to other domain databases located outside the LDC, as shown in FIG. 1” [Tola ¶ 88].
an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus “In embodiments, the event bus is configured to send and receive data between local systems, or alternatively between known devices in a local system. In embodiments, an event manager distributes messages (command and control) to separately update the LDC database” [Tola ¶ 90].
Tola is considered to be analogous art to the claimed invention because it is in the same field of database structures for information retrieval. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Datar in view of Krishnan to incorporate the teachings of Tola and include: an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; for each underlying management system, launching and initializing an inventory collector … and publishes the collected information to the central data bus; and an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus. Doing so would allow for communication between the systems of the managed entities. “In embodiments, the event bus is configured to send and receive data between local systems, or alternatively between known devices in a local system. In embodiments, an event manager distributes messages (command and control) to separately update the LDC database” [Tola ¶ 90].
With regard to claim 17, Datar in view of Krishnan in view of Tola teaches the method of claim 1 as referenced above. Datar further teaches associating, by each inventory collector, information, collected from the underlying management system regarding a particular managed entity known to and/or managed by the underlying management system from which the inventory collector collects inventory and configuration information, with an entity ID that uniquely identifies the managed entity and the underlying management system. “The configuration data includes identification of entities that are within a layer and attributes of these identified entities” [Datar ¶ 21]. “The stitching metadata 365 may include information regarding entities of the multiple domains and attributes of such entities and rules based on which attributes can be matched to formulate relationships” [Datar ¶ 36]. “… for example, VM BIOS UUID from a virtualization layer collector is the same as host uuid from an OS collector. In this manner, the matching of attributes results in identifying sets of entities (which may be referred to as nodes) for which a relationship (which may also be referred to as an edge) should be created” [Datar ¶ 33].
With regard to claim 20, Datar teaches:
A data-storage device that stores processor instructions that, when executed by one or more processors of a meta-level management system, controls the meta-level management system to: “Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium, such as a non-transitory machine-readable medium, including instructions that, when performed by a machine, cause the machine to perform acts of the method, or of an apparatus or system for facilitating operations according to examples described herein” [Datar ¶ 90]. “In a complex system with multiple layers (also referred to herein as domains) in an infrastructure stack, there is a need for end-to-end visibility into alerts and issues. However, the full infrastructure stack includes multiple disparate layers of physical, virtualized, and clustered components. Each layer in the stack may provide visibility about the configuration of entities managed by that layer (underlying management system), but the data regarding each such layer may not provide a view of the full range of the stack. As a result, it is difficult to ascertain and evaluate the status of the full environment” [Datar ¶ 13]. “Examples described herein may provide for cross domain configuration extraction, topology mapping, and topology representation to provide application insights for layers of an infrastructure stack” [Datar ¶ 14].
provide a single, comprehensive, graph-database-based inventory-and-configuration-management database ("CICMDB"); “In terms of a graph database representation, the topology is a natural graph. Reported entities are reported as nodes (vertices) of the graph, where each vertex has a set of properties that get reported. Most of the "edges" (relationships) of the graph are derived based on these properties by the topology stitching algorithm” [Datar ¶ 78]. “The configuration files are then directed to a topology stitcher job 550, and to a graph database connector 580” [Datar ¶ 47].
provide multiple stream/batch-processing components; “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60]. “As shown in FIG. 6, a topology stitching pipeline may include receiving configuration data from a first set of datastores at stage 600, wherein configuration values for each batch for VM, OS, and application are processed to identify latest values, such as by evaluating the relevant timestamps, and as a second set of datastores at stage 605 to identify the latest configuration values at stage 610” [Datar ¶ 74, Fig. 6].
for each underlying management system, launch and initialize an inventory collector that collects inventory and configuration information from the underlying management system “For example, the computing system 300 may utilize available collectors to obtain configuration values for each of the domains for the layers of the stack. Collectors may be open source, provided by vendors, or otherwise available, and may include, e.g., APIs (application programming interfaces), SDKs (software development kits), or other tools such as Prometheus or Telegraf)” [Datar ¶ 29]. “Configuration changes are identified (which may be performed by, for example, using a SQL lag function) to obtain a last value of a column (in, for example, SQL or Spark SQL), and comparing this last value with a new value. Such a configuration change could be any of the events (a)-(e) identified above” [Datar ¶ 59]. “The multiple configuration collectors (inventory collectors) may be independently executable, with minimal dependency between entities or objects reported by a single collector” [Datar ¶ 39]. “The DSL can be used to translate and categorize an event that has occurred to one of the below events: (a) New entity; (b) Deleted entity; (c) New property for existing entity; (d) Deleted property for existing entity; or (e) Changed property for existing entity. On premise collectors may not be capable of deriving the delta, and thus the events will need to be derived at the backend server 350 for example” [Datar ¶ 51-57].
and launch and initialize an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information “Overwrite/Merge-In a first approach, FIG. 5A illustrates topology stitching including an overwrite or merge operation. As illustrated, configuration files are received (such as from a computing system 300) for an infrastructure stack including multiple domains, such as the illustrated application configuration 502, OS configuration 504, virtualization configuration 506, compute configuration 508, and storage configuration 509. The configuration files are then directed to a topology stitcher job 550, and to a graph database connector 580” [Datar ¶ 47]. “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/ update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60].
and uses the collected information to update the inventory and configuration information stored by the CICMDB. “The generated topology of a configuration stack may be maintained in the form of a view or "snapshot" that represents the complete topology at a particular time. In the first approach, the complete topology is stitched every time new snapshots are received, with the prior topology graph being deleted and replaced with the new topology graph. Alternatively, the nodes and edges of the topology snapshots may be overwritten and merged with the existing topology graph. In this manner, a static view of a full end-to-end stack is presented” [Datar ¶ 48]. “The delta configuration streams together with a last snapshot 518 are received by the topology stitcher job 550. For the delta configuration, the topology stitcher job 550 presents a node add/update stream 562, a relationship add/ update stream 564, and a combination (or move) add/update stream 566 to the graph database connector 580” [Datar ¶ 60]. “However, overwriting all relationships and nodes with a graph database may be very expensive in terms of processing. Further, topology changes may be infrequent, and thus the dynamic portion of the topology may be a small fraction of nodes and relationships for a system … As alternatives to the overwrite/merge approach to topology stitching, the following approaches allow incremental topology stitching” [Datar ¶ 49-50].
Datar fails to teach: provide multiple component microservices, each providing a microservice API; provide an event-stream-system-implemented central data bus accessed by one or more of the multiple component microservices.
However, Krishnan teaches:
provide multiple component microservices, each providing a microservice API; “Also, a federated GraphQL approach may not in at least some circumstances offer a durable and/or persistent store of data itself, but rather may be layered on top of underlying network services (e.g., GraphQL APIs, REST APIs, and/or microservices) that may in turn use a database or other data store” [Krishnan ¶ 53]. “A federated GraphQL approach may be agnostic to the underlying database or microservice technologies used and may be used to create a unified graph layer on top of multiple underlying microservices (e.g., REST APIs, gRPC, etc.) that may in turn each use different database technologies” [Krishnan ¶ 53].
provide an event-stream-system-implemented central data bus “Processor (e.g., processing device) 1120 and memory 1122, which may comprise primary memory 1124 and secondary memory 1126, may communicate by way of a communication bus 1115, for example” [Krishnan ¶ 156].
accessed by one or more of the multiple component microservices “Devices, such as IoT-type devices, for example, may include computing resources embedded into hardware so as to facilitate and/or support a device's ability to acquire, collect, process and/or transmit content over one or more communications networks” [Krishnan ¶ 2].
Krishnan is considered to be analogous art to the claimed invention because it is in the same field of database structures for information retrieval. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Datar to incorporate the teachings of Krishnan and include: provide multiple component microservices, each providing a microservice API; provide an event-stream-system-implemented central data bus accessed by one or more of the multiple component microservices. Doing so would allow for further optimization of the retrieval of information from different underlying management systems. “In a similar way, a federated approach may allow one to spread the implementation of entity types in a graph across multiple subgraphs where a GraphQL gateway can process a query and/or join entity fields together by dynamically creating a query plan at runtime to advantageously (e.g., optimally) fetch the entity fields from the respective subgraph API servers using entity keys” [Krishnan ¶ 53].
Datar in view of Krishnan fails to teach an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; for each underlying management system, launch and initialize an inventory collector … and publishes the collected information to the central data bus; and an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus.
However, Tola teaches:
an event-stream-system-implemented central data bus accessed by … and one or more of the multiple streams/batch-processing components; “The system of FIG. 1 may be configured to operate in connection with an event bus or equivalent bus-based command and control system” [Tola ¶ 90]. “The virtual security kernel may also comprise one or more data modules, such as an event manager module for applying policies or to translate device events into an appropriate format for processing” [Tola ¶ 103].
for each underlying management system, launch and initialize an inventory collector “Applicant's system preferably starts by placing a small software agent on every device and may be comprised of any combination of internal/external Operating System (OS) software as well as circuit board/hardware component features depending on the embodiment” [Tola ¶ 72]. “In one embodiment, the device agent is installed as a driver or otherwise as a no-UI application. This allows the device agent to acquire and process packets outside of an operating system, and preferably in a low-level application” [Tola ¶ 86].
and publishes the collected information to the central data bus; “The LDC may be comprised of one or more databases, such as a domain database, a device agent, and any management platform, portal, or console sufficient to enable data interactions” [Tola ¶ 78]. “While each device agent is configured to immediately establish a secure connection with the associated LDC, the domain database of the LDC may also communicate and send updates (for example, via BUS commands) to other domain databases located outside the LDC, as shown in FIG. 1” [Tola ¶ 88].
an inventory-ingest stream/batch-processing component that receives collected inventory and configuration information from the central data bus “In embodiments, the event bus is configured to send and receive data between local systems, or alternatively between known devices in a local system. In embodiments, an event manager distributes messages (command and control) to separately update the LDC database” [Tola ¶ 90].
Tola is considered to be analogous to the claimed invention because it is in the same field of database structures for