DETAILED ACTION
1. Claims 1-20 are pending in this application.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §102 and §103 (or as subject to pre-AIA 35 U.S.C. §102 and §103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
3. The information disclosure statement filed 11/20/2025 is in compliance with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609. It has been placed in the application file and the information referred to therein has been considered as to the merits.
Allowable Subject Matter
4. Claims 9-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. § 102 and § 103 (or as subject to pre-AIA 35 U.S.C. § 102 and § 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. § 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. Claims 1-3, 13-15 and 17-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Delamare et al. (US 20240143594 A1) in view of Berkovitz et al. (US 12518021 B1).
As per claim 1, Delamare teaches a computer-implemented method for data storage, wherein the computer-implemented method comprises (i.e. “the method can be used for any graph analytics job, query, or algorithm and can be applied to any execution model.”; para. [0064]):
storing the attribute information into a disk managed (i.e. “The storage manager moves the identified graph components to disk (persistent storage) (block 803) and updates the storage location (block 804).”; fig. 8, para. [0150]) by a storage engine corresponding to a database (i.e. “A database management system (DBMS) manages a database.”; para. [0176]-[0177]), and
obtaining a storage location of the attribute information on the disk (i.e. “For example, metadata in a database dictionary defining a database table may specify the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table.”; para. [0196]; Examiner note: the obtaining a storage location of the attribute information on the disk is interpreted as specifying the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table); and
further storing the retrieval information corresponding to the graph data object and the storage location into a memory managed by the storage engine (i.e. “As shown in FIG. 5A, the storage manager stores metadata for each graph component, including the name of the graph component (e.g., “p.csr”), a usage_counter value (e.g., 1 for p.csr), a size of the graph component (e.g., 6 GB), and a memory_state value indicating whether the graph component is in memory or persistent storage (e.g., “mem” for p.csr).”; figs. 5A-C, para. [0066]-[0069], [0075]),
wherein the retrieval information in the memory is used to query the graph data (i.e. “A user command or graph processing operation, such as a graph query, is executed by a job 210 in the distributed graph engine.”; para. [0021]-[0023], [0045]. Further, i.e. “Graph processing is an important tool for data analytics. Graph processing engines usually tackle a variety of challenging workloads, including graph algorithms (e.g., PageRank) and graph queries, such as “find all persons that know ‘Alice’,” described in the following query (Query 1): [0030] SELECT id(p1), p1.name [0031] FROM MATCH (p1:person)-[e1:knows]->(p2:person) [0032] WHERE p2.name=‘Alice’”; para. [0029], [0064]).
However, it is noted that the prior art of Delamare does not explicitly teach “obtaining graph data to be stored, wherein the graph data comprise retrieval information and attribute information corresponding to a graph data object, and the retrieval information is used to perform retrieval query on the graph data;”
On the other hand, in the same field of endeavor, Berkovitz teaches obtaining graph data to be stored (i.e. “At S510, object information relating to an object is received from a source.”; fig. 5, Column 7, Lines 48-50; Examiner note: the graph data is interpreted as the object),
wherein the graph data comprise retrieval information (i.e. “At S510, object information relating to an object is received from a source.”; fig. 5, Column 7, Lines 48-50; Examiner note: the graph data is interpreted as the object information) and attribute information corresponding (i.e. “The object may be a resource, principal, a policy, and the like. A source may be a fetcher, an inspector, or an enricher. For example, a fetcher may provide information about a virtual machine, such as the machine name, address, name in namespace, and the like.”; fig. 5, Column 7, Lines 48-53; Examiner note: the attribute information is interpreted as the machine name, address, name in namespace, and the like) to a graph data object (i.e. “The node is stored as an object in the security graph on the graph database.”; fig. 5, Column 8, Lines 54-56; Examiner note: the graph data object is interpreted as the node), and
the retrieval information is used to perform retrieval query on the graph data (i.e. “By unifying user accounts under a single model (i.e., data schema) querying the graph database becomes simpler, as queries may be addressed regardless of which underlying infrastructure is being queried.”; Column 6, Lines 10-19. Further, i.e. “effectively generate a single query across multiple platforms (by querying the graph), rather than query each platform individually.”; fig. 6, Column 6, Lines 22-24);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Berkovitz, which teaches generating a security graph to present a unified view of cloud environments, into the prior art of Delamare, which teaches managing in-memory storage of graph components. Additionally, Delamare's in-memory management is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to use a Kubernetes® cluster, as it enables efficient, real-time demand fulfillment for resources (Delamare, Column 1, Lines 35-45).
As per claim 2, Delamare and Berkovitz teach all the limitations as discussed in claim 1 above.
Additionally, Delamare teaches wherein the memory managed by the storage engine comprises a memory object used to store graph data (i.e. “Individual elements of in-memory data structures may be referenced for access by, for example, using memory addresses or offsets that may be applied to memory addresses.”; fig. 7, para. [0034], [0147]-[0148]; Examiner note: the memory object used to store graph data is interpreted as the memory addresses or offsets); and
the further storing the retrieval information corresponding to the graph data object and the storage location into a memory managed by the storage engine comprises (i.e. “As shown in FIG. 5A, the storage manager stores metadata for each graph component, including the name of the graph component (e.g., “p.csr”), a usage_counter value (e.g., 1 for p.csr), a size of the graph component (e.g., 6 GB), and a memory_state value indicating whether the graph component is in memory or persistent storage (e.g., “mem” for p.csr).”; figs. 5A-C, para. [0066]-[0069], [0075]):
further storing the retrieval information corresponding to the graph data object and the storage location into the memory object defined in the memory managed by the storage engine (i.e. “The job requests the storage manager to load data objects required for the job into memory (block 703). If the storage manager determines that memory is needed to load the data objects into memory (block 704).”; fig. 7, para. [0147]; Examiner note: loading a data object into memory is understood as storing the data object at the memory address(es). Figure 7 further illustrates how the loading of the data object is performed).
As per claim 3, Delamare and Berkovitz teach all the limitations as discussed in claim 2 above.
Additionally, Delamare teaches wherein the graph data object comprises a node and an edge (i.e. “A graph database is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A graph relates data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes.”; fig. 1, para. [0003]);
the graph data comprise node data corresponding to the node and edge data corresponding to the edge (i.e. “Note that the forward edge and its reverse version are different machines due to the distributed aspect if and only if the source and destination are on different machines, such as with edge 111 in FIG. 1 where a forward edge would be stored in machine 110 or a reverse edge would be stored in machine 120.”; figs. 1-4, para. [0059]. Further, i.e. “The node contains a property id (prop_id) that can be used for requesting the properties.”; para. [0077]);
the node data comprise node retrieval information and node attribute information (i.e. “Properties are accessed during filter evaluation and property selection (e.g., v2.prop3 above).” and “The node contains a property id (prop_id) that can be used for requesting the properties.”; fig. 1, para. [0003], [0077], [0082]); and
the edge data comprise edge retrieval information and edge attribute information (i.e. “[0083] Match “likes” edges. [0084] Match “movie” vertices. [0085] Match “recorded_in” edges. [0086] Match “city” vertices. [0087] Match “belongs_to” edges.”; fig. 1, para. [0059]-[0060], [0083]-[0087]).
As per claim 13, Delamare teaches a computer-implemented device comprising (i.e. “a computer system”; fig. 9, para. [0152]-[0153]):
one or more processors (i.e. “Computer system 900 further includes a read only memory (ROM) 908 or other static storage device coupled to bus 902 for storing static information and instructions for processor 904.”; fig. 9, para. [0155]); and
one or more tangible, non-transitory, machine-readable media storing one or more instructions that, when executed by the one or more processors, perform one or more operations comprising (i.e. “Computer system 900 further includes a read only memory (ROM) 908 or other static storage device coupled to bus 902 for storing static information and instructions for processor 904.”; fig. 9, para. [0155]):
storing the attribute information into a disk managed (i.e. “The storage manager moves the identified graph components to disk (persistent storage) (block 803) and updates the storage location (block 804).”; fig. 8, para. [0150]) by a storage engine corresponding to a database (i.e. “A database management system (DBMS) manages a database.”; para. [0176]-[0177]), and
obtaining a storage location of the attribute information on the disk (i.e. “For example, metadata in a database dictionary defining a database table may specify the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table.”; para. [0196]; Examiner note: the obtaining a storage location of the attribute information on the disk is interpreted as specifying the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table); and
further storing the retrieval information corresponding to the graph data object and the storage location into a memory managed by the storage engine (i.e. “As shown in FIG. 5A, the storage manager stores metadata for each graph component, including the name of the graph component (e.g., “p.csr”), a usage_counter value (e.g., 1 for p.csr), a size of the graph component (e.g., 6 GB), and a memory_state value indicating whether the graph component is in memory or persistent storage (e.g., “mem” for p.csr).”; figs. 5A-C, para. [0066]-[0069], [0075]),
wherein the retrieval information in the memory is used to query the graph data (i.e. “A user command or graph processing operation, such as a graph query, is executed by a job 210 in the distributed graph engine.”; para. [0021]-[0023], [0045]. Further, i.e. “Graph processing is an important tool for data analytics. Graph processing engines usually tackle a variety of challenging workloads, including graph algorithms (e.g., PageRank) and graph queries, such as “find all persons that know ‘Alice’,” described in the following query (Query 1): [0030] SELECT id(p1), p1.name [0031] FROM MATCH (p1:person)-[e1:knows]->(p2:person) [0032] WHERE p2.name=‘Alice’”; para. [0029], [0064]).
However, it is noted that the prior art of Delamare does not explicitly teach “obtaining graph data to be stored, wherein the graph data comprise retrieval information and attribute information corresponding to a graph data object, and the retrieval information is used to perform retrieval query on the graph data;”
On the other hand, in the same field of endeavor, Berkovitz teaches obtaining graph data to be stored (i.e. “At S510, object information relating to an object is received from a source.”; fig. 5, Column 7, Lines 48-50; Examiner note: the graph data is interpreted as the object),
wherein the graph data comprise retrieval information (i.e. “At S510, object information relating to an object is received from a source.”; fig. 5, Column 7, Lines 48-50; Examiner note: the graph data is interpreted as the object information) and attribute information corresponding (i.e. “The object may be a resource, principal, a policy, and the like. A source may be a fetcher, an inspector, or an enricher. For example, a fetcher may provide information about a virtual machine, such as the machine name, address, name in namespace, and the like.”; fig. 5, Column 7, Lines 48-53; Examiner note: the attribute information is interpreted as the machine name, address, name in namespace, and the like) to a graph data object (i.e. “The node is stored as an object in the security graph on the graph database.”; fig. 5, Column 8, Lines 54-56; Examiner note: the graph data object is interpreted as the node), and
the retrieval information is used to perform retrieval query on the graph data (i.e. “By unifying user accounts under a single model (i.e., data schema) querying the graph database becomes simpler, as queries may be addressed regardless of which underlying infrastructure is being queried.”; Column 6, Lines 10-19. Further, i.e. “effectively generate a single query across multiple platforms (by querying the graph), rather than query each platform individually.”; fig. 6, Column 6, Lines 22-24);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Berkovitz, which teaches generating a security graph to present a unified view of cloud environments, into the prior art of Delamare, which teaches managing in-memory storage of graph components. Additionally, Delamare's in-memory management is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to use a Kubernetes® cluster, as it enables efficient, real-time demand fulfillment for resources (Delamare, Column 1, Lines 35-45).
As per claim 14, Delamare and Berkovitz teach all the limitations as discussed in claim 13 above.
Additionally, Delamare teaches wherein the memory managed by the storage engine comprises a memory object used to store graph data (i.e. “Individual elements of in-memory data structures may be referenced for access by, for example, using memory addresses or offsets that may be applied to memory addresses.”; fig. 7, para. [0034], [0147]-[0148]; Examiner note: the memory object used to store graph data is interpreted as the memory addresses or offsets); and
the further storing the retrieval information corresponding to the graph data object and the storage location into a memory managed by the storage engine comprises (i.e. “As shown in FIG. 5A, the storage manager stores metadata for each graph component, including the name of the graph component (e.g., “p.csr”), a usage_counter value (e.g., 1 for p.csr), a size of the graph component (e.g., 6 GB), and a memory_state value indicating whether the graph component is in memory or persistent storage (e.g., “mem” for p.csr).”; figs. 5A-C, para. [0066]-[0069], [0075]):
further storing the retrieval information corresponding to the graph data object and the storage location into the memory object defined in the memory managed by the storage engine (i.e. “The job requests the storage manager to load data objects required for the job into memory (block 703). If the storage manager determines that memory is needed to load the data objects into memory (block 704).”; fig. 7, para. [0147]; Examiner note: loading a data object into memory is understood as storing the data object at the memory address(es). Figure 7 further illustrates how the loading of the data object is performed).
As per claim 15, Delamare and Berkovitz teach all the limitations as discussed in claim 14 above.
Additionally, Delamare teaches wherein the graph data object comprises a node and an edge (i.e. “A graph database is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A graph relates data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes.”; fig. 1, para. [0003]);
the graph data comprise node data corresponding to the node and edge data corresponding to the edge (i.e. “Note that the forward edge and its reverse version are different machines due to the distributed aspect if and only if the source and destination are on different machines, such as with edge 111 in FIG. 1 where a forward edge would be stored in machine 110 or a reverse edge would be stored in machine 120.”; figs. 1-4, para. [0059]. Further, i.e. “The node contains a property id (prop_id) that can be used for requesting the properties.”; para. [0077]);
the node data comprise node retrieval information and node attribute information (i.e. “Properties are accessed during filter evaluation and property selection (e.g., v2.prop3 above).” and “The node contains a property id (prop_id) that can be used for requesting the properties.”; fig. 1, para. [0003], [0077], [0082]); and
the edge data comprise edge retrieval information and edge attribute information (i.e. “[0083] Match “likes” edges. [0084] Match “movie” vertices. [0085] Match “recorded_in” edges. [0086] Match “city” vertices. [0087] Match “belongs_to” edges.”; fig. 1, para. [0059]-[0060], [0083]-[0087]).
As per claim 17, Delamare teaches a non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising (i.e. “Computer system 900 also includes a main memory 906, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 902 for storing information and instructions to be executed by processor 904. Main memory 906 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 904. Such instructions, when stored in non-transitory storage media accessible to processor 904, render computer system 900 into a special-purpose machine that is customized to perform the operations specified in the instructions.”; fig. 9, para. [0154]):
storing the attribute information into a disk managed by a storage engine (i.e. “The storage manager moves the identified graph components to disk (persistent storage) (block 803) and updates the storage location (block 804).”; fig. 8, para. [0150]) corresponding to a database (i.e. “A database management system (DBMS) manages a database.”; para. [0176]-[0177]), and
obtaining a storage location of the attribute information on the disk (i.e. “For example, metadata in a database dictionary defining a database table may specify the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table.”; para. [0196]; Examiner note: the obtaining a storage location of the attribute information on the disk is interpreted as specifying the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table); and
further storing the retrieval information corresponding to the graph data object and the storage location into a memory managed by the storage engine (i.e. “As shown in FIG. 5A, the storage manager stores metadata for each graph component, including the name of the graph component (e.g., “p.csr”), a usage_counter value (e.g., 1 for p.csr), a size of the graph component (e.g., 6 GB), and a memory_state value indicating whether the graph component is in memory or persistent storage (e.g., “mem” for p.csr).”; figs. 5A-C, para. [0066]-[0069], [0075]),
wherein the retrieval information in the memory is used to query the graph data (i.e. “A user command or graph processing operation, such as a graph query, is executed by a job 210 in the distributed graph engine.”; para. [0021]-[0023], [0045]. Further, i.e. “Graph processing is an important tool for data analytics. Graph processing engines usually tackle a variety of challenging workloads, including graph algorithms (e.g., PageRank) and graph queries, such as “find all persons that know ‘Alice’,” described in the following query (Query 1): [0030] SELECT id(p1), p1.name [0031] FROM MATCH (p1:person)-[e1:knows]->(p2:person) [0032] WHERE p2.name=‘Alice’”; para. [0029], [0064]).
However, it is noted that the prior art of Delamare does not explicitly teach “obtaining graph data to be stored, wherein the graph data comprise retrieval information and attribute information corresponding to a graph data object, and the retrieval information is used to perform retrieval query on the graph data;”
On the other hand, in the same field of endeavor, Berkovitz teaches obtaining graph data to be stored (i.e. “At S510, object information relating to an object is received from a source.”; fig. 5, Column 7, Lines 48-50; Examiner note: the graph data is interpreted as the object),
wherein the graph data comprise retrieval information (i.e. “At S510, object information relating to an object is received from a source.”; fig. 5, Column 7, Lines 48-50; Examiner note: the graph data is interpreted as the object information) and attribute information corresponding (i.e. “The object may be a resource, principal, a policy, and the like. A source may be a fetcher, an inspector, or an enricher. For example, a fetcher may provide information about a virtual machine, such as the machine name, address, name in namespace, and the like.”; fig. 5, Column 7, Lines 48-53; Examiner note: the attribute information is interpreted as the machine name, address, name in namespace, and the like) to a graph data object (i.e. “The node is stored as an object in the security graph on the graph database.”; fig. 5, Column 8, Lines 54-56; Examiner note: the graph data object is interpreted as the node), and
the retrieval information is used to perform retrieval query on the graph data (i.e. “By unifying user accounts under a single model (i.e., data schema) querying the graph database becomes simpler, as queries may be addressed regardless of which underlying infrastructure is being queried.”; Column 6, Lines 10-19. Further, i.e. “effectively generate a single query across multiple platforms (by querying the graph), rather than query each platform individually.”; fig. 6, Column 6, Lines 22-24);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Berkovitz, which teaches generating a security graph to present a unified view of cloud environments, into the prior art of Delamare, which teaches managing in-memory storage of graph components. Additionally, Delamare's in-memory management is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to use a Kubernetes® cluster, as it enables efficient, real-time demand fulfillment for resources (Delamare, Column 1, Lines 35-45).
As per claim 18, Delamare and Berkovitz teach all the limitations as discussed in claim 17 above.
Additionally, Delamare teaches wherein the memory managed by the storage engine comprises a memory object used to store graph data (i.e. “Individual elements of in-memory data structures may be referenced for access by, for example, using memory addresses or offsets that may be applied to memory addresses.”; fig. 7, para. [0034], [0147]-[0148]; Examiner note: the memory object used to store graph data is interpreted as the memory addresses or offsets); and
the further storing the retrieval information corresponding to the graph data object and the storage location into a memory managed by the storage engine comprises (i.e. “As shown in FIG. 5A, the storage manager stores metadata for each graph component, including the name of the graph component (e.g., “p.csr”), a usage_counter value (e.g., 1 for p.csr), a size of the graph component (e.g., 6 GB), and a memory_state value indicating whether the graph component is in memory or persistent storage (e.g., “mem” for p.csr).”; figs. 5A-C, para. [0066]-[0069], [0075]):
further storing the retrieval information corresponding to the graph data object and the storage location into the memory object defined in the memory managed by the storage engine (i.e. “The job requests the storage manager to load data objects required for the job into memory (block 703). If the storage manager determines that memory is needed to load the data objects into memory (block 704).”; fig. 7, para. [0147]; Examiner note: loading a data object into memory is understood as storing the data object at the memory address(es). Figure 7 further illustrates how the loading of the data object is performed).
As per claim 19, Delamare and Berkovitz teach all the limitations as discussed in claim 18 above.
Additionally, Delamare teaches wherein the graph data object comprises a node and an edge (i.e. “A graph database is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A graph relates data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes.”; fig. 1, para. [0003]);
the graph data comprise node data corresponding to the node and edge data corresponding to the edge (i.e. “Note that the forward edge and its reverse version are different machines due to the distributed aspect if and only if the source and destination are on different machines, such as with edge 111 in FIG. 1 where a forward edge would be stored in machine 110 or a reverse edge would be stored in machine 120.”; figs. 1-4, para. [0059]. Further, i.e. “The node contains a property id (prop_id) that can be used for requesting the properties.”; para. [0077]);
the node data comprise node retrieval information and node attribute information (i.e. “Properties are accessed during filter evaluation and property selection (e.g., v2.prop3 above).” and “The node contains a property id (prop_id) that can be used for requesting the properties.”; fig. 1, para. [0003], [0077], [0082]); and
the edge data comprise edge retrieval information and edge attribute information (i.e. “[0083] Match “likes” edges. [0084] Match “movie” vertices. [0085] Match “recorded_in” edges. [0086] Match “city” vertices. [0087] Match “belongs_to” edges.”; fig. 1, para. [0059]-[0060], [0083]-[0087]).
7. Claims 4-5, 16 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Delamare et al. (US 20240143594 A1) in view of Berkovitz et al. (US 12518021 B1) and further in view of Frank et al. (US 20190155524 A1).
As per claim 4, Delamare and Berkovitz teach all the limitations as discussed in claim 3 above.
Additionally, Delamare teaches the storing the attribute information into a disk managed by a storage engine corresponding to the database (i.e. “The storage manager moves the identified graph components to disk (persistent storage) (block 803) and updates the storage location (block 804).”; fig. 8, para. [0150]. Further, i.e. “A database management system (DBMS) manages a database.”; para. [0176]-[0177]), and
obtaining a storage location of the attribute information on the disk comprises (i.e. “For example, metadata in a database dictionary defining a database table may specify the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table.”; para. [0196]; Examiner note: the obtaining a storage location of the attribute information on the disk is interpreted as specifying the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table):
separately storing the node attribute information (i.e. “A graph relates data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes.”; figs. 3-4, para. [0003], [0077]) and the edge attribute information into the disk managed by the storage engine (i.e. “Edge properties are stored alongside the edges in columns.”; figs. 3-4, para. [0059]), and
obtaining a first storage location of the node attribute information on the disk and a second storage location of the edge attribute information on the disk (i.e. “There is one array per property, each with one entry per vertex. FIG. 3 depicts an example of a vertex table in accordance with an embodiment. Here, the vertex ID is used internally to reference a vertex, while the external vertex key is used by the user to reference a vertex. Both are unique.”; figs. 3-4, para. [0057]; Examiner note: as illustrated in figures 3-4, each property has a location); and
the further storing the retrieval information corresponding to the graph data object and the storage location into the memory object defined in the memory managed by the storage engine comprises (i.e. “The job requests the storage manager to load data objects required for the job into memory (block 703). If the storage manager determines that memory is needed to load the data objects into memory (block 704).”; fig. 7, para. [0147]; Examiner note: loading a data object into memory is understood as storing the data object at the memory address(es). Figure 7 further illustrates how the loading of the data object is performed):
storing the node retrieval information corresponding to the node and the first storage location into the first memory object, and storing the edge retrieval information corresponding to the edge and the second storage location into the second memory object (i.e. “One array is used to store the edge tables and another is used to store the vertex tables. Each table has a table ID, which corresponds to the index in the respective table array. To ensure efficient access, entities are not referred by their keys but by an internal ID: vertex_id and edge_id. The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id). The local_id corresponds to the index of the entity in its table on its machine, so it is continuous.”; para. [0061]-[0062]; Examiner note: the array is storing nodes and edges information).
However, it is noted that the combination of the prior arts of Delamare and Berkovitz does not explicitly teach “wherein a first memory object corresponding to the node and a second memory object corresponding to the edge are predefined in the memory managed by the storage engine;”
On the other hand, in the same field of endeavor, Frank teaches wherein a first memory object corresponding to the node and a second memory object corresponding to the edge are predefined in the memory managed by the storage engine (i.e. “a page fault and page handler that controls what portion of the object address space is currently visible in each node's physical address space and coordinating the relationship between memory objects and application segments and files.”; fig. 6, para. [0373]. Further, “However, the inter-node object router 615 of the object memory fabric 600 utilizes a memory fabric object address (OA) which specifies the object and specific block of the object.”; fig. 6, para. [0134]);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Frank, which teaches that a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric, into the combination of the prior arts of Delamare, which teaches managing in-memory storage of graph components, and Berkovitz, which teaches generating a security graph to present a unified view of cloud environments. Additionally, the combination is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to change the way in which processing, memory, storage, network, and cloud computing are managed, because doing so can significantly improve the efficiency and performance of commodity hardware (Frank, para. [0019]).
As per claim 5, Delamare, Berkovitz and Frank teach all the limitations as discussed in claim 4 above.
Additionally, Delamare teaches wherein the memory object is an array (i.e. “In-memory data structures include, for example, arrays, segmented arrays, and hash tables. Individual elements of in-memory data structures may be referenced for access by, for example, using memory addresses or offsets that may be applied to memory addresses.”; fig. 1, para. [0034], [0152]);
an array subscript of a first array predefined in the memory managed by the storage engine and corresponding to the node is a node identifier of the node (i.e. “tables are stored in two separate arrays of tables. One array is used to store the edge tables and another is used to store the vertex tables. Each table has a table ID, which corresponds to the index in the respective table array. To ensure efficient access, entities are not referred by their keys but by an internal ID: vertex_id and edge_id. The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id). The local_id corresponds to the index of the entity in its table on its machine, so it is continuous.”; figs. 3-4, para. [0061]; Examiner note: the vertex array data can be viewed as the array subscript of the first array predefined in the memory, see fig. 4);
an array subscript of a second array predefined in the memory managed by the storage engine and corresponding to the edge is a node identifier of a start node of the edge (i.e. “tables are stored in two separate arrays of tables. One array is used to store the edge tables and another is used to store the vertex tables. Each table has a table ID, which corresponds to the index in the respective table array. To ensure efficient access, entities are not referred by their keys but by an internal ID: vertex_id and edge_id. The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id). The local_id corresponds to the index of the entity in its table on its machine, so it is continuous.”; figs. 3-4, para. [0061]; Examiner note: the edge array data can be viewed as the array subscript of the second array predefined in the memory, see fig. 4); and
a node identifier of a specified node is a numeric ID obtained after normalization processing is performed on identification data corresponding to the specified node (i.e. “an internal ID: vertex_id and edge_id.”; fig. 4, para. [0061]; Examiner note: the node identifier of a specified node, being a numeric ID obtained after normalization processing is performed on identification data corresponding to the specified node, is interpreted as the vertex_id); and
the storing the node retrieval information corresponding to the node and the first storage location into the first memory object, and storing the edge retrieval information corresponding to the edge and the second storage location into the second memory object comprises (i.e. “One array is used to store the edge tables and another is used to store the vertex tables. Each table has a table ID, which corresponds to the index in the respective table array. To ensure efficient access, entities are not referred by their keys but by an internal ID: vertex_id and edge_id. The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id). The local_id corresponds to the index of the entity in its table on its machine, so it is continuous.”; para. [0061]-[0062]; Examiner note: the arrays store node and edge information):
storing the node retrieval information corresponding to the node and the first storage location into the first array in the memory managed by the storage engine and corresponding to the node, and storing the edge retrieval information corresponding to the edge and the second storage location into the second array in the memory managed by the storage engine and corresponding to the edge, wherein the first storage location and the second storage location are numeric IDs obtained after normalization processing is performed (i.e. “One array is used to store the edge tables and another is used to store the vertex tables. Each table has a table ID, which corresponds to the index in the respective table array. To ensure efficient access, entities are not referred by their keys but by an internal ID: vertex_id and edge_id. The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id). The local_id corresponds to the index of the entity in its table on its machine, so it is continuous.”; para. [0061]-[0062]; Examiner note: the arrays store node and edge information; the IDs of nodes/vertices as illustrated in figure 4 are in numerical format).
As per claim 16, Delamare and Berkovitz teach all the limitations as discussed in claim 15 above.
Additionally, Delamare teaches the storing the attribute information into a disk managed by a storage engine corresponding to the database (i.e. “The storage manager moves the identified graph components to disk (persistent storage) (block 803) and updates the storage location (block 804).”; fig. 8, para. [0150]. Further, i.e. “A database management system (DBMS) manages a database.”; para. [0176]-[0177]), and
obtaining a storage location of the attribute information on the disk comprises (i.e. “For example, metadata in a database dictionary defining a database table may specify the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table.”; para. [0196]; Examiner note: the obtaining of a storage location of the attribute information on the disk is interpreted as specifying the attribute names and data types of the attributes, and the one or more files or portions thereof that store data for the table):
separately storing the node attribute information (i.e. “A graph relates data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes.”; figs. 3-4, para. [0003], [0077]) and the edge attribute information into the disk managed by the storage engine (i.e. “Edge properties are stored alongside the edges in columns.”; figs. 3-4, para. [0059]), and
obtaining a first storage location of the node attribute information on the disk and a second storage location of the edge attribute information on the disk (i.e. “There is one array per property, each with one entry per vertex. FIG. 3 depicts an example of a vertex table in accordance with an embodiment. Here, the vertex ID is used internally to reference a vertex, while the external vertex key is used by the user to reference a vertex. Both are unique.”; figs. 3-4, para. [0057]; Examiner note: as illustrated in figures 3-4, each property has a location); and
the further storing the retrieval information corresponding to the graph data object and the storage location into the memory object defined in the memory managed by the storage engine comprises (i.e. “The job requests the storage manager to load data objects required for the job into memory (block 703). If the storage manager determines that memory is needed to load the data objects into memory (block 704).”; fig. 7, para. [0147]; Examiner note: loading a data object into memory is understood as storing the data object at one or more memory addresses; figure 7 further illustrates how the data object is loaded):
storing the node retrieval information corresponding to the node and the first storage location into the first memory object, and storing the edge retrieval information corresponding to the edge and the second storage location into the second memory object (i.e. “One array is used to store the edge tables and another is used to store the vertex tables. Each table has a table ID, which corresponds to the index in the respective table array. To ensure efficient access, entities are not referred by their keys but by an internal ID: vertex_id and edge_id. The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id). The local_id corresponds to the index of the entity in its table on its machine, so it is continuous.”; para. [0061]-[0062]; Examiner note: the arrays store node and edge information).
However, it is noted that the combination of the prior arts of Delamare and Berkovitz does not explicitly teach “wherein a first memory object corresponding to the node and a second memory object corresponding to the edge are predefined in the memory managed by the storage engine;”
On the other hand, in the same field of endeavor, Frank teaches wherein a first memory object corresponding to the node and a second memory object corresponding to the edge are predefined in the memory managed by the storage engine (i.e. “a page fault and page handler that controls what portion of the object address space is currently visible in each node's physical address space and coordinating the relationship between memory objects and application segments and files.”; fig. 6, para. [0373]. Further, “However, the inter-node object router 615 of the object memory fabric 600 utilizes a memory fabric object address (OA) which specifies the object and specific block of the object.”; fig. 6, para. [0134]);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Frank, which teaches that a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric, into the combination of the prior arts of Delamare, which teaches managing in-memory storage of graph components, and Berkovitz, which teaches generating a security graph to present a unified view of cloud environments. Additionally, the combination is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to change the way in which processing, memory, storage, network, and cloud computing are managed, because doing so can significantly improve the efficiency and performance of commodity hardware (Frank, para. [0019]).
As per claim 20, Delamare and Berkovitz teach all the limitations as discussed in claim 19 above.
Additionally, Delamare teaches the storing the attribute information into a disk managed by a storage engine corresponding to the database (i.e. “The storage manager moves the identified graph components to disk (persistent storage) (block 803) and updates the storage location (block 804).”; fig. 8, para. [0150]. Further, i.e. “A database management system (DBMS) manages a database.”; para. [0176]-[0177]), and
obtaining a storage location of the attribute information on the disk comprises (i.e. “For example, metadata in a database dictionary defining a database table may specify the attribute names and data types of the attributes, and one or more files or portions thereof that store data for the table.”; para. [0196]; Examiner note: the obtaining of a storage location of the attribute information on the disk is interpreted as specifying the attribute names and data types of the attributes, and the one or more files or portions thereof that store data for the table):
separately storing the node attribute information (i.e. “A graph relates data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes.”; figs. 3-4, para. [0003], [0077]) and the edge attribute information into the disk managed by the storage engine (i.e. “Edge properties are stored alongside the edges in columns.”; figs. 3-4, para. [0059]), and
obtaining a first storage location of the node attribute information on the disk and a second storage location of the edge attribute information on the disk (i.e. “There is one array per property, each with one entry per vertex. FIG. 3 depicts an example of a vertex table in accordance with an embodiment. Here, the vertex ID is used internally to reference a vertex, while the external vertex key is used by the user to reference a vertex. Both are unique.”; figs. 3-4, para. [0057]; Examiner note: as illustrated in figures 3-4, each property has a location); and
the further storing the retrieval information corresponding to the graph data object and the storage location into the memory object defined in the memory managed by the storage engine comprises (i.e. “The job requests the storage manager to load data objects required for the job into memory (block 703). If the storage manager determines that memory is needed to load the data objects into memory (block 704).”; fig. 7, para. [0147]; Examiner note: loading a data object into memory is understood as storing the data object at one or more memory addresses; figure 7 further illustrates how the data object is loaded):
storing the node retrieval information corresponding to the node and the first storage location into the first memory object, and storing the edge retrieval information corresponding to the edge and the second storage location into the second memory object (i.e. “One array is used to store the edge tables and another is used to store the vertex tables. Each table has a table ID, which corresponds to the index in the respective table array. To ensure efficient access, entities are not referred by their keys but by an internal ID: vertex_id and edge_id. The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id). The local_id corresponds to the index of the entity in its table on its machine, so it is continuous.”; para. [0061]-[0062]; Examiner note: the arrays store node and edge information).
However, it is noted that the combination of the prior arts of Delamare and Berkovitz does not explicitly teach “wherein a first memory object corresponding to the node and a second memory object corresponding to the edge are predefined in the memory managed by the storage engine;”
On the other hand, in the same field of endeavor, Frank teaches wherein a first memory object corresponding to the node and a second memory object corresponding to the edge are predefined in the memory managed by the storage engine (i.e. “a page fault and page handler that controls what portion of the object address space is currently visible in each node's physical address space and coordinating the relationship between memory objects and application segments and files.”; fig. 6, para. [0373]. Further, “However, the inter-node object router 615 of the object memory fabric 600 utilizes a memory fabric object address (OA) which specifies the object and specific block of the object.”; fig. 6, para. [0134]);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Frank, which teaches that a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric, into the combination of the prior arts of Delamare, which teaches managing in-memory storage of graph components, and Berkovitz, which teaches generating a security graph to present a unified view of cloud environments. Additionally, the combination is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to change the way in which processing, memory, storage, network, and cloud computing are managed, because doing so can significantly improve the efficiency and performance of commodity hardware (Frank, para. [0019]).
8. Claims 6-8 are rejected under 35 U.S.C. § 103 as being unpatentable over Delamare et al. (US 20240143594 A1) in view of Berkovitz et al. (US 12518021 B1), further in view of Frank et al. (US 20190155524 A1), and still further in view of Jorapur et al. (US 20210406130 A1).
As per claim 6, Delamare, Berkovitz and Frank teach all the limitations as discussed in claim 5 above.
Additionally, Delamare teaches wherein: the first array comprises a first array element and a second array element, the first array element is used to store the node retrieval information corresponding to the node, and the second array element is used to store the first storage location (i.e. the vertex array illustrated in figure 4 contains a first element “0” and a second element “0”. Further, i.e. “A vertex table stores the unique external key of the vertices, which is used by the user to refer to each vertex, and the properties in arrays.”; figs. 3-4, para. [0057]; Examiner note: a vertex herein can be viewed as a node; the figure illustrates that the second “0” element points to an edge ID, which can be referred to as the second array element used to store the first storage location);
the second array comprises at least one third array element respectively corresponding to at least one edge connected to the start node, and the third array element is used to store the edge retrieval information and the second storage location (i.e. “The internal ID is a 64-bit value composed of the machine ID, the table ID, and the local ID (local_id).”; figs. 3-4, para. [0061]-[0062]; Examiner note: the at least one third array element is interpreted as the local ID (local_id));
However, it is noted that the combination of the prior arts of Delamare, Berkovitz and Frank do not explicitly teach “the first storage location comprises a file identifier of a first file on the disk and used to store the node attribute information of the node and an offset of the node attribute information in the first file; the second storage location comprises a file identifier of a second file on the disk and used to store the edge attribute information of the edge and an offset of the edge attribute information in the second file; a file identifier is a numeric ID obtained after the normalization processing is performed; and an offset is a numeric ID obtained after the normalization processing is performed.”
On the other hand, in the same field of endeavor, Jorapur teaches the first storage location comprises a file identifier of a first file on the disk and used to store the node attribute information of the node (i.e. “For example, “1” is a data key that may be used to lookup “DATA1” of leaf node 222.”; fig. 2D, para. [0056]) and an offset of the node attribute information in the first file (i.e. “data structure 450 is configured to associate a chunk file identifier with a chunk identifier, a chunk file offset, a storage node, and a primary owner.”; fig. 4B, para. [0104]);
the second storage location comprises a file identifier of a second file on the disk and used to store the edge attribute information of the edge and an offset of the edge attribute information in the second file (i.e. “Leaf nodes 222, 224, 226, 228, 230 respectively store the data key-value pairs of “1: DATA1,” “2:DATA2,” “3:DATA3,” “6:DATA6,” and 11:DATA11.” Leaf nodes 222, 224, 226, 228, 230 respectively have NodeIDs of “L1,” “L2,” “L3,” “L4,” and “L5.” Each of the leaf nodes 222, 224, 226, 228, 230 have TreeIDs of “1.” Leaf nodes 222, 224, 226, 228, 230 may store metadata, content file data when the size of the content file is less than or equal to a limit size, or a pointer to or an identifier of a file metadata structure.”; fig. 2A, para. [0063]; Examiner note: the 2 in the DATA2 can be interpreted as the file identifier of a second file on the disk; the offset of the edge attribute information in the second file can be interpreted as the TreeID of DATA2);
a file identifier is a numeric ID obtained after the normalization processing is performed (i.e. the DATA2 identifier number 2 is in numeric format, see fig. 2A); and
an offset is a numeric ID obtained after the normalization processing is performed (i.e. the TreeIDs as illustrated in figure 2A and described in para. [0063] are numeric IDs).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Jorapur, which teaches that a virtual machine disk image file backup is selected among a plurality of virtual machine disk image file backups stored on a backup storage based on a backup update policy, into the combination of the prior arts of Delamare, which teaches managing in-memory storage of graph components, Berkovitz, which teaches generating a security graph to present a unified view of cloud environments, and Frank, which teaches that a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric. Additionally, the combination is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to run a database application and provide responses to a plurality of data queries, because doing so can provide faster access times to data (Jorapur, para. [0019], [0075]).
As per claim 7, Delamare, Berkovitz, Frank and Jorapur teach all the limitations as discussed in claim 6 above.
Additionally, Delamare teaches wherein: the node retrieval information comprises a label identifier of a node label (i.e. “The node contains a property id (prop_id) that can be used for requesting the properties.”; fig. 1, para. [0034], [0062]; Examiner note: it is also noted in figure 1 that each node has a name which can be used to label the node);
the edge retrieval information comprises a label identifier of an edge label (i.e. “in the example of FIG. 1, one vertex table for “Person” and one vertex table for “Account” are needed, and one edge table for “transaction” is needed. Vertex or edge tables are typically denoted with vertex and edge labels, respectively.”; fig. 1, para. [0034], [0076]);
the first array element is used to store a label identifier of a node label corresponding to the node (i.e. the vertex ID illustrated in figures 3-4 can be referred to as the node label corresponding to the node; figs. 1-4, para. [0034], [0076]); and
the third array element is used to store a label identifier of an edge label corresponding to the edge and the second storage location (i.e. the edge ID illustrated in figures 3-4 can be referred to as the edge label corresponding to the edge and the second storage location; figs. 1-4, para. [0034], [0076]).
As per claim 8, Delamare, Berkovitz, Frank and Jorapur teach all the limitations as discussed in claim 7 above.
Additionally, Delamare teaches the storing the edge attribute information into the disk managed by the storage engine comprises (i.e. “As shown in FIG. 5A, the storage manager stores metadata for each graph component, including the name of the graph component (e.g., “p.csr”), a usage_counter value (e.g., 1 for p.csr), a size of the graph component (e.g., 6 GB), and a memory_state value indicating whether the graph component is in memory or persistent storage (e.g., “mem” for p.csr).”; figs. 5A-C, para. [0066]-[0069], [0075]):
However, it is noted that the combination of the prior arts of Delamare, Berkovitz and Frank do not explicitly teach “wherein the disk stores multiple second files respectively corresponding to different edge labels; and storing the edge attribute information into a second file on the disk managed by the storage engine and corresponding to the edge label.”
On the other hand, in the same field of endeavor, Jorapur teaches wherein the disk stores multiple second files respectively corresponding to different edge labels (i.e. “Storage system 112 may store a plurality of virtual machine disk image file backups (e.g., hundreds, thousands, etc.).”; fig. 1, para. [0038]); and
storing the edge attribute information into a second file on the disk managed by the storage engine and corresponding to the edge label (i.e. “The pointer to leaf node 224 indicates that traversing tree data structure 200 from intermediate node 212 to leaf node 224 will lead to the node with a data key of “2.” The pointer to leaf node 226 indicates that traversing tree data structure 200 from intermediate node 212 to leaf node 226 will lead to the node with a data key of “3.””; fig. 2A-C, para. [0061]; Examiner note: the pointer herein can be viewed as an edge, and its information is stored in the second data structure).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Jorapur, which teaches that a virtual machine disk image file backup is selected among a plurality of virtual machine disk image file backups stored on a backup storage based on a backup update policy, into the combination of the prior arts of Delamare, which teaches managing in-memory storage of graph components, Berkovitz, which teaches generating a security graph to present a unified view of cloud environments, and Frank, which teaches that a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric. Additionally, the combination is coupled with efficient serialization and deserialization methods for the graph components, which further improves performance.
The motivation for doing so would be to run a database application and provide responses to a plurality of data queries, because doing so can provide faster access times to data (Jorapur, para. [0019], [0075]).
Prior Art of Record
9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Karlberg et al. (US 20250231937 A1), teaches a graph storage data infrastructure to store various types of data for and/or about their customers.
Li et al. (US 20250209090 A1), teaches graph data partitioning methods and apparatuses.
Conclusion
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTONIO CAIA DO whose telephone number is (469)295-9251. The examiner can normally be reached on Monday - Friday / 06:30 to 16:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ng, Amy can be reached on (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTONIO J CAIA DO/
Examiner, Art Unit 2164