DETAILED ACTION
This communication is responsive to the amendment and arguments filed 11/24/2025. Claims 1-20 are pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 11/24/2025 have been fully considered.
Regarding the arguments on pages 11-12 relating to the amended limitations “a plurality of iterations of” the identifying and compressing steps, please see the newly cited paragraphs below.
See Krishnamurthy et al., the abstract: data structures are created which identify the components and their dependencies on one another. To avoid excessive overhead costs, redundant dependencies are identified. When repeated instances of a dependency are detected, the associated dependency data structure can be augmented with correlation data of the repeated instances, such as transaction identifiers and sequence identifiers.
para. 53-56: when adding the boundaries of a transaction to the lightweight stack, it will be very likely that many traces are identical to each other in their dependency structure. To remove these redundant traces, we can make an equivalent string representation of the dependency data structure using a concatenation of the two boundary component names (e.g., A -> B becomes string AB). This string representation and the dependency data structure itself are stored as key-value pairs in a hash map. When new dependencies are detected, we check for a redundancy with pre-existing string entries in the hash map, and if such a redundancy/match is detected, we do not add this newly created dependency data structure to the hash map; para. 62: determines if an additional instance of a dependency has occurred. Different instances of a dependency A->B, for instance, may occur multiple times, where each occurrence has a different GUID. Having one dependency data structure with a list of different GUIDs allows us to reduce the amount of data that is reported to the manager; para. 81: identifying all unique edges/transitions between components.
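For illustration of the mechanism described in the newly cited paragraphs, the redundancy-removal scheme of Krishnamurthy (para. 53-56, 62) might be sketched as follows. All names (e.g., `DependencyMap`) and values are illustrative, not taken from the reference:

```python
# Illustrative sketch of Krishnamurthy para. 53-56 and 62: a dependency
# A -> B is keyed by the concatenated string "AB" in a hash map; a repeated
# instance augments the existing entry with its correlation data (e.g., a
# GUID) instead of adding a new dependency data structure.
class DependencyMap:
    def __init__(self):
        self._map = {}  # string key -> {"edge": (caller, callee), "guids": [...]}

    def add(self, caller, callee, guid):
        key = caller + callee  # e.g., A -> B becomes string "AB"
        if key in self._map:
            # Redundant dependency detected: keep only the correlation data.
            self._map[key]["guids"].append(guid)
        else:
            # First instance: store the dependency data structure itself.
            self._map[key] = {"edge": (caller, callee), "guids": [guid]}

    def structures(self):
        return self._map

deps = DependencyMap()
deps.add("A", "B", "GUID1")
deps.add("A", "B", "GUID2")  # redundant instance: only the GUID is recorded
deps.add("B", "C", "GUID3")
print(len(deps.structures()))            # 2 unique dependency structures
print(deps.structures()["AB"]["guids"])  # ['GUID1', 'GUID2']
```

Keeping one structure per unique string key with a list of GUIDs mirrors the reference's stated goal of reducing the amount of data reported to the manager.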
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnamurthy et al. (US 20100281488) in view of Covell (US 8712930) and further in view of Fu (US 20210103930) and Algotar (US 20170177739).
As per claims 1, 11, 16, Krishnamurthy et al. (US 20100281488) teaches
a system, comprising: a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: receiving a graph that is associated with a group of datasets corresponding to a plurality of transactions, wherein each dataset in the group of datasets comprises a set of nodes representing features of a corresponding transaction from the plurality of transactions (para. 12: receiving dependency data structures/datasets at a manager, the dependency data structures include identifiers of components of the application which execute in different threads of a transaction, and an indication of caller-callee relationships among the components, preparing graph data based on the dependency data structures, the graph data is based on the identifiers of the components and the indication of the caller-callee relationships; the abstract: agents report the dependency data structures to a manager to provide graph data in a user interface; para. 89-90: receiving dependency data structures from the agents. Step 702 includes generating graph data from the dependency data structures, and step 704 includes storing the graph data, such as in a database. Further, the graph data can be updated as additional dependency data structures are received from time to time; para. 93-94),
wherein each node of the set of nodes is assigned a label that indicates an order to access a corresponding node in the set of nodes (para. 51-52: a simple dependency data structure can be maintained that describes a dependency between two components. A dependency data structure may represent a set of two or more nodes with a transition or edge between them, or just a single node; para. 101: with the "process transfer" sub-node selected by the user in the tree region, the results region depicts, in box 908, the client-side process titled "processTransfer," an average response, and the URL of the associated client-side service);
generating one or more descriptions that describe the plurality of transactions based on a sequence of nodes in the set of nodes for the corresponding transaction, wherein each node in the sequence of nodes is connected based on its respective label (para. 94: the tree region identifies a hierarchy of nodes which can be expanded and collapsed to see additional nodes or fewer nodes, respectively, and to control the graph data which is accessed for display in the region 806. "Super Domain" is the top node; para. 101: the server-side process of box 910 is dependent on the client-side processes "creditCreditBalance" and "getCreditBalance" in box 912. Thus, dependencies of multiple processes or classes of one service are identified. Box 914 indicates that the server-side processes "creditCreditBalance" and "getCreditBalance" are depended on by the like-named respective client-side processes);
generating a hash map based on the one or more descriptions, wherein the generating the hash map comprises a plurality of iterations of: identifying a plurality of instances from the one or more descriptions, wherein each instance of the plurality of instances comprises a substructure including two or more nodes connected via at least one edge (abstract: data structures are created which identify the components and their dependencies on one another. To avoid excessive overhead costs, redundant dependencies are identified. When repeated instances of a dependency are detected, the associated dependency data structure can be augmented with correlation data of the repeated instances, such as transaction identifiers and sequence identifiers; para. 53-56: when adding the boundaries of a transaction to the lightweight stack, it will be very likely that many traces are identical to each other in their dependency structure. To remove these redundant traces, we can make an equivalent string representation of the dependency data structure using a concatenation of the two boundary component names (e.g., A -> B becomes string AB). This string representation and the dependency data structure itself are stored as key-value pairs in a hash map. When new dependencies are detected, we check for a redundancy with pre-existing string entries in the hash map, and if such a redundancy/match is detected, we do not add this newly created dependency data structure to the hash map; para. 62: determines if an additional instance of a dependency has occurred. Different instances of a dependency A->B, for instance, may occur multiple times, where each occurrence has a different GUID. Having one dependency data structure with a list of different GUIDs allows us to reduce the amount of data that is reported to the manager; para. 81: identifying all unique edges/transitions between components);
for each instance of the plurality of instances: generating a first key and a first value based on a first hash for the instance, wherein the first key for the instance comprises a first description of the instance, and wherein the first value for the instance comprises a first identifier for the instance (para. 53-55: when adding the boundaries of a transaction to the lightweight stack, it will be very likely that many traces are identical to each other in their dependency structure. To remove these redundant traces, we can make an equivalent string representation of the dependency data structure using a concatenation of the two boundary component names (e.g., A->B becomes string AB). This string representation and the dependency data structure itself are stored as key-value pairs in a hash map. When new dependencies are detected, we check for a redundancy with pre-existing string entries in the hash map, and if such a redundancy/match is detected, we do not add this newly created dependency data structure to the hash map; para. 62, 72: regarding the correlation data, in addition to sending CorrId and SeqId, this solution adds another key-value pair (DependencyFlag, true or false). Whenever an agent decides that a dependency is unique and sends the dependency structure to the manager, it sets the dependency flag key value to true in the request header);
identifying, based on the first keys and the first values generated for the plurality of instances, a subset of substructures that meet one or more specified criteria (abstract: data structures are created which identify the components and their dependencies on one another. To avoid excessive overhead costs, redundant dependencies are identified. When repeated instances of a dependency are detected, the associated dependency data structure can be augmented with correlation data of the repeated instances, such as transaction identifiers and sequence identifiers – see fig. 4B, items 412, 416; fig. 5B, items 514, 518: compare created dependency data structure to dependency data structures in cache, discard created dependency data structure; fig. 6B: compare created dependency data structure to dependency data structures in cache, and if a match (equivalent to the specified criteria) is found, determine if the name of the calling component in combination with the created dependency data structure is new; if not new, discard the created subgraph/dependency data structure; para. 71: we create dependency structures from the lightweight stack, and use a redundancy removal process to determine if a matching dependency structure already exists in the agent thread cache. If a match exists, we check the incoming request headers to see if the dependency flag is true or false. If it is true, the agent sends the dependency structure to the manager. If it is false, we discard the dependency structure without sending it to the manager);
compressing the graph by at least partially removing nodes or edges that are associated with the subset of substructures, wherein the compressed graph is used to identify the plurality of instances in a subsequent iteration of the plurality of iterations (para. 53-55: when adding the boundaries of a transaction to the lightweight stack, it will be very likely that many traces are identical to each other in their dependency structure. To remove these redundant traces, we can make an equivalent string representation of the dependency data structure using a concatenation of the two boundary component names (e.g., A -> B becomes string AB). This string representation and the dependency data structure itself are stored as key-value pairs in a hash map. When new dependencies are detected, we check for a redundancy with pre-existing string entries in the hash map, and if such a redundancy/match is detected, we do not add this newly created dependency data structure to the hash map. Thus, the size of the associated data structure/graph of datasets is reduced/compressed; para. 71: fig. 5B depicts a second process for identifying dependencies among components. This process focuses on identifying all unique nodes);
classifying, based on the determining, the first transaction as an abnormal transaction (para. 12: receiving dependency data structures at a manager, the dependency data structures include identifiers of components of the application which execute in different threads of a transaction, and an indication of caller-callee relationships among the components, (b) preparing graph data based on the dependency data structures, the graph data is based on the identifiers of the components and the indication of the caller-callee relationships; para. 36: receives information from the probes and may communicate the information to another process, such as at the manager 120, or process the information locally, such as to determine whether the information indicates an abnormal condition. For example, the information from the probes may indicate start and stop times of a transaction or other execution flow, or of individual components within a transaction/execution flow).
Krishnamurthy et al. teaches nodes’ names but does not explicitly teach nodes’ labels.
Covell teaches at col. 8:23-45: the cluster labeling module 240 reassigns labels to clusters such that the new mapping has an improved aggregate strength of association between exemplars of each cluster and the mapped label compared to the previous mapping. The process of determining a mapping that maximizes the overall strength of association between clusters and labels can be solved using a combinatorial optimization algorithm. In an embodiment, the clusters and labels are represented as nodes of a graph. An edge connecting a cluster to a label is associated with a weight representing the strength of association between the cluster and the label. The graph between the clusters and labels so obtained is a weighted bipartite graph in which all the edges are from the set of nodes representing the clusters to the set of nodes representing the labels. The bipartite graph representation can be used to determine a mapping that optimizes the strength of association between clusters and labels using the Hungarian-matching algorithm that performs maximum weight matching of the bipartite graph connecting clusters to labels. The maximum weight matching algorithm swaps two edges of the bipartite graph if the swapping improves the aggregate strength of association between clusters and labels for the bipartite graph.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy et al. with the clustering/grouping of nodes and the labeling of nodes of Covell in order to effectively analyze/compare nodes with the available clusters for classifying substructures of nodes or datasets.
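For illustration of Covell's cited cluster-labeling step (maximum-weight matching on a weighted bipartite graph between clusters and labels), the assignment might be sketched as below. The weights are made up, and a brute-force search over permutations stands in for the Hungarian algorithm Covell names, which computes the same optimum:

```python
# Illustrative sketch of Covell col. 8:23-45: clusters and labels are nodes
# of a weighted bipartite graph, and labels are reassigned to clusters so the
# aggregate strength of association is maximized. The Hungarian algorithm
# solves this efficiently; for this tiny hypothetical example, brute force
# over all permutations suffices and returns the same maximum-weight matching.
from itertools import permutations

# weight[i][j] = hypothetical strength of association: cluster i <-> label j
weight = [
    [3, 1, 2],
    [2, 4, 6],
    [5, 9, 1],
]

def best_assignment(weight):
    n = len(weight)
    best, best_perm = -1, None
    for perm in permutations(range(n)):  # perm[i] = label mapped to cluster i
        total = sum(weight[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

total, mapping = best_assignment(weight)
print(total, mapping)  # 18 (0, 2, 1)
```

Here the aggregate strength 18 is achieved by mapping cluster 0 to label 0, cluster 1 to label 2, and cluster 2 to label 1, mirroring the edge-swapping described in the quoted passage.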
Krishnamurthy and Covell do not teach determining that a particular key corresponding to a first transaction of the plurality of transactions is excluded from the hash map, wherein the particular key is different from the first key; classifying, based on the determining, the first transaction as a fraudulent transaction.
Fu teaches
determining that a particular key corresponding to a first transaction of the plurality of transactions is excluded from the hash map, wherein the particular key is different from the first key; classifying, based on the determining, the first transaction as a fraudulent transaction (para. 13: the machine learning models may be classification models, such as for determining whether or not a transaction is fraudulent, or whether or not an attempt to log in to a secure data server is legitimate; para. 74-75: the configuration database may be a NoSQL database, such as Redis. Redis is a key-value database, and can store data types other than (and including) strings. Table 410 is a table with some key-value pairs. One key may be product_model_config, which maps to a hash map of other information. Another key in the table 410 is ETDC_AppNames, which maps to a table 420 that links to information about the applications, like App1, which has information about the application. Table 410 may also include a hashmap containing the configuration information. Table 410 may have a key for a particular machine learning model, such as product_model_config. The value of table 410 may be another table 430. Table 430 may be a hashmap. One element of table 430 may be a list of user identifiers (e.g., PANs) in table 440. Thus, the transaction with key-value pairs that are not recorded in the hashmap is a fraudulent transaction.)
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy and Covell with classifying the first transaction as a fraudulent transaction as taught by Fu in order to effectively prevent unauthorized/bad transactions.
Krishnamurthy and Covell do not teach the hash map including a substructure hash map and an overlapping hash map, or determining that a particular key is excluded from the substructure hash map or from the overlapping hash map.
Algotar et al. teaches
the hash map including a substructure hash map and an overlapping hash map; determining that a particular key is excluded from the substructure hash map or from the overlapping hash map (para. 19, 25-28: user actions stored in node 110 and node 112 can be the most recent two actions performed by a user and may be submitted to make a prediction of a next user action or product a user may want. Using an internal prediction, the first step involves searching node 110 as a value on the subgraph of the multimap for internal prediction 200 using recursive backtracking to find the key for its value and storing these values in a first hashmap data structure in memory referred to as hashmap A. After the backtracking for node 110 is complete, the resulting hashmap A for user activity node 110 is {108, 106, 104, 102}/substructure hash map. At the end, the resulting hashmap for the user activity represented by node 112 is {104, 102}. This process of generating hashmaps can be done for any number of user actions corresponding to nodes on the subgraph of the multimap for internal prediction 200. The intersecting of two hashmaps results in an overlap hashmap which includes values common to both hashmap A and hashmap B. In this example, the overlap hashmap, hashmap C, is {104, 102}, as both hashmap A and hashmap B contained these values; para. 22, 56: example 2 includes the method of example 1, including or excluding optional features. In this example, the multimap prediction is an internal prediction made by intersecting multiple attached key hashmaps, each generated by identifying attached keys through backtracking nodes for each user action input received).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy and Covell with the hash map including a substructure hash map and an overlapping hash map as taught by Algotar in order to effectively allow a user or application to access a prediction made through use of a multimap based on stored user input – See Algotar, para. 1.
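For illustration of the cited backtracking and intersection in Algotar (para. 25-28), the substructure and overlap hashmaps might be sketched as below, using the node numbers from the cited example; the parent links are assumed from those numbers and are not taken verbatim from the reference:

```python
# Illustrative sketch of Algotar para. 25-28: backtracking from each
# user-action node collects its ancestor keys into a per-node hashmap
# (the "substructure" hashmap), and intersecting two such hashmaps
# yields the overlap hashmap of values common to both.
# The parent links below are assumed for illustration.
parent = {110: 108, 108: 106, 106: 104, 104: 102, 112: 104}

def backtrack(node):
    """Collect the keys reachable by backtracking from node toward the root."""
    keys = set()
    while node in parent:
        node = parent[node]
        keys.add(node)
    return keys

hashmap_a = backtrack(110)         # {108, 106, 104, 102} -- substructure hashmap
hashmap_b = backtrack(112)         # {104, 102}
hashmap_c = hashmap_a & hashmap_b  # overlap hashmap: values common to both
print(sorted(hashmap_c))           # [102, 104]
```

A key absent from both the substructure hashmap and the overlap hashmap would then be "excluded" in the sense of the claim limitation mapped above.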
As per claims 2, 12, 17, Krishnamurthy et al. teaches
wherein the generating the one or more descriptions comprises: selecting a first node that is assigned a highest-ordered label; identifying a next node directly connected to the first node, wherein the next node is a) assigned a second-ordered label having a same tier as the highest-ordered label or a next tier lower than the highest-ordered label, or b) connected directly with a highest number of nearby nodes (para. 90: typically, a selected portion of the graph data is accessed at a time based on the user's commands; para. 94: the tree region 804 identifies a hierarchy of nodes which can be expanded and collapsed to see additional nodes or fewer nodes, respectively, and to control the graph data which is accessed for display in the region 806. "Super Domain" is the top node/highest-ordered label. A sub-node is "Axis Agent" which includes a number of further sub-nodes: "Manager Host," which is the host on which the manager process runs, "Java Version," "Launch Time," "ProcessID," and "Virtual Machine);
identifying a final node, wherein the final node has no nearby node which has not been accessed, wherein the one or more descriptions are generated based on a sequence of the first node, the next node, and the final node (para. 12: accessing the graph data based on the user command, and (e) providing a graphical user interface using the accessed portion of the graph data, where the graphical user interface depicts caller-callee relationships among the at least a portion of the components, and names of services of the application which are represented by the at least a portion of the components; para. 39: an example sequence of components invoked in a transaction; para. 46: selected components, such as the boundary components of transactions, e.g., components which execute at the beginning and end of a transaction, may be instrumented).
Krishnamurthy and Covell do not explicitly teach from the sequence of nodes.
Algotar et al. teaches said limitation (figs. 2-4: sequences of nodes; para. 25-28: user actions stored in node 110 and node 112 can be the most recent two actions performed by a user and may be submitted to make a prediction of a next user action or product a user may want. Using an internal prediction, the first step involves searching node 110 as a value on the subgraph of the multimap for internal prediction 200 using recursive backtracking to find the key for its value and storing these values in a first hashmap data structure in memory referred to as hashmap A. After the backtracking for node 110 is complete, the resulting hashmap A for user activity node 110 is {108, 106, 104, 102}/substructure hash map. At the end, the resulting hashmap for the user activity represented by node 112 is {104, 102}. This process of generating hashmaps can be done for any number of user actions corresponding to nodes on the subgraph of the multimap for internal prediction 200. The intersecting of two hashmaps results in an overlap hashmap which includes values common to both hashmap A and hashmap B. In this example, the overlap hashmap, hashmap C, is {104, 102}, as both hashmap A and hashmap B contained these values).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy and Covell with the hash map including a substructure hash map and an overlapping hash map as taught by Algotar in order to effectively allow a user or application to access a prediction made through use of a multimap based on stored user input – See Algotar, para. 1.
As per claims 3, 13, 18, Krishnamurthy et al. teaches
wherein the next node comprises a plurality of next nodes each sequentially connected based on respective labels of the plurality of next nodes (para. 52: a dependency data structure may represent a set of two or more nodes with a transition or edge between them, or just a single node. The transitions can be across agents, e.g., across JVMs. Dependency data structures can be detected as part of a single agent transaction; figs. 7a-7b: user provides command via user interface, access graph data based on command, create graph using graph data to show dependencies).
As per claims 4, 14, Krishnamurthy et al. teaches
wherein the generating the hash map further comprises: for each instance of the plurality of instances: generating a second key and a second value for an instance based on a second hash for the instance, wherein the second key for the instance comprises a second description of the instance, and wherein the second value for the instance comprises a second identifier of the instance (para. 55: string representation and the dependency data structure itself are stored as key-value pairs in a hash map. When new dependencies are detected, we check for a redundancy with pre-existing string entries in the hash map, and if such a redundancy/match is detected, we do not add this newly created dependency data structure to the hash map. Since we do not store the redundant dependency data structures, we potentially could lose some transaction granularity. For example, we might not be able to distinguish between transactions A->B and A->B->C, where B->C is a cross-JVM transaction. To avoid losing this important information, we can keep the correlation data, CorrId and SeqId, of the redundant traces.)
As per claim 5, Krishnamurthy et al. teaches
wherein the generating the hash map further comprises: identifying a second instance from the one or more descriptions, wherein the second instance comprises two or more second nodes connected via at least one second edge (para. 9: if the matching dependency is not associated with the second information, adding the second information to a dependency data structure of the matching dependency in the store, and (e) communicating dependency data structures in the store to the manager for use in providing a graphical user interface which depicts dependencies among components of the application; para. 45),
generating a third key and a third value based on a third hash for the second instance, wherein the third key for the second instance comprises a third description of the second instance, and wherein the third value for the second instance comprises a third identifier of the second instance (para. 62: or fewer nodes, respectively, and to control the graph data which is accessed for display in the region; para. 66-68: when the first entry is made, no other dependencies involving C have yet been noted by agent C. When a subsequent, second invocation of C occurs, a second entry is made in the dependency data structure which adds the correlation data SeqId=1:1 and CorrId=GUID3, without creating a new dependency data structure).
Krishnamurthy and Covell do not explicitly teach and wherein the two or more second nodes share a common node.
Algotar et al. teaches said limitation (para. 25-28: user actions stored in node 110 and node 112 can be the most recent two actions performed by a user and may be submitted to make a prediction of a next user action or product a user may want. Using an internal prediction, the first step involves searching node 110 as a value on the subgraph of the multimap for internal prediction 200 using recursive backtracking to find the key for its value and storing these values in a first hashmap data structure in memory referred to as hashmap A. After the backtracking for node 110 is complete, the resulting hashmap A for user activity node 110 is {108, 106, 104, 102}/substructure hash map. At the end, the resulting hashmap for the user activity represented by node 112 is {104, 102}. This process of generating hashmaps can be done for any number of user actions corresponding to nodes on the subgraph of the multimap for internal prediction 200. The intersecting of two hashmaps results in an overlap hashmap which includes values common to both hashmap A and hashmap B. In this example, the overlap hashmap, hashmap C, is {104, 102}, as both hashmap A and hashmap B contained these values).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy and Covell with the hash map including a substructure hash map and an overlapping hash map as taught by Algotar in order to effectively allow a user or application to access a prediction made through use of a multimap based on stored user input – See Algotar, para. 1.
As per claim 6, Krishnamurthy teaches at para. 54: identify and remove redundant traces. Krishnamurthy and Covell do not explicitly teach claim 6.
Algotar et al. teaches
wherein: the substructure hash map is usable to consolidate repetitive substructures in a plurality of substructures associated with the plurality of instances (para. 11: provide highly accurate recommendation by combining both the past activities of individuals and the features of the objects or ideas to be predicted; para. 22: each node in the multimap and the subgraph of the multimap graph can store actions and can be connected by an edge to a second node, a value node, with another action or keyword that can be returned as a prediction. This subgraph of the multimap graph can be much smaller in size as it has been matched from user action and keywords from only a limited time frame. This subgraph of the multimap graph can be sent or downloaded by a device to make the prediction on a repeating basis);
the overlapping hash map is usable to identify a group of common edges in the plurality of substructures (para. 25-28: user actions stored in node 110 and node 112 can be the most recent two actions performed by a user and may be submitted to make a prediction of a next user action or product a user may want. Using an internal prediction, the first step involves searching node 110 as a value on the subgraph of the multimap for internal prediction 200 using recursive backtracking to find the key for its value and storing these values in a first hashmap data structure in memory referred to as hashmap A. After the backtracking for node 110 is complete, the resulting hashmap A for user activity node 110 is {108, 106, 104, 102}/substructure hash map. At the end, the resulting hashmap for the user activity represented by node 112 is {104, 102}. This process of generating hashmaps can be done for any number of user actions corresponding to nodes on the subgraph of the multimap for internal prediction 200. The intersecting of two hashmaps results in an overlap hashmap which includes values common to both hashmap A and hashmap B. In this example, the overlap hashmap, hashmap C, is {104, 102}, as both hashmap A and hashmap B contained these values).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy and Covell with the hash map including a substructure hash map and an overlapping hash map as taught by Algotar in order to effectively allow a user or application to access a prediction made through use of a multimap based on stored user input – See Algotar, para. 1.
As per claim 7, Krishnamurthy et al. teaches
wherein the generating the hash map further comprises grouping instances which have a highest number of common edges, and wherein the grouped instances share the first key and the first value (para. 10: if there is the matching dependency in the store, adding the sequence identifier to a dependency data structure of the matching dependency in the store; para. 55: to remove these redundant traces, we can make an equivalent string representation of the dependency data structure using a concatenation of the two boundary component names (e.g., A->B becomes string AB). This string representation and the dependency data structure itself are stored as key-value pairs in a hash map; para. 62: or fewer nodes, respectively, and to control the graph data which is accessed for display in the region; para. 94: "Super Domain" is the top node. A sub-node is "Axis Agent" which includes a number of further sub-nodes: "Manager Host," which is the host on which the manager process runs, "Java Version," "Launch Time," "ProcessID," and "Virtual Machine.")
As per claims 8, 15, Krishnamurthy et al. does not explicitly teach said claims.
Covell teaches
wherein the operations further comprise: setting at least one parameter for classifying the first transaction, wherein the at least one parameter comprises at least one of a complexity of the first transaction or a number of patterns per iteration to classify the first transaction (col. 9:40-48: the cluster labeling module 240 repeats 360 the overall process to generate multiple prediction models. The process can be repeated by varying various parameters associated with the process, for example, by using different input sets of exemplars for determining the initial set of clusters, by using a different machine learning technique, or by using different sets of feature vectors for representing the exemplars);
generating a plurality of background parameters based on the at least one parameter for classifying the first transaction, wherein the plurality of background parameters comprise at least one of a number of iterations, a number of initial patterns, or a maximum number of iterations extended from the at least one of a complexity of the first transaction or a number of patterns per iteration to classify the first transaction (col. 5:62-65: in general, k binary classifiers can be used to divide the input set of exemplars into 2.sup.k clusters, each cluster representing exemplars with a particular combination of characteristics. The binary values corresponding to a cluster can be used to represent a label corresponding to the cluster; col. 8:23-63: the reassignment of labels of clusters is repeated multiple times to improve the aggregate measure of strength of association between the exemplars of the clusters and the assigned labels. The process of reassignment can be repeated until the incremental improvement in the aggregate measure of strength of association between two subsequent reassignments is below a threshold level. Alternatively, the process of reassignment can be repeated for a predetermined number of iterations; col. 9:42-48).
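For illustration only (not part of the record), the mechanism of Covell col. 5:62-65 — k binary classifiers dividing exemplars into 2^k clusters, with the k binary values forming the cluster label — can be sketched as follows; the classifiers here are toy assumptions:

```python
# Sketch of Covell's labeling scheme: k binary classifiers partition
# exemplars into 2**k clusters, and the concatenated binary outputs
# form the label of the cluster an exemplar falls into.

def cluster_label(exemplar, classifiers) -> int:
    """Combine k binary decisions into one of 2**k cluster labels."""
    label = 0
    for clf in classifiers:
        label = (label << 1) | (1 if clf(exemplar) else 0)
    return label

# Example with k = 2 toy classifiers over integer exemplars:
classifiers = [lambda x: x > 10, lambda x: x % 2 == 0]
print([cluster_label(x, classifiers) for x in [4, 7, 12, 15]])
# [1, 0, 3, 2]
```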
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy et al. with the setting of at least one parameter for classifying an operation/transaction of Covell in order to effectively match node labels with the available clusters for classifying substructures of nodes or datasets.
Algotar et al. also teaches at para. 19: the multimap data structure allows the addition of values or keys to a particular node without disrupting the rest of the multimap data structure. The ability to add new products, user actions, and keywords through the addition of new key nodes or value nodes allows the use of this data structure without remaking from scratch the graph for each update in these parameters.
As per claims 9, 19, Krishnamurthy et al. teaches
wherein each dataset of the group of datasets comprises two nodes connected via a respective edge, and wherein a structure of each dataset from the group of datasets is a bipartite graph comprising two disjoint sets of nodes connected via edges (para. 48: CorrId is used to determine which dependencies are related to one another, and the other correlation data determine the direction of the edge (e.g., A2->B, not B->A2); para. 52: a dependency data structure may represent a set of two or more nodes with a transition or edge between them, or just a single node. The transitions can be across agents, e.g., across JVMs. Dependency data structures can be detected as part of a single agent transaction).
As per claims 10, 20, Krishnamurthy et al. teaches
wherein the operations further comprise: identifying a neighborhood node for a starting node in the graph as an initial instance, wherein the neighborhood node is at least two edges apart from the starting node (fig. 9a, nodes 912 and 916: neighbor/adjacent nodes which are two edges apart from node 908, the starting node in the graph as an initial instance);
generating a fourth key and a fourth value based on a fourth hash for the initial instance, wherein the fourth key comprises the starting node, and wherein the fourth value comprises the neighborhood node connected to the starting node (para. 45-46: components B (304), C (306) and E (308), are at second, third and fourth layers of the call stack, respectively. After A begins to execute, at the start of a transaction, B is called. After B begins to execute, C is called. After C begins to execute, E is called. After E, C and B successively finish executing, component A and the transaction finish executing. Selected components, such as the boundary components of transactions, e.g., components which execute at the beginning and end of a transaction, may be instrumented);
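For illustration only (not part of the record), the mapping of claims 10/20 above — finding a neighborhood node two edges from a starting node and forming a key-value pair from the pair — can be sketched as follows; the graph representation and function name are hypothetical, with a toy graph echoing the shape of fig. 9a:

```python
# Sketch: identify nodes exactly two edges from a starting node, then
# generate a key-value pair whose key is the starting node and whose
# value is the connected neighborhood node.

def two_edge_neighbors(graph: dict, start):
    """Nodes reachable in exactly two edges, excluding the start itself."""
    one_hop = set(graph.get(start, ()))
    two_hop = {n for m in one_hop for n in graph.get(m, ())}
    return two_hop - one_hop - {start}

# Toy graph: 908 -> 910 -> {912, 916}, so 912 and 916 are two edges out.
graph = {908: [910], 910: [912, 916]}
for neighbor in sorted(two_edge_neighbors(graph, 908)):
    key, value = 908, neighbor          # the "fourth key" / "fourth value"
    print(key, value)
# 908 912
# 908 916
```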
Krishnamurthy et al. does not explicitly teach bipartite graph.
Covell teaches said limitation at col. 8:23-45: the cluster labeling module 240 reassigns labels to clusters such that the new mapping has an improved aggregate strength of association between exemplars of each cluster and the mapped label compared to the previous mapping. The process of determining a mapping that maximizes the overall strength of association between clusters and labels can be solved using a combinatorial optimization algorithm. In an embodiment, the clusters and labels are represented as nodes of a graph. An edge connecting a cluster to a label is associated with a weight representing the strength of association between the cluster and the label. The graph between the clusters and labels so obtained is a weighted bipartite graph in which all the edges are from the set of nodes representing the clusters to the set of nodes representing the labels. The bipartite graph representation can be used to determine a mapping that optimizes the strength of association between clusters and labels using the Hungarian-matching algorithm that performs maximum weight matching of the bipartite graph connecting clusters to labels. The maximum weight matching algorithm swaps two edges of the bipartite graph if the swapping improves the aggregate strength of association between clusters and labels for the bipartite graph).
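For illustration only (not part of the record), the maximum-weight matching of Covell col. 8:23-45 — clusters and labels as the two node sets of a weighted bipartite graph, with the mapping chosen to maximize aggregate strength of association — can be sketched as follows. Covell uses the Hungarian algorithm; for a small example, an exhaustive permutation search (stdlib only) finds the same optimum:

```python
# Brute-force sketch of maximum-weight bipartite matching between
# clusters and labels: try every one-to-one assignment and keep the
# one with the greatest total strength of association.
from itertools import permutations

def best_mapping(weights):
    """weights[i][j] = strength of association of cluster i with label j."""
    n = len(weights)
    best = max(permutations(range(n)),
               key=lambda p: sum(weights[i][p[i]] for i in range(n)))
    return list(best)

weights = [[3, 1], [2, 5]]            # 2 clusters x 2 labels
print(best_mapping(weights))          # [0, 1]: total strength 3 + 5 = 8
```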
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnamurthy et al. with the clustering/grouping of nodes and labels of nodes of Covell in order to effectively analyze/compare nodes with the available clusters for classifying substructures of nodes or datasets.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chao et al. (US 2010086017) teaches a bipartite graph in fig. 15.
Majumdar teaches at para. 61: combine an iterative contraction process with a function computed on the graphs that is derived from a canonical method to partition out locales of a larger graph into representative sub-maps. The process will contract the graphs (i.e., submaps) and compute a locality-based index for the sub-maps; see also para. 73 and 146. Joglekar (US 20210103937) teaches at para. 128-129: specific transactions as fraudulent.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINH BLACK whose telephone number is (571)272-4106. The examiner can normally be reached 9AM-5PM EST M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LINH BLACK/Examiner, Art Unit 2163 3/12/2026
/TONY MAHMOUDI/Supervisory Patent Examiner, Art Unit 2163