Prosecution Insights
Last updated: April 19, 2026
Application No. 17/657,625

CAUSAL EVENT PREDICTION FOR EVENTS

Status: Non-Final OA (§103)
Filed: Mar 31, 2022
Examiner: NAULT, VICTOR ADELARD
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: BMC Software, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 62% of resolved cases.

Career Allow Rate: 62% (8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +83.3% (strong; based on resolved cases with interview)
Avg Prosecution: 3y 11m (typical timeline)
Currently Pending: 30
Total Applications: 43 (across all art units)

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 13 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/02/2026 has been entered.

Remarks

This Office Action is responsive to Applicant's Amendment filed on March 02, 2026, in which claims 1, 6, 11, and 16 are amended. No claims have been newly added or cancelled. Claims 1, 3-6, 8-11, 13-16, and 18-20 are currently pending.

Response to Arguments

Claims 1, 3, 4, 6, 8-10, 11, 13, 14, 16, and 18-20 were rejected under 35 U.S.C. 103 as being unpatentable over Song et al., “KatGCN: Knowledge-Aware Attention based Temporal Graph Convolutional Network for Multi-Event Prediction”, in view of Hu et al. (Chinese Patent Application Publication No. 110866190), further in view of Morris et al. (U.S. Patent Application Publication No. 2019/0378010), further in view of Yang et al. (U.S. Patent Application Publication No. 2022/0076101). Applicant's arguments that the claims as amended overcome these rejections are persuasive; however, the arguments are moot in view of a new ground of rejection, as presented below. Examiner additionally notes that no rejection in the current office action relies upon Yang, and that while Hu is relied upon in the rejections of some claims, Hu is not relied upon to teach the newly amended “spatiotemporal embedding layers”.
Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4, 11, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Song et al., “KatGCN: Knowledge-Aware Attention based Temporal Graph Convolutional Network for Multi-Event Prediction”, hereinafter Song, in view of Kapoor et al., “Examining COVID-19 Forecasting using Spatio-Temporal Graph Neural Networks”, hereinafter Kapoor, further in view of Hu et al. (Chinese Patent Application Publication No. 110866190), hereinafter Hu, further in view of Morris et al. (U.S. Patent Application Publication No. 2019/0378010), hereinafter Morris.

Regarding claim 1, Song teaches A computer-implemented method for training a graph neural network (GNN) for predicting an edge probability, the method comprising: ((Song Abstract) “we propose a novel Knowledge-aware attention based temporal Graph Convolutional Network (KatGCN) for predicting multiple co-occurring events of different types”, proposing a new machine learning model architecture includes means for training it) receiving, at an input layer of the GNN, an event graph (Song Fig. 2 shows event graphs being preprocessed, then given as input to LSTMs and then to the input layer of an MLP) representing a plurality of nodes and edge probabilities between a plurality of pairs of nodes ((Song Pg. 1) “Each event graph with a timestamp is composed of multiple co-occurring events of different types, including event actors (as nodes) and event types (as edges)”, (Song Pg. 3) “Considering that the event graph is a multi-relation graph, the embedding of relations (edges) cannot be ignored. Motivated by GAT [17], we propose a new knowledge-aware attention mechanism, including entity-aware attention and relation-aware attention, to distinguish the importance of neighboring entities and relations”, broadest reasonable interpretation of an edge probability includes attention related to edges, which measures the importance of an edge in the connection between two nodes) at past times; ((Song Pg. 2) “TE graph is built on a sequence of event graphs in ascending time order…We denoted a set of events at time t as Gt = {(s, r, o)t}. A TE graph can be presented as G = {Gt-k, Gt-k+1, …, Gt}…We transform the task of multievent prediction into a multi-label classification problem to model the occurrence probability of different events at t + 1”, the TE graph represents events at times up to t, and time t + 1 will have events predicted, so the TE graph represents past times) generating, using the node embeddings, a new edge as a new edge probability ((Song Pg. 4) “We feed the Xt into a MLP to calculate the probability of different event types”, (Song Pg. 3) “Event graphs contain many edges, which represent event types”) between one pair of nodes from the plurality of pairs of nodes (Song Pg. 1, Fig. 1 shows at least one pair of nodes to have edge probabilities predicted for) at a future time… ((Song Pg. 4) “Through temporal encoding, we have obtained the historical embedding Xt up to time t.
Then, we model the probability of multiple co-occurring events in the future timestamp t + 1 based on TE graph”) and updating the GNN…based on the new edge probability at the future time and a known edge probability at the future time ((Song Pg. 4) “Next, we adopt the categorical cross-entropy [19] loss: [Equation 12] Where ŷi is the model prediction for event type i”, (Song Pg. 3) “Event graphs contain many edges, which represent event types”, y within the loss function would be the known edge probability, loss functions are used for updating neural networks, as more explicitly taught by Morris)

Kapoor teaches the following further limitation that Song does not explicitly teach: generating node embeddings for the [event] graph by executing a graph aggregation over spatiotemporal neighbor nodes of the plurality of nodes in the [event] graph within one or more spatiotemporal embedding layers of the GNN; ((Kapoor Pg. 3, Fig. 2 Caption) “A visualization of a 2-hop Skip-Connection model. Multiple layers of spatial aggregations are used on temporal embedding vectors. At each layer, the embedding of the seed-node (represented in blue) is concatenated and propagated up to the next embedding layer”, (Kapoor Pg. 2) “first, messages are propagated along the neighbors; and second, the messages are aggregated to obtain the updated representations”, (Kapoor Pg. 2) “Spatio-temporal graphs are a kind of graph that model connections between nodes as a function of time and space, and have found uses in a wide variety of fields [25].
GNNs have been successfully applied to spatio-temporal traffic graphs”, embedding layers of a GNN that uses spatiotemporal graphs are spatiotemporal embedding layers, aggregating messages that are propagated amongst neighbor nodes corresponds to a graph aggregation over the neighbor nodes, Song teaches a graph of events more explicitly)

At the time of filing, one of ordinary skill in the art would have motivation to combine Song and Kapoor by taking the method for training a graph neural network via predicting an edge probability within an event graph, as taught by Song, and generating node embeddings with an aggregation of spatiotemporal neighbor nodes at a spatiotemporal embedding layer, as taught by Kapoor, as Kapoor teaches: (Kapoor Pg. 2) “The core insight behind graph neural network models is that the transformation of the input node’s signal can be coupled with the propagation of information from a node’s neighbors in order to better inform the future hidden state of the original input”, and aggregating the neighbor nodes, via the embedding layers, accomplishes this. Such a combination would be obvious.
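The neighbor-aggregation step the rejection attributes to Kapoor can be pictured with a small sketch. This is purely illustrative and not drawn from Song, Kapoor, or the claims; the names `aggregate`, `embed`, and `neighbors` are hypothetical, and mean aggregation is just one common choice of aggregator.

```python
def aggregate(node, embed, neighbors):
    """Mean-aggregate a node's own embedding with its neighbors' embeddings."""
    vecs = [embed[node]] + [embed[n] for n in neighbors[node]]
    dim = len(embed[node])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Toy 2-D embeddings; in a spatiotemporal graph, the neighbor lists would
# include nodes adjacent in space and in nearby time steps alike.
embed = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
neighbors = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}

# One "embedding layer" pass: every node's vector is updated from its
# spatiotemporal neighborhood; stacking such passes widens the receptive field.
updated = {n: aggregate(n, embed, neighbors) for n in embed}
```

Stacking k such passes corresponds to the k-hop aggregation in Kapoor's skip-connection figure.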
Hu teaches the following further limitation that neither Song nor Kapoor explicitly teach: generating, using the node embeddings, a new edge as a new edge probability…using one or more middle layers of the GNN; ((Hu [0067]) “Step 23, in the node embedding layer of the graph neural network model, taking the first node and the second node as target nodes respectively, performing multi-level vector embedding based on the node attribute features of the target nodes and the neighbor node set of the target nodes, thereby obtaining the first higher-order vector corresponding to the first node and the second higher-order vector corresponding to the second node respectively; Step 24, determining the probability that the first node and the second node are connected through the first connecting edge based on the first higher-order vector, the second higher-order vector, and the first edge vector”, a node embedding layer corresponds to a middle layer of the graph neural network) At the time of filing, one of ordinary skill in the art would have motivation to combine Song, Kapoor, and Hu by taking the method for training a graph neural network via predicting an edge probability within an event graph, using node embeddings created by aggregation of spatiotemporal neighbor nodes at a spatiotemporal embedding layer, jointly taught by Song and Kapoor, and using the node embeddings to generate edges at a middle layer, as taught by Hu, as Hu teaches: (Hu [0015]) “The edge embedding layer and the node embedding layer are updated with the goal of maximizing the probability”, that is, that using a layer of a graph neural network to generate the edge probabilities allows for the edge probabilities to be maximized using neural network training techniques. Such a combination would be obvious. 
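Hu's cited step, scoring the probability that two nodes are connected from their (higher-order) vectors, is commonly implemented as a dot product squashed through a sigmoid. A minimal sketch under that assumption; the function and variable names are illustrative, not Hu's:

```python
import math

def edge_probability(u_vec, v_vec):
    # Score the candidate edge by the dot product of the two node
    # embeddings, then map the raw score into (0, 1) with a sigmoid.
    score = sum(a * b for a, b in zip(u_vec, v_vec))
    return 1.0 / (1.0 + math.exp(-score))

# Identical embeddings score high; orthogonal ones stay at exactly 0.5.
p_same = edge_probability([2.0, 0.0], [2.0, 0.0])
p_orth = edge_probability([2.0, 0.0], [0.0, 2.0])
```

Because the score is differentiable, this "middle layer" output can be pushed toward observed edges by ordinary gradient training, which is the point the rejection draws from Hu [0015].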
Morris teaches the following further limitation that neither Song, nor Kapoor, nor Hu explicitly teaches: and updating the GNN, at an output layer of the GNN, to an updated GNN… ((Morris [0027]) “The method where training the artificial neural network includes providing, to the artificial neural network, data including the one or more graph representations”, (Morris [0057]) “when training the neural network the output of the output layer may be used as a prediction and may be compared with a target value of a training instance to determine an error. The error may be used to update weights in each layer of the neural network…The loss function may be used to determine error when comparing an output value and a target value”, a neural network that uses graph representations as input corresponds to a graph neural network (GNN)) At the time of filing, one of ordinary skill in the art would have motivation to combine Song, Kapoor, Hu, and Morris by taking the method for training a graph neural network via predicting an edge probability within an event graph at a middle layer, using node embeddings created by aggregation of spatiotemporal neighbor nodes at a spatiotemporal embedding layer, taught jointly by Song, Kapoor, and Hu, and subsequently updating the graph neural network at an output layer, as taught by Morris, as it is well-known in the art to use an output layer’s output for updating a neural network, and updating the weights of the output layer in the process, as doing so results in iterative improvement of the neural network’s accuracy with minimal human labor required. Such a combination would be obvious. Regarding claim 3, Song, Kapoor, Hu, and Morris jointly teach The computer-implemented method of claim 1, Song further teaches: wherein the event graph includes topology changes associated with the plurality of nodes in the event graph (Song Pg. 1, Fig. 
1, shows the temporal event graph includes changes in the direction and type of the edges between the nodes in the graph, i.e. changes in the topology of the graph) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Song, Kapoor, Hu, and Morris for the parent claim of claim 3, claim 1. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claim 4, Song, Kapoor, Hu, and Morris jointly teach The computer-implemented method of claim 1, Song further teaches: wherein receiving the event graph includes processing the event graph in chronological order using the past times ((Song Pg. 1) “As shown in Fig. 1, temporal event graph is a sequence of event graph in ascending time order”, ascending time order is a chronological order, Song Pg. 3, Fig. 2 shows the temporal event graph is temporally encoded (processed) in order from first past time t-k to last past time t) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Song, Kapoor, Hu, and Morris for the parent claim of claim 4, claim 1. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claims 11, 13, and 14, Claims 11, 13, and 14 recite a computer-readable medium containing instructions for performing the function of the method of claims 1, 3, and 4, respectively. Specifically, claim 11 recites A computer program product for training a graph neural network (GNN) for predicting an edge probability, the computer program product being tangibly embodied on a non-transitory computer-readable medium and including executable code that, when executed, causes a computing device to: [perform the method of claim 1]. 
Hu recites: (Hu [0048]) “a computer-readable storage medium is provided having a computer program stored thereon, which, when executed in a computer, causes the computer to perform the method of the first aspect”. At the time of filing, one of ordinary skill in the art would have motivation to take the method for training a graph neural network taught jointly by Song, Kapoor, Hu, and Morris and implement it on a medium with code for the method embodied, as taught by Hu, as it is well-known within the art to encode executable code upon computer-readable media for distribution. All other limitations in claims 11, 13, and 14 are substantially the same as those in claim 1, 3, and 4, respectively, therefore the same rationale for rejection applies. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Song, in view of Kapoor, further in view of Hu, further in view of Morris, further in view of Creed et al. (U.S. Patent No. 12,106,217), hereinafter Creed, further in view of Sankar et al. (U.S. Patent Application Publication No. 2021/0326389), hereinafter Sankar. Regarding claim 5, Song, Kapoor, Hu, and Morris jointly teach The computer-implemented method of claim 1, Creed teaches the following further limitations more explicitly than Song, Kapoor, Hu, or Morris: receiving a new [event] graph ((Creed Col. 26, lines 36-40) “The updated GNN model may be used to predict link relationships between entities in the filtered entity-entity relationship graph by inputting to the updated GNN model representations of two or more entities and/or one or more relationships”, broadest reasonable interpretation of receiving an event graph includes inputting a representation of entities and relationships, Song teaches an event graph) processing the new event graph using the updated GNN ((Creed Col. 
26, lines 36-40) “The updated GNN model may be used to predict link relationships between entities in the filtered entity-entity relationship graph by inputting to the updated GNN model representations of two or more entities and/or one or more relationships”, broadest reasonable interpretation of a graph includes a relationship of entities and relationships, inputting a graph into an updated GNN for prediction includes processing the graph, Song teaches an event graph) and generating a different new edge for the new [event] graph as a different new edge probability [at a different future time] ((Creed Col. 26, lines 36-42) “The updated GNN model may be used to predict link relationships between entities in the filtered entity-entity relationship graph by inputting to the updated GNN model representations of two or more entities and/or one or more relationships; and receiving an indication of the likelihood of a link relationship existing between the two or more entities ”, a link relationship is an edge, Song teaches an event graph, Creed does not teach prediction at a different future time) At the time of filing, one of ordinary skill in the art would have motivation to combine Song, Kapoor, Hu, Morris, and Creed by taking the method of claim 1 resulting in an updated graph neural network, taught jointly by Song, Kapoor, Hu, and Morris, and using the updated graph neural network to process a new graph, as taught by Creed, as it is well-known in the art that using an increased amount of data to train a neural network results in the neural network generally increasing in accuracy, and using a graph neural network that has been updated with graph data it has already processed would predictably provide this improvement. Such a combination would be obvious. 
Although Song, Kapoor, Hu, Morris, and Creed jointly teach most of the limitation and generating a different new edge for the new event graph as a different new edge probability at a different future time, they do not teach edge generation at a different future time. However, Sankar teaches: (Sankar [0165]) “In the multi-step scenario, the embeddings can predict links at multiple future time steps {t+1, ..., t+Δ}” At the time of filing, one of ordinary skill in the art would have motivation to combine Song, Kapoor, Hu, Morris, Creed, and Sankar by taking the method of claim 1 resulting in an updated graph neural network, and using the updated graph neural network to process a new graph, taught jointly by Song, Kapoor, Hu, Morris, and Creed, and generating the edges at more than one future time, as taught by Sankar, as doing so would yield the predictable benefit of allowing the model greater flexibility in use than if it could only predict at one future time step. Such a combination would be obvious. Regarding claim 15, Claim 15 recites a computer-readable medium containing code for performing the function of the method of claim 5. All other limitations in claim 15 are substantially the same as those in claim 5, therefore the same rationale for rejection applies. Claims 6, 8-10, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Song, in view of Kapoor, further in view of Morris. Regarding claim 6, Song teaches A computer-implemented method for using a graph neural network (GNN) for predicting an edge probability, the method comprising: ((Song Abstract) “we propose a novel Knowledge-aware attention based temporal Graph Convolutional Network (KatGCN) for predicting multiple co-occurring events of different types”, proposing a new machine learning model architecture includes means for using it) receiving, at an input layer of the GNN, a current event graph (Song Fig. 
2 shows event graphs being preprocessed, then given as input to LSTMs and then to the input layer of an MLP) representing a plurality of nodes and edge probabilities between a plurality of pairs of nodes ((Song Pg. 1) “Each event graph with a timestamp is composed of multiple co-occurring events of different types, including event actors (as nodes) and event types (as edges)”, Examiner interprets edge probabilities in the current event graph to be the strength of the causal relationship between nodes and edges (Song Pg. 3) “Considering that the event graph is a multi-relation graph, the embedding of relations (edges) cannot be ignored. Motivated by GAT [17], we propose a new knowledge-aware attention mechanism, including entity-aware attention and relation-aware attention, to distinguish the importance of neighboring entities and relations”, broadest reasonable interpretation of an edge probability includes attention related to edges, which measures the importance of an edge in the connection between two nodes) at past times; ((Song Pg. 2) “TE graph is built on a sequence of event graphs in ascending time order…We denoted a set of events at time t as Gt = {(s, r, o)t}. A TE graph can be presented as G = {Gt-k, Gt-k+1, …, Gt}…We transform the task of multievent prediction into a multi-label classification problem to model the occurrence probability of different events at t + 1”, the TE graph represents events at times up to t, and time t + 1 will have events predicted, so the TE graph represents past times) and training the GNN…to generate a new edge as a new edge probability between one pair of nodes from the plurality of pairs of nodes at a future time ((Song Pg. 4) “Next, we adopt the categorical cross-entropy [19] loss: [Equation 12] Where ŷi is the model prediction for event type i”, (Song Pg.
3) “Event graphs contain many edges, which represent event types”, y within the loss function would be the known edge probability, loss functions are used for training neural networks, as more explicitly taught by Morris) from the node embeddings ((Song Pg. 3, Fig. 2 Caption) “Finally we feed the sequence of TE graph embedding into LSTM to capture the temporal dependence, and add a multi-layer perceptron (MLP) to predict the probability of co-occurring events at t + 1”, Song Pg. 3, Fig. 2 shows that the graph embedding includes node embeddings) Kapoor teaches the following further limitation that Song does not explicitly teach: generating node embeddings for the current [event] graph by executing a graph aggregation over spatiotemporal neighbor nodes of the plurality of nodes in the current [event] graph within one or more spatiotemporal embedding layers of the GNN; ((Kapoor Pg. 3, Fig. 2 Caption) “A visualization of a 2-hop Skip-Connection model. Multiple layers of spatial aggregations are used on temporal embedding vectors. At each layer, the embedding of the seed-node (represented in blue) is concatenated and propagated up to the next embedding layer”, (Kapoor Pg. 2) “first, messages are propagated along the neighbors; and second, the messages are aggregated to obtain the updated representations”, (Kapoor Pg. 2) “Spatio-temporal graphs are a kind of graph that model connections between nodes as a function of time and space, and have found uses in a wide variety of fields [25]. 
GNNs have been successfully applied to spatio-temporal traffic graphs”, embedding layers of a GNN that uses spatiotemporal graphs are spatiotemporal embedding layers, aggregating messages that are propagated amongst neighbor nodes corresponds to a graph aggregation over the neighbor nodes, Song teaches a graph of events more explicitly) At the time of filing, one of ordinary skill in the art would have motivation to combine Song and Kapoor by taking the method for training a graph neural network via predicting an edge probability within an event graph, as taught by Song, and generating node embeddings with an aggregation of spatiotemporal neighbor nodes at a spatiotemporal embedding layer, as taught by Kapoor, as Kapoor teaches: (Kapoor Pg. 2) “The core insight behind graph neural network models is that the transformation of the input node’s signal can be coupled with the propagation of information from a node’s neighbors in order to better inform the future hidden state of the original input”, and aggregating the neighbor nodes, via the embedding layers, accomplishes this. Such a combination would be obvious. Morris teaches the following further limitation that neither Song nor Kapoor explicitly teach: and training the GNN, at an output layer of the GNN ((Morris [0027]) “The method where training the artificial neural network includes providing, to the artificial neural network, data including the one or more graph representations”, (Morris [0057]) “when training the neural network the output of the output layer may be used as a prediction and may be compared with a target value of a training instance to determine an error. 
The error may be used to update weights in each layer of the neural network…The loss function may be used to determine error when comparing an output value and a target value”, a neural network that uses graph representations as input corresponds to a graph neural network (GNN)) using historical event graphs… ((Morris [0020]) “the system including: a graph module configured to store and update a graph including nodes and edges, where each node corresponds to an entity type, and where each edge represents a relationship between two nodes; a first interface configured to receive (i) historical data and (ii) current event data, where the (i) and (ii) are used to update the graph”, (Morris [0027]) “The system where the historical data is associated with a plurality of transactions between entities of the graph”, training with historical event data associated with entities of a graph corresponds to using historical event graphs for training) At the time of filing, one of ordinary skill in the art would have motivation to combine Song, Kapoor, and Morris by taking the method for training a graph neural network via predicting an edge probability within an event graph, using node embeddings created by aggregation of spatiotemporal neighbor nodes at a spatiotemporal embedding layer, taught jointly by Song and Kapoor, and training the graph neural network at an output layer using historical event graphs, as taught by Morris, as it is well-known in the art to perform training using historical data, including training the weights of a neural network’s output layer in the process, as doing so results in iterative improvement of the neural network’s accuracy with minimal human labor required. Such a combination would be obvious. 
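The categorical cross-entropy loss that Song's Equation 12 applies, and that Morris describes using to compare the output layer's prediction against a target before updating weights, reduces to a one-liner. A sketch (the helper name `cross_entropy` is illustrative):

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    # L = -sum_i y_i * log(y_hat_i); eps guards against log(0).
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))

# A confident correct prediction incurs near-zero loss; a flat
# (uninformative) prediction over 3 event types incurs log(3).
loss_good = cross_entropy([1, 0, 0], [0.99, 0.005, 0.005])
loss_flat = cross_entropy([1, 0, 0], [1/3, 1/3, 1/3])
```

The gradient of this loss with respect to the output layer is what backpropagation uses to update every layer of the GNN, per Morris [0057].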
Regarding claim 8, Song, Kapoor, and Morris jointly teach The computer-implemented method of claim 6, Song further teaches: wherein the current event graph includes topology changes associated with the plurality of nodes in the current event graph (Song Pg. 1, Fig. 1, shows the temporal event graph includes changes in the direction and type of the edges between the nodes in the graph, i.e. changes in the topology of the graph) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Song, Kapoor, and Morris for the parent claim of claim 8, claim 6. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claim 9, Song, Kapoor, and Morris jointly teach The computer-implemented method of claim 6, Song further teaches: wherein receiving the current event graph includes processing the current event graph in chronological order using the past times ((Song Pg. 1) “As shown in Fig. 1, temporal event graph is a sequence of event graph in ascending time order”, ascending time order is a chronological order, Song Pg. 3, Fig. 2 shows the temporal event graph is temporally encoded (processed) in order from first past time t-k to last past time t) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Song, Kapoor, and Morris for the parent claim of claim 9, claim 6. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claim 10, Song, Kapoor, and Morris jointly teach The computer-implemented method of claim 6, Morris further teaches: further comprising storing the new edge probability in a database ((Morris [0102]) “The plurality of nodes and the relationships between nodes may be stored in computer memory. 
For example, each node may be stored, in a first tabular database, as a row with a unique identification value (e.g., a key), whereas each relationship may be stored, in a second tabular database, as a row correlating two unique identifications and comprising a weighting value”) for further training of the trained GNN ((Morris [0020]) “the system including: a graph module configured to store and update a graph including nodes and edges, where each node corresponds to an entity type, and where each edge represents a relationship between two nodes; a first interface configured to receive (i) historical data and (ii) current event data, where the (i) and (ii) are used to update the graph”) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Song, Kapoor, and Morris for the parent claim of claim 10, claim 6. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claims 16 and 18-20, Claims 16 and 18-20 recite a computer-readable medium containing instructions for performing the function of the method of claims 6 and 8-10, respectively. Specifically, claim 16 recites A computer program product for using a graph neural network (GNN) for predicting an edge probability, the computer program product being tangibly embodied on a non-transitory computer-readable medium and including executable code that, when executed, causes a computing device to: [perform the method of claim 6]. Morris recites: (Morris [0162]) “One or more aspects of the disclosure may be embodied in computer-usable data or computer-executable instructions…The computer-executable instructions may be stored as computer-readable instructions on a computer readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, RAM, and the like”.
At the time of filing, one of ordinary skill in the art would have motivation to take the method for using a graph neural network taught jointly by Song, Kapoor, and Morris and implement it on a medium with code for the method embodied, as taught by Morris, as it is well-known within the art to encode executable code upon computer-readable media for distribution. All other limitations in claims 16 and 18-20 are substantially the same as, or broader than, those in claims 6 and 8-10; therefore the same rationale for rejection applies.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Deng “Predicting Social Events using Entity Interaction Graph Sequences” teaches the use of graph convolutional neural networks to predict events as edges within graphs of interacting entities. Chen et al. (U.S. Patent Application Publication No. 2021/0209472) teaches the use of a graph neural network for determining causal relationships between events. Yang et al. (U.S. Patent Application Publication No. 2022/0076101) teaches aggregation of spatio-temporal expressions of a first node, representing a user, within a graph.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR A NAULT whose telephone number is (703) 756-5745. The examiner can normally be reached M-F, 12-8. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /V.A.N./Examiner, Art Unit 2124 /Kevin W Figueroa/Primary Examiner, Art Unit 2124

Prosecution Timeline

Mar 31, 2022
Application Filed
Mar 11, 2025
Non-Final Rejection — §103
Sep 05, 2025
Interview Requested
Sep 16, 2025
Applicant Interview (Telephonic)
Sep 16, 2025
Examiner Interview Summary
Sep 18, 2025
Response Filed
Dec 03, 2025
Final Rejection — §103
Feb 17, 2026
Interview Requested
Mar 02, 2026
Request for Continued Examination
Mar 11, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579429: DEEP LEARNING BASED EMAIL CLASSIFICATION
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12566953: AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12561563: AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12468939: OBJECT DISCOVERY USING AN AUTOENCODER
Granted Nov 11, 2025 • 2y 5m to grant
Patent 12446600: TWO-STAGE SAMPLING FOR ACCELERATED DEFORMULATION GENERATION
Granted Oct 21, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 99% (+83.3% lift)
Median Time to Grant: 3y 11m
PTA Risk: High

Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
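The headline figure appears to follow directly from the career counts above. A sketch of the presumed rounding, an assumption about how the tool derives 62% rather than a documented formula:

```python
# Assumed derivation: allow rate = granted / resolved, rounded to a
# whole percent. 8 / 13 = 0.61538... -> 62%.
granted, resolved = 8, 13
allow_rate_pct = round(100 * granted / resolved)
```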
