Prosecution Insights
Last updated: April 19, 2026
Application No. 18/334,718

METHOD AND SYSTEM FOR EEG MOTOR IMAGERY CLASSIFICATION

Status: Non-Final OA §103
Filed: Jun 14, 2023
Examiner: WU, NICHOLAS S
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Tata Consultancy Services Limited
OA Round: 1 (Non-Final)
Grant Probability: 47% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 9m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 47% (18 granted / 38 resolved; -7.6% vs TC avg)
Interview Lift: +43.1% on resolved cases with interview (strong)
Avg Prosecution: 3y 9m (typical timeline)
Currently Pending: 44 applications
Total Applications: 82 (across all art units)

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 38 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-9, and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Demir, et al., Non-Patent Literature “EEG-GNN: Graph Neural Networks for Classification of Electroencephalogram (EEG) Signals” (“Demir”) in view of Jiang, et al., Non-Patent Literature “CensNet: Convolution with Edge-Node Switching in Graph Neural Networks” (“Jiang”) and further in view of Gong, et al., Non-Patent Literature “A Comparison of Loss Weighting Strategies for Multi task Learning in Deep Neural Networks” (“Gong”).

Regarding claim 1, Demir teaches: A processor implemented method comprising: receiving, via one or more hardware processors, (Demir, pg. 7 col. 2, “on NVIDIA Tesla K80 12GB GPU [A processor implemented method comprising: receiving, via one or more hardware processors,]”).
one or more Electroencephalogram (EEG) signals and corresponding ground truth labels comprising a ground truth graph label, a ground truth edge label, and a ground truth node label, wherein each of the one or more EEG signals comprise a plurality of channels; (Demir, abstract, “Convolutional neural networks (CNN) have been frequently used to extract subject-invariant features from electroencephalogram (EEG) for classification tasks [one or more Electroencephalogram (EEG) signals and corresponding ground truth labels]… Furthermore, we develop various graph neural network (GNN) models that project electrodes onto the nodes of a graph, where the node features are represented as EEG channel samples collected over a trial [wherein each of the one or more EEG signals comprise a plurality of channels;]”, and Demir, pg. 5 Fig. 2, “Following (i), we aggregate the node representations from the final iteration of graph convolution via READOUT function to learn the representation vector of the entire graph. Then, the graph representation vector is classified by a multi-layer perceptron (MLP) using softmax activation at the output layer; classifying the whole graph is interpreted as having ground truth labels for nodes and edges of a graph (i.e. comprising a ground truth graph label, a ground truth edge label, and a ground truth node label,).”).

extracting, via the one or more hardware processors, a plurality of temporal embeddings corresponding to each of the plurality of channels of each of the one or more EEG signals using a temporal feature extractor; (Demir, pg. 1 col. 2, “EEG-GNN properly maps the network of the brain as a graph, where each electrode used to collect EEG data according to intl. 10-5 system represents a node in the graph and time samples acquired from an electrode corresponds to that node’s feature vector [extracting, via the one or more hardware processors, a plurality of temporal embeddings…using a temporal feature extractor].”, and Demir, abstract, “Furthermore, we develop various graph neural network (GNN) models that project electrodes onto the nodes of a graph, where the node features are represented as EEG channel samples collected over a trial [corresponding to each of the plurality of channels of each of the one or more EEG signals]”).

constructing, via the one or more hardware processors, one or more graphs corresponding to each of the one or more EEG signals, wherein each of the one or more graphs comprise a plurality of nodes corresponding to the plurality of channels (Demir, pg. 1 col. 2, “EEG-GNN properly maps the network of the brain as a graph, where each electrode used to collect EEG data according to intl. 10-5 system represents a node in the graph and time samples acquired from an electrode corresponds to that node’s feature vector [constructing, via the one or more hardware processors, one or more graphs corresponding to each of the one or more EEG signals, wherein each of the one or more graphs comprise a plurality of nodes corresponding to the plurality of channels].”).

and a weighted adjacency matrix defining connectivity between each pair of nodes among the plurality of nodes, and wherein each of the plurality of nodes is associated with the plurality of temporal embeddings of the corresponding channel; (Demir, pg. 1 col. 2, “Adjacency matrix of this graph can be constructed flexibly, e.g., i) every pair of nodes is connected by an unweighted edge, ii) every pair of nodes is connected by an edge weighted by the functional neural connectivity factor, which is the Pearson correlation coefficient between the feature vectors of the two nodes [and a weighted adjacency matrix defining connectivity between each pair of nodes among the plurality of nodes, and wherein each of the plurality of nodes is associated with the plurality of temporal embeddings of the corresponding channel;]”).

and iteratively training, via the one or more hardware processors, a Graph Neural Network (GNN), the weighted adjacency matrix, a graph classifier,…for a plurality of pre-defined number of iterations for classifying the one or more EEG signals by: (Demir, pg. 7 col. 2, “We use PyTorch Geometric v1.8.0 [13] to implement GNN variants. All models were trained [and iteratively training, via the one or more hardware processors, a Graph Neural Network (GNN), the weighted adjacency matrix,] with a minibatch size of 256 for 400 epochs on NVIDIA Tesla K80 12GB GPU, and optimized by Adam with an initial learning rate of 0.001, which decays into half every 50 epochs. EEG trials from all subjects were shuffled, first 80% of EEG trials was used for training, and the last 20% was used for validation. If there was an improvement in classification accuracy on the validation dataset, model checkpoints were saved […for a plurality of pre-defined number of iterations for classifying the one or more EEG signals by:].”, and Demir, pg. 5 Fig. 2, “Following (i), we aggregate the node representations from the final iteration of graph convolution via READOUT function to learn the representation vector of the entire graph. Then, the graph representation vector is classified by a multi-layer perceptron (MLP) [a graph classifier,] using softmax activation at the output layer.”).
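For context on the weighted-adjacency limitation mapped above: the edge weighting the examiner cites from Demir (Pearson correlation between the two nodes' time-sample feature vectors) reduces to a few lines of array code. This is an illustrative reconstruction for the reader, not code from any cited reference; the function name and the example shapes are assumptions.

```python
import numpy as np

def pearson_adjacency(eeg):
    """Weighted adjacency matrix for one EEG trial.

    eeg: (n_channels, n_samples) array, one time-sample feature vector
    per electrode/node. Edge weight = Pearson correlation between the
    two nodes' feature vectors; self-loops are zeroed.
    """
    corr = np.corrcoef(eeg)        # (n_channels, n_channels) correlations
    np.fill_diagonal(corr, 0.0)    # drop self-loops
    return corr

# Illustrative trial: 4 channels, 128 time samples (shapes are arbitrary)
rng = np.random.default_rng(0)
trial = rng.standard_normal((4, 128))
A = pearson_adjacency(trial)
```

Because Pearson correlation is symmetric and bounded in [-1, 1], the resulting matrix is a symmetric weighted adjacency suitable for an undirected graph.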
generating a plurality of node embeddings corresponding to each of the one or more EEG signals by the GNN using the one or more graphs; (Demir, pg. 5 Fig. 2, “Optionally, following(ii), we transform the output from the final iteration of graph convolution with ReLU non-linearity, and learn an embedding for each EEG channel [generating a plurality of node embeddings corresponding to each of the one or more EEG signals by the GNN using the one or more graphs;].”). classifying each of the one or more EEG signals based on the corresponding plurality of node embeddings using the graph classifier,…to obtain a graph label, a node label, and an edge label, (Demir, pg. 5 Fig. 2, “Following (i), we aggregate the node representations from the final iteration of graph convolution via READOUT function to learn the representation vector of the entire graph. Then, the graph representation vector is classified by a multi-layer perceptron (MLP) using softmax activation at the output layer; classifying the whole graph is interpreted as having graph, node, and edge labels (i.e. classifying each of the one or more EEG signals based on the corresponding plurality of node embeddings using the graph classifier,…to obtain a graph label, a node label, and an edge label,)). wherein the graph label provides motor imagery classification, the node label provides quality of the EEG signal, and the edge label determines affinity between a pair of nodes among the plurality of nodes; (Demir, pg. 7-8, “Each trial has 128 discretized time samples, and is associated with one of the 4 labels: emotion elicitation, resting-state, or motor imagery/execution task [wherein the graph label provides motor imagery classification,].”, and Demir, pg. 1 col. 2, “EEG-GNN properly maps the network of the brain as a graph, where each electrode used to collect EEG data according to intl. 
10-5 system represents a node in the graph and time samples acquired from an electrode corresponds to that node’s feature vector; the feature vector is based on the collected EEG data (i.e. the node label provides quality of the EEG signal,).”, and Demir, pg. 1 col. 2, “every pair of nodes is connected by an edge weighted by the functional neural connectivity factor, which is the Pearson correlation coefficient between the feature vectors of the two nodes [and the edge label determines affinity between a pair of nodes among the plurality of nodes;]”). and updating the GNN, the weighted adjacency matrix, the graph classifier,…based on a total loss…of a graph classification loss,…. (Demir, pg. 7 col. 2, “All models were trained with a minibatch size of 256 for 400 epochs [and updating the GNN, the weighted adjacency matrix, the graph classifier,] on NVIDIA Tesla K80 12GB GPU, and optimized by Adam with an initial learning rate of 0.001, which decays into half every 50 epochs. EEG trials from all subjects were shuffled, first 80% of EEG trials was used for training, and the last 20% was used for validation. If there was an improvement in classification accuracy on the validation dataset, model checkpoints were saved [based on a total loss…of a graph classification loss,….].”). While Demir teaches a system that uses a graph neural network for classifying EEG signals, Demir does not explicitly teach: iteratively training…a node classifier, and an edge classifier… classifying…based on…the node classifier, and the edge classifier… updating…the node classifier, and the edge classifier…based on…a node classification loss, and an edge classification loss total loss obtained as a sum of losses Jiang teaches: iteratively training…a node classifier, and an edge classifier… (Jiang, pg. 2656 col. 2, “we propose a novel convolution with edge node switching network (CensNet) for learning node and edge embeddings [a node classifier, and an edge classifier].”, and Jiang, pg. 
2656-2657, “With the help of node and edge features, CensNet employs two forward-pass feature propagation rules on G and L(G) to alternatively update the node and edge embeddings [iteratively training…a node classifier, and an edge classifier…].”). classifying…based on…the node classifier, and the edge classifier… (Jiang, pg. 2656 col. 2, “we propose a novel convolution with edge node switching network (CensNet) for learning node and edge embeddings [classifying…based on…the node classifier, and the edge classifier…].”). updating…the node classifier, and the edge classifier…based on…a node classification loss, and an edge classification loss (Jiang, pg. 2659 col. 1, “The designs of the output layer, as well as the loss function, are task dependent. For node or edge classification tasks, we may apply the sigmoid function to the final hidden node or edge layers [updating…the node classifier, and the edge classifier…based on…a node classification loss, and an edge classification loss]”). Demir and Jiang are both in the same field of endeavor (i.e. graph neural networks). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Demir and Jiang to teach the above limitation(s). The motivation for doing so is that classifying edges and nodes benefits overall graph classification by providing more contextual information (cf. Jiang, pg. 2656 col. 2, “We justify the motivation of jointly learning node and edge embeddings from the following two aspects. First, it is clear that edge and nodes always provide complementary feature information, which will be helpful for graph embedding. Second, learning edge embeddings is essential for edge-relevant tasks, such as edge classification and regression.”). 
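The READOUT-then-MLP graph classification quoted from Demir above can be illustrated with a minimal mean-pool readout followed by a one-layer softmax classifier standing in for the MLP. This is a hedged sketch, not code from the reference: `readout_and_classify` is a hypothetical name, and the weights `W`, `b` and all shapes are made-up illustration values.

```python
import numpy as np

def readout_and_classify(node_embeddings, W, b):
    """Mean-pool READOUT over node embeddings, then a one-layer softmax
    classifier (a minimal stand-in for the MLP in the Demir quote)."""
    g = node_embeddings.mean(axis=0)       # graph representation vector
    logits = g @ W + b
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(1)
emb = rng.standard_normal((8, 16))         # 8 nodes, 16-dim embeddings
W, b = rng.standard_normal((16, 4)), np.zeros(4)
probs = readout_and_classify(emb, W, b)    # probabilities over 4 classes
```

The mean over nodes is one common READOUT choice; sum- or max-pooling would slot in the same way without changing the classifier.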
While Demir in view of Jiang teaches a system that uses a graph neural network, node classifier, and edge classifier to classify EEG signals, the combination does not explicitly teach: total loss obtained as a sum of losses.

Gong teaches total loss obtained as a sum of losses (Gong, pg. 141628 col. 2, “In this paper, we limit our experiments to the dynamic task weighting approaches in MTL. The most straightforward way is uniform weighting: the task-specific losses are simply added together to produce a single scalar loss value [total loss obtained as a sum of losses].”).

Demir, in view of Jiang, and Gong are all in the same field of endeavor (i.e. multi-task learning). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Demir, in view of Jiang, with Gong to teach the above limitation(s). The motivation for doing so is that uniform weighting provides a straightforward way to combine different tasks’ loss functions into a single value (cf. Gong, pg. 141628 col. 2, “In MTL, one tries to optimize multiple loss functions, necessitating a way to combine these loss functions into a single value”).

Regarding claim 2, Demir in view of Jiang and Gong teaches the method of claim 1. The combination also teaches the GNN, the graph classifier, the node classifier, and the edge classifier as seen in claim 1. Demir further teaches are used to classify a test EEG signal during inference based on a graph constructed using the weighted adjacency matrix and a plurality of temporal embeddings corresponding to each of a plurality of channels of the test EEG signal. (Demir, pg. 8 col. 2, “In this paper, we presented several GNN models along with various regularization strategies to model the functional neural connectivity between EEG electrode sites, and demonstrated GNN models outperform CNN models of different size and inference strategies in classification tasks across ErrP and RSVP datasets [are used to classify a test EEG signal during inference].”, and Demir, pg. 1 col. 2, “Adjacency matrix of this graph can be constructed flexibly, e.g., i) every pair of nodes is connected by an unweighted edge, ii) every pair of nodes is connected by an edge weighted by the functional neural connectivity factor, which is the Pearson correlation coefficient between the feature vectors of the two nodes [based on a graph constructed using the weighted adjacency matrix and a plurality of temporal embeddings corresponding to each of a plurality of channels of the test EEG signal.]”).

Regarding claim 3, Demir in view of Jiang and Gong teaches the method of claim 1. Demir further teaches wherein the plurality of node embeddings are vectorized before feeding into the graph classifier. (Demir, pg. 5 Fig. 2, “Following (i), we aggregate the node representations from the final iteration of graph convolution via READOUT function to learn the representation vector of the entire graph. Then, the graph representation vector is classified by a multi-layer perceptron (MLP) using softmax activation at the output layer [wherein the plurality of node embeddings are vectorized before feeding into the graph classifier.].”).

Regarding claim 4, Demir in view of Jiang and Gong teaches the method of claim 1. Jiang further teaches wherein the plurality of node embeddings are concatenated before feeding into the edge classifier. (Jiang, pg. 2658 col. 1, “In the edge layer, we combine the updated node embedding with the line graph to update the edge embedding [wherein the plurality of node embeddings are concatenated before feeding into the edge classifier.].”).
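The uniform weighting Gong describes, where task-specific losses "are simply added together to produce a single scalar loss value", reduces to a one-line sum. The sketch below is illustrative only; the three loss values are made up, and the key names merely mirror the graph, node, and edge classification losses recited in claim 1.

```python
# Uniform loss weighting: task-specific losses are simply added into
# one scalar training objective (per the Gong quotation above).
def total_loss(task_losses):
    return sum(task_losses.values())

# Hypothetical per-task losses for one training step
task_losses = {"graph": 0.52, "node": 0.31, "edge": 0.17}
loss = total_loss(task_losses)
```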
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Jiang with the teachings of Demir and Gong for the same reasons disclosed in claim 1.

Regarding claim 6, the claim is similar to claim 1 and rejected under the same rationales. Demir teaches the additional limitations A system comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: (Demir, pg. 7 col. 2, “on NVIDIA Tesla K80 12GB GPU [A system comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to:]”).

Regarding claims 7-9, the claims are similar to claims 2-4 and rejected under the same rationales.

Regarding claim 11, the claim is similar to claim 1 and rejected under the same rationales. Demir teaches the additional limitations One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause: (Demir, pg. 7 col. 2, “on NVIDIA Tesla K80 12GB GPU [One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause:]”).

Regarding claims 12-14, the claims are similar to claims 2-4 and rejected under the same rationales.

Allowable Subject Matter

Claims 5, 10, and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for indication of allowable subject matter:

Regarding claim 5, below are the closest cited references, each of which discloses various aspects of the claimed invention:

Li, et al., “MutualGraphNet: A novel model for motor imagery classification” discloses a system that performs classification on motor imagery EEG signals by using an adjacency matrix merged with a spatial temporal graph convolutional network. While Li teaches a GNN, adjacency matrix, and graph classifier, Li does not explicitly teach the node classifier or edge classifier, and thus also does not teach the combined total loss function comprising five specific gradients to update the corresponding five elements: the GNN, the adjacency matrix, the graph classifier, the node classifier, and the edge classifier.

Cai, et al., “Motor Imagery Decoding in the Presence of Distraction Using Graph Sequence Neural Networks” discloses a system that performs EEG signal decoding using a graph sequence neural network with an adjacency matrix and a node domain attention selection process. Cai also teaches a combined loss function that sums a Kullback-Leibler loss and a domain loss. While Cai teaches a system for classifying EEG signals using a GNN, adjacency matrix, and combined loss functions, Cai does not explicitly teach the combined total loss function comprising five specific gradients to update the corresponding five elements: the GNN, the adjacency matrix, the graph classifier, the node classifier, and the edge classifier.

While the above references disclose the aforementioned concepts, none of them, individually or in reasonable combination, discloses all the limitations in the manner recited in claim 5. Specifically, the claim requires calculating five separate gradient values, within a total loss sum, which correspond to a GNN, adjacency matrix, graph classifier, node classifier, and edge classifier.
Additionally, claim 5 also requires that the GNN, adjacency matrix, graph classifier, node classifier, and edge classifier are updated with their respective gradients. While the references cited above mention aspects of updating GNNs, adjacency matrices, graph classifiers, and combined losses, they do not recite calculating five separate gradient values, within a total loss sum, which correspond to a GNN, adjacency matrix, graph classifier, node classifier, and edge classifier. Therefore, claim 5 is allowable over the prior art.

Regarding claim 10, the claim is similar to claim 5 and allowable over the prior art for the same rationales.

Regarding claim 15, the claim is similar to claim 5 and allowable over the prior art for the same rationales.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571)270-0939. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.S.W./
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148
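The distinguishing feature the examiner identifies for claim 5 is a single total-loss sum from which five separate gradients update five elements (the GNN, the adjacency matrix, and the graph, node, and edge classifiers). A toy sketch with scalar stand-in parameters and quadratic surrogate losses shows the mechanics; none of this code is from the application or the cited references, and the parameter values and learning rate are arbitrary.

```python
# Scalar stand-ins for the five trainable elements named in claim 5.
params = {"gnn": 1.0, "adjacency": -0.5, "graph_clf": 2.0,
          "node_clf": 0.3, "edge_clf": -1.2}
LR = 0.1  # arbitrary learning rate

def step(params):
    # Surrogate per-element losses: p**2 each, summed into one total loss.
    total = sum(p ** 2 for p in params.values())
    # d(total)/dp_k = 2*p_k: five separate gradients of the single
    # summed loss, one per element.
    grads = {k: 2.0 * p for k, p in params.items()}
    # Each element is updated with its own gradient of the total loss.
    return total, {k: params[k] - LR * grads[k] for k in params}

for _ in range(100):
    total, params = step(params)
```

In a real implementation an autograd framework would produce the five gradients from the one summed loss automatically; the point here is only that a single scalar objective can drive separate updates to each named component.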

Prosecution Timeline

Jun 14, 2023: Application Filed
Feb 05, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488244: APPARATUS AND METHOD FOR DATA GENERATION FOR USER ENGAGEMENT
Granted Dec 02, 2025 (2y 5m to grant)

Patent 12423576: METHOD AND APPARATUS FOR UPDATING PARAMETER OF MULTI-TASK MODEL, AND STORAGE MEDIUM
Granted Sep 23, 2025 (2y 5m to grant)

Patent 12361280: METHOD AND DEVICE FOR TRAINING A MACHINE LEARNING ROUTINE FOR CONTROLLING A TECHNICAL SYSTEM
Granted Jul 15, 2025 (2y 5m to grant)

Patent 12354017: ALIGNING KNOWLEDGE GRAPHS USING SUBGRAPH TYPING
Granted Jul 08, 2025 (2y 5m to grant)

Patent 12333425: HYBRID GRAPH NEURAL NETWORK
Granted Jun 17, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 47%
With Interview: 90% (+43.1%)
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
