Prosecution Insights
Last updated: April 19, 2026
Application No. 17/992,565

CLASSIFICATION PROCESSING OF AN ELECTROPHYSIOLOGICAL SIGNAL BASED ON SPATIAL LOCATIONS OF CHANNELS OF THE SIGNAL

Non-Final OA: §101, §102, §103, §112
Filed: Nov 22, 2022
Examiner: SHALABY, AHMAD HUSSAM
Art Unit: 2187
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs Tech Center average)
Interview Lift: +0.0% (minimal; with vs. without interview, across resolved cases with interview)
Avg Prosecution (typical timeline): 3y 3m
Career History: 17 total applications across all art units; 17 currently pending

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§101 §102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Responsive to communications on 08/23/2024. Claims 1-20 are pending; claims 1-20 are rejected.

Priority
The application data sheet requesting priority to foreign application CN202110246494.1 with filing date 03/05/2021 has been considered and accepted by the examiner.

Information Disclosure Statement
The IDS received on 03/21/2023 has been considered and accepted by the examiner. The IDS received on 08/23/2024 has been considered and accepted by the examiner.

Drawings
The drawings received on 11/22/2022 have been reviewed and accepted by the examiner.

Specification
The abstract is less than 150 words and contains no legal or implied phraseology. The abstract is accepted by the examiner.

Claim Interpretation
The examiner outlines below how certain terms in the claims are interpreted in light of the specification under broadest reasonable interpretation.

Channel Associations (claims 4-8)

Channel association feature: Par. 5: "The channel association feature indicates spatial locations of multiple acquisition channels of the acquisition device, each of the multiple acquisition channels collecting the electrophysiological signal at a respective spatial location." Par. 40: "The channel association feature is feature information for describing associations among acquisition channels." The specification does not explain what "feature information" is; however, under broadest reasonable interpretation in machine learning, a "feature" is any measurable variable. In light of the specification, par. 47: "features may be represented by a matrix." This is an indication of where the acquisition channels are located, which can help determine how the channels relate (associate) with each other. These channels are, for example, different electrodes attached to a brain, and the feature is device dependent.
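The matrix forms discussed below under this heading (initial, cell, diagonal, intermediate, and target channel association matrices, pars. 105-120) resemble standard graph-convolution preprocessing. A minimal pure-Python sketch, with an assumed 3-channel layout; the symmetric normalization shown is one common choice, and the specification's exact multiplication may differ:

```python
from math import sqrt

# Hypothetical 3-channel layout (par. 107 style): channels A-B and B-C
# are adjacent, A-C is not. The diagonal is zero (no self-adjacency).
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]                # initial channel association matrix

n = len(A)

# "Cell" matrix (par. 116): identity, restoring each channel's own signal.
cell = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# Intermediate channel association matrix: element-wise matrix addition.
A_mid = [[A[i][j] + cell[i][j] for j in range(n)] for i in range(n)]

# Diagonal matrix (par. 120): diagonal entries are the row sums.
deg = [sum(row) for row in A_mid]

# Target channel association matrix: degree normalization so heavily
# connected channels do not dominate.
A_target = [[A_mid[i][j] / (sqrt(deg[i]) * sqrt(deg[j])) for j in range(n)]
            for i in range(n)]
```

The resulting target matrix is symmetric, and each adjacency weight is scaled down by the connectivity of the two channels it links, matching the examiner's reading that normalization removes bias from the number of connections.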
Channel region / region shape feature: Par. 70: "a channel region is defined based on the plane locations of two or more of the multiple acquisition channels." The channel region is the region encompassed by the multiple acquisition channels that are associated with each other. For example, par. 73: "The computer device may specifically use each of the target acquisition channels as a region vertex to obtain multiple region vertexes, and connect the region vertexes in turn according to a clockwise direction or counterclockwise direction. The target channel region is a plane region surrounded by the region vertexes and vertex connection lines." Also, par. 75: "The region shape feature is feature information for describing a region shape of the target channel region."

[Figure 3B, annotated by the examiner in red: the region between channels A, B, and C is interpreted as the channel region, with the region shape feature being an acute triangle.]

Initial channel association matrix: Par. 105: "Specifically, the initial channel association matrix may be an unweighted network, which only includes associations between the acquisition channels." This matrix is a channel association feature in matrix form before normalization. Par. 107 gives an example of an adjacency matrix with 1 representing adjacency and 0 representing a lack of adjacency. Par. 111 gives an example where the matrix is weighted based on the distance between the channels.

Cell matrix: Par. 116: "The cell matrix refers to a square matrix in which diagonal elements are non-zero and other elements are zero."

Diagonal matrix corresponding to the initial channel association matrix: Par. 116: "The diagonal matrix refers to a matrix in which all elements except a main diagonal are zero.
The diagonal matrix corresponding to the initial channel association matrix refers to a matrix in which elements on a main diagonal are determined according to matrix values of the initial channel association matrix, while elements except the main diagonal are all zero." The values on the diagonal can be determined in different ways, but as shown by the equation in par. 120, each is determined by summing the elements along the corresponding row.

Intermediate channel association matrix: Par. 116: "Specifically, in order to subsequently embed the target channel association feature into the target time feature without losing respective electrophysiological signals of the acquisition channels, the computer device can acquire the cell matrix, fuse the cell matrix and the initial channel association matrix to obtain an intermediate channel association matrix. For example, the cell matrix is a square matrix in which elements on the diagonal are 1 and other elements are 0. The matrix dimensionality of the cell matrix is the channel quantity of the acquisition channels on the target acquisition device. The computer device can perform matrix addition processing on the cell matrix and the initial channel association matrix to obtain an intermediate channel association matrix. Because the elements on the diagonal of the initial channel association matrix are 0, if the initial channel association matrix is not fused with the cell matrix, the target embedded feature obtained by subsequent calculation does not include related information of the respective electrophysiological signals of the acquisition channels, which is not conducive to the classification of electrophysiological signals." The elements on the diagonal being 0 is understood by the examiner as the elements corresponding to the adjacency of each acquisition channel with itself. See the figure from par. 107 below, annotated by the examiner in red.
This is an example of the initial adjacency matrix showing the zero diagonal:

[Figure: the adjacency matrix example from par. 107, annotated by the examiner in red along the diagonal.]

As understood, the cell matrix in this context is a matrix with 1's across that diagonal and zeros everywhere else, and the intermediate channel association matrix is the sum of those two matrices, as shown in the math in par. 120.

Target channel association matrix: The result of normalization. After multiplication between the diagonal matrix and the intermediate channel association matrix, the result is a matrix in which each adjacency is represented without bias based on the number of connections.

[Figure: the normalization equations from the specification.]

Time Features (claims 9-11)

Time feature: Under broadest reasonable interpretation, a time feature is a feature (variable) measured across a certain amount of time. This interpretation matches the understanding of sub-features.

Temporal convolution kernel: Under broadest reasonable interpretation in light of the specification, a temporal convolution kernel is a kernel that acts across a signal through time. This kernel takes in the values across a window.

Sub-features: A sub-feature corresponding to the kernel is understood by the examiner, under broadest reasonable interpretation in light of the specification, as a time feature with length equal to the window of the temporal convolution kernel; it is the output of the kernel.

Intermediate time feature: Par. 123: "generating an intermediate time feature based on the multiple time sub-features corresponding to the same temporal convolution kernel, to obtain at least one intermediate time feature corresponding to the at least one temporal convolution kernel respectively;" It appears that the intermediate time feature is the output of the kernel across the full time feature.
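The reading above (a kernel sliding across windows of a time feature, each window producing a sub-feature, the sub-features together forming the intermediate time feature) can be sketched minimally. The signal and kernel values are made-up illustrations, not taken from the application:

```python
def temporal_convolve(signal, kernel):
    """Slide a temporal convolution kernel across a time feature.

    Each window of the signal yields one time sub-feature (the kernel's
    output for that window); the sub-features taken together form the
    intermediate time feature for this kernel.
    """
    w = len(kernel)
    return [sum(signal[i + k] * kernel[k] for k in range(w))
            for i in range(len(signal) - w + 1)]

signal = [0.0, 1.0, 2.0, 1.0, 0.0]   # time feature for one channel
kernel = [0.5, 0.5]                  # temporal convolution kernel (window of 2)
intermediate = temporal_convolve(signal, kernel)
```

With multiple kernels, the same sweep would be repeated per kernel, giving one intermediate time feature per temporal convolution kernel as par. 123 describes.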
Multiple sub-features are generated when moving the kernel across the windows of the time feature; when the sub-features are combined, the result is referred to as the intermediate time feature.

Spatial Features (claim 12)

Embedded feature: The embedded feature as defined in the claims includes both the time feature and the channel association feature. This is likely the result of a matrix calculation.

Spatial convolution kernel: This kernel is used to get from the embedded feature to the spatial features.

Intermediate spatial feature: Par. 139: "Specifically, different spatial convolution kernels are used for extracting spatial features with different viewing angles. In the process of extracting the target spatial feature corresponding to the target embedded feature, the computer device can perform spatial feature extraction on the target embedded feature based on the spatial convolution kernel to obtain an intermediate spatial feature. Because there are multiple spatial convolution kernels, respective intermediate spatial features corresponding to the spatial convolution kernels can be obtained finally." An intermediate spatial feature is the result of applying one spatial kernel to one embedded feature.

Spatial feature: The result of adding up the different intermediate spatial features corresponding to different kernels.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 5, 6, 7, 8, 18, 19, and 20 are rejected under 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 4 and 18 recite the limitation "generating the channel association feature by." There is insufficient antecedent basis for this limitation in the claim, because claim 1 only introduces acquiring a channel association feature, not generating one. Perhaps this was meant to be written as "where the acquired channel association feature is generated by."

Claims 4 and 18 state "defining a channel region based on the plane locations of two or more of the multiple acquisition channels;" ... "associating the two or more of the multiple acquisition channels in response to a determination that a region shape feature of the channel region is a preset shape feature and that there is no other acquisition channel in the channel region." Under broadest reasonable interpretation, this claim limitation encompasses a channel region defined by two channels. A channel region defined by the plane locations of two acquisition channels would be a straight line. A straight line does not have a "region shape." A straight line also does not contain a "region" as understood by the specification, as a region implies an area. Therefore, this claim is indefinite, as it is not understood what "region shape" and "channel region" mean in the context of a straight line, which this claim encompasses under broadest reasonable interpretation.
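For the three-or-more-channel case (the interpretation applied to claims 5 and 19 below), the claimed check can be sketched as follows. The channel coordinates, the reference angle range, and the function names are hypothetical illustrations, and the "no other acquisition channel in the channel region" test is omitted for brevity:

```python
from math import acos, degrees, dist

def vertex_angles(p, q, r):
    """Interior angles (in degrees), one per vertex of triangle p-q-r,
    computed with the law of cosines."""
    def angle_at(a, b, c):
        ab, ac, bc = dist(a, b), dist(a, c), dist(b, c)
        return degrees(acos((ab ** 2 + ac ** 2 - bc ** 2) / (2 * ab * ac)))
    return [angle_at(p, q, r), angle_at(q, p, r), angle_at(r, p, q)]

def is_preset_shape(channels, angle_range=(30.0, 90.0)):
    """True when every vertex angle falls inside the reference range
    (e.g. an acute triangle, as in the examiner's Fig. 3B reading)."""
    lo, hi = angle_range
    return all(lo <= a <= hi for a in vertex_angles(*channels))

# Hypothetical plane locations of three acquisition channels A, B, C.
channels = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]
```

With only two channels the "triangle" degenerates to a line segment and no vertex angle is defined, which is the indefiniteness problem identified above.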
For the purpose of examination, the examiner understands the limitation, in the context of a straight line and with respect to the prior art, as "associating the two or more of the multiple acquisition channels (two channels) in response to a determination that a region shape feature (a line) of the channel region is a preset shape feature (the line is straight and does not curve around other electrodes) and that there is no other acquisition channel in the channel region" (the line does not cross other channel electrodes).

Claims 5 and 19 state: "wherein the region shape feature comprises vertex angles corresponding to region vertexes of the channel region, and the associating the two or more acquisition channels comprises: determining that the region shape feature corresponding to the channel region is the preset shape feature when the vertex angles are all within a reference angle range;" These claims depend on claims 4 and 18. Claims 4 and 18 allow a region shape defined by two acquisition channels; this region shape would be a straight line. In this context, the term "vertex angles" is indefinite, as it is not understood what vertex angles mean in the context of a straight line, which this claim encompasses under broadest reasonable interpretation. This suggests that claim 5 requires the channel region to be defined by three or more acquisition channels. Therefore, for the purpose of mapping to prior art, the examiner interprets this claim as requiring the region shape feature to be based on three or more acquisition channels, as opposed to what claim 4 requires.

Claims 6-8 and 20 are rejected due to their dependency on claims 4 and 18.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea) that has not been integrated into a practical application, and the claims do not recite significantly more than the judicial exception.

Claim 1

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites "A method ... the method comprising:", which is a process.

Step 2A, Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES.

Claim 1 recites "classification processing of an electrophysiological signal." Classification of an electrophysiological signal is performed by doctors to help treat patients and to determine their physiological states. This process, as done by doctors, involves observing an EEG signal and then determining what classification it belongs to. For example, in "How to Interpret an EEG and its Report" by Marie Atkinson, MD (Atkinson_2010), Atkinson states that the process involves "First know what you are looking at on the screen" (i.e., an observation), followed by a determination based on that observation; for example, on page 55, the presence of "Triphasic Waves" can indicate "metabolic or toxic encephalopathy." MPEP 2106.04(a)(2)(III) states: "Accordingly, the 'mental processes' abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." Because this process as outlined occurs in the mind of a human doctor when performing classification, this claim recites a mental process.

Claim 1 recites "extracting a time feature corresponding to the electrophysiological signal." As stated previously, this is a step in classification processing. Specifically, this step likely entails an identification of a feature of the EEG signal, for example "Triphasic waves" present in the signal.
MPEP 2106.04(a)(2)(III) states: "Accordingly, the 'mental processes' abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." Because this process as outlined occurs in the mind of a human doctor when performing classification, this claim recites a mental process.

Claim 1 recites "generating an embedded feature based on the channel association feature and the time feature." Under broadest reasonable interpretation in light of the specification, generating an embedded feature based on the channel association feature and the time feature involves a matrix calculation. MPEP 2106.04(a)(2)(I)(C) states: "A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the 'mathematical concepts' grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word 'calculating' in order to be considered a mathematical calculation. For example, a step of 'determining' a variable or number using mathematical methods or 'performing' a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation." Because this claim under broadest reasonable interpretation covers performing a matrix calculation, the claim falls within the mathematical concepts grouping.
Claim 1 recites "extracting a spatial feature corresponding to the embedded feature." Under broadest reasonable interpretation in light of the specification, extracting a spatial feature corresponding to the embedded feature involves a matrix calculation. As quoted above, MPEP 2106.04(a)(2)(I)(C) provides that a claim that, under its broadest reasonable interpretation in light of the specification, encompasses a mathematical calculation falls within the "mathematical concepts" grouping. Because this claim under broadest reasonable interpretation covers performing a matrix calculation, the claim falls within the mathematical concepts grouping.

Claim 1 recites "obtaining a classification result corresponding to the electrophysiological signal based on the spatial feature." As stated previously, obtaining a classification result is a mental process performed by doctors. Obtaining the classification result based on the spatial feature involves determining the classification based on the value of the spatial feature. This can reasonably be done by a healthcare professional, especially when it is understood that a spatial feature maps the time feature across the spatial dimension of a brain.
Doctors already must interpret the spatial association across nodes when evaluating EEG time features. See Atkinson_2010, page 4, evaluation of a brain montage, where the association of electrodes is used in classification. Therefore, this claim limitation, which is directed to the action of obtaining a classification result, still performs the same mental process as outlined above.

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO.

Claim 1 additionally recites "acquiring an electrophysiological signal collected by an acquisition device." This limitation states that the signal is acquired after being collected by a device, and the device is defined generically by its purpose ("acquisition device"). MPEP 2106.05(f)(2) states: "Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more."

Claim 1 additionally recites "acquiring a channel association feature corresponding to the acquisition device, the channel association feature indicating spatial locations of multiple acquisition channels of the acquisition device." Acquiring a channel association feature corresponding to the acquisition device, where the channel association feature indicates spatial locations of the acquisition channels of the device, is a form of insignificant extra-solution activity: mere data gathering, where the data relates to the spatial location of channels. First, MPEP 2106.05(g)(2) considers "Whether the limitation is significant (i.e.
it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention)." As stated previously, doctors are already known to consider the association between channels when performing classification, so the channel association feature corresponding to spatial locations of acquisition channels is closely related to the invention of EEG classification. Second, MPEP 2106.05(g)(3) considers "Whether the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output)." It is found that all uses of the judicial exception require such data gathering, because acquiring a signal from channels that correspond to regions of a human body's anatomy requires that the locations of the channels be known to the physician.

Claim 1 additionally recites "each of the multiple acquisition channels collecting the electrophysiological signal at a respective spatial location." The use of an acquisition channel to collect an electrophysiological signal at a respective spatial location is the use of the acquisition channel (a generic computer component) for its ordinary task (receiving data). MPEP 2106.05(f)(2) states: "Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more."

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO.
As stated in Step 2A, Prong 2, the limitation of acquiring a channel association feature corresponding to the acquisition device, the channel association feature indicating spatial locations of multiple acquisition channels of the acquisition device, was found to be insignificant activity (mere data gathering). In Step 2B, the examiner considers whether the extra-solution limitation is well known. An example of acquiring channel association features occurs in EEG classification: Atkinson_2010, page 3, states that to read an EEG you must "First know what you are looking at on the screen. • 1. Montages • Electrodes • Channels."

[Figure: montage diagram, Atkinson_2010, page 4.]

This suggests that the use of a montage to decipher EEG readings is well known in the art, and that physicians are well aware that they must consider node placement when reading EEG charts. The other limitations in the claim also do not amount to significantly more than the judicial exception.

Based on the above facts, the Office concludes that claim 1 is not eligible under 35 USC 101.

Claim 2

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites "The method according to claim 1," which is a process.

Step 2A, Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES.

Claim 2 recites "and is configured to perform the extracting the time feature corresponding to the electrophysiological signal." As stated previously, this is a step in classification processing. Specifically, this step likely entails an identification of a feature of the EEG signal, for example "Triphasic waves" present in the signal.
As discussed for claim 1, MPEP 2106.04(a)(2)(III) defines the "mental processes" abstract idea grouping as concepts performed in the human mind, including observations, evaluations, judgments, and opinions. Because this process as outlined occurs in the mind of a human doctor when performing classification, this claim recites a mental process.

Claim 2 recites "and the generating the embedded feature based on the channel association feature and the time feature." Under broadest reasonable interpretation in light of the specification, generating an embedded feature based on the channel association feature and the time feature involves a matrix calculation. As quoted in the analysis of claim 1, MPEP 2106.04(a)(2)(I)(C) provides that a claim that, under its broadest reasonable interpretation in light of the specification, encompasses a mathematical calculation falls within the "mathematical concepts" grouping. Because this claim under broadest reasonable interpretation covers performing a matrix calculation, the claim falls within the mathematical concepts grouping.
Claim 2 recites "and the extracting the spatial feature corresponding to the embedded feature." Under broadest reasonable interpretation in light of the specification, extracting a spatial feature corresponding to the embedded feature involves a matrix calculation and, for the same reasons given for claim 1 under MPEP 2106.04(a)(2)(I)(C), falls within the mathematical concepts grouping.

Claim 2 recites "and the obtaining the classification result corresponding to the electrophysiological signal based on the spatial feature." As stated previously, obtaining a classification result is a mental process performed by doctors. Obtaining the classification result based on the spatial feature involves determining the classification based on the value of the spatial feature. This can reasonably be done by a healthcare professional, especially when it is understood that a spatial feature maps the time feature across the spatial dimension of a brain.
Doctors already must interpret the spatial association across nodes when evaluating EEG time features. See Atkinson_2010, page 4, evaluation of a brain montage, where the association of electrodes is used in classification. Therefore, this claim limitation, which is directed to the action of obtaining a classification result, still performs the same mental process as outlined above.

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO.

Claim 2 additionally recites "further comprising: acquiring an electrophysiological signal classification model corresponding to the acquisition device." The model in this context is a generic computing component defined by its function; under broadest reasonable interpretation, a classification model is a program that runs matrix calculations to determine a result. MPEP 2106.05(f)(2) states: "Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more." Therefore, the presence of a model to perform the tasks outlined in this claim does not integrate the exception into a practical application.

Claim 2 additionally recites "wherein the electrophysiological signal classification model is based on the channel association feature corresponding to the acquisition device." This limitation states that the model is based on the channel associations of an acquisition device; it essentially states that the model is used within the field of physiological signals, since it ties the model to an acquisition device.
MPEP 2106.05(h) states: "limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application."

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. As stated in Step 2A, Prong 2, the additional elements do not amount to significantly more than the judicial exception.

Based on the above facts, the Office concludes that claim 2 is not eligible under 35 USC 101.

Claim 3

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites "The method according to claim 2," which is a process.

Step 2A, Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES.

Claim 3 recites "wherein the acquiring the electrophysiological signal classification model comprises: determining a classification task." This limitation pertains to a determination, and determining a classification task is a mental process (for example, "I determine that I would like to classify an EEG signal"). MPEP 2106.04(a)(2)(III) states: "Accordingly, the 'mental processes' abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." Therefore, this claim is directed to an abstract idea.

Claim 3 recites "and selecting the electrophysiological signal classification model corresponding to the acquisition device from the candidate electrophysiological signal classification models." This limitation pertains to a selection, and selecting a model from candidate models is the mental process of making a choice (for example, "I would like to select the EEG classification model").
MPEP 2106.04(a)(2)(III) states “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 3 additionally recites acquiring multiple candidate electrophysiological signal classification models corresponding to the classification task, Acquiring multiple models is acquiring multiple pieces of data in order to pick between them. This is mere data gathering. When considering if a limitation is mere data gathering, the MPEP 2106.05(g)(2) considers “Whether the limitation is significant (i.e. it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention).” This limitation is not significant, because acquiring electrophysiological signal classification models is very closely related to the invention of using an electrophysiological signal classification model to classify an electrophysiological signal. The MPEP 2106.05(g)(3) also considers “Whether the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output).” The judicial exception of selecting between multiple models necessarily requires that the models be acquired. Therefore, all recited uses of the judicial exception require such data gathering. Therefore, this limitation does not integrate the exception into a practical application. the candidate electrophysiological signal classification models having corresponding candidate acquisition devices; This limitation states that the models are based on the channel associations of an acquisition device. 
This limitation essentially states that the models are used within the field of physiological signals, since the models are based on an acquisition device. MPEP 2106.05(h) states “limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application” Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception. NO. acquiring multiple candidate electrophysiological signal classification models corresponding to the classification task, In Step 2B, when determining if a limitation is insignificant, the MPEP 2106.05(g)(1) considers whether the extra-solution limitation is well known. In “Motor Imagery Classification via Temporal Attention Cues of Graph Embedded EEG Signals,” Zhang_2020 states, page 2576 col 1 par 1: “Lastly, the proposed G-CRAM is compared with two traditional EEG analysis methods, PSD-SVM [47] and FBCSP [22]. The PSD provides time-frequency features that are commonly used in traditional EEG motor imagery analysis. FBCSP is a widely used traditional method and has won several BCI competitions.” This implies that multiple models are well known and are traditionally used for electrophysiological signal classification tasks. Based on the above facts, the office concludes that claim 3 is not eligible under 35 USC 101. Claim 4 Step 1: Is the claimed invention one of the four statutory categories? : YES. The claim recites The method according to claim 1, which is a process. Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. 
Claim 4 recites: further comprising: generating the channel association feature by mapping spatial locations of the multiple acquisition channels to a same plane to obtain plane locations of the multiple acquisition channels; Mapping spatial locations of the multiple acquisition channels to a same plane to obtain plane locations of the multiple acquisition channels is the process of taking channels which are organized in 3D space and orienting them, based on distance, onto a 2D plane. This can involve calculating the distance between two points on a 3D object and mapping that distance onto a 2D plane. This is the equivalent of drawing a globe as a 2D map. This can reasonably be performed with pen and paper. The MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” defining a channel region based on the plane locations of two or more of the multiple acquisition channels; Defining a channel region based on the plane locations of two or more acquisition channels is drawing straight lines connecting the channels together to form a shape. This is a mental process of connecting dots. MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” associating the two or more of the multiple acquisition channels in response to a determination that a region shape feature of the channel region is a preset shape feature and that there is no other acquisition channel in the channel region; This claim limitation pertains to a determination. 
This determination observes the region shape created by the connected dots, and passes a judgment that it fits a shape requirement and that there are no other dots in the region. MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” and generating the channel association feature based on the associated two or more acquisition channels. Generating the channel association feature first involves determining whether the two channels form a preset shape which contains no other channels in the region, which was already determined to be a mental process. The generation is creating an adjacency matrix and writing a 1 for each adjacency. This process can be done with pen and paper. The MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 4 does not recite additional elements that integrate the judicial exception into a practical application. Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception. NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 4 is not eligible under 35 USC 101. Claim 5 Step 1: Is the claimed invention one of the four statutory categories? : YES. The claim recites The method according to claim 4, which is a process. 
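For illustration only (all coordinates and the pairing rule below are hypothetical, not taken from the applicant's disclosure): the adjacency-matrix construction described in the claim 4 analysis above — associate qualifying channels on the 2D plane, then write a 1 for each associated pair — can be sketched in a few lines, with a simple distance threshold standing in for the claimed region-shape test.

```python
# Hypothetical sketch: channels whose plane locations are close enough are
# treated as "associated", and a 1 is written into an adjacency matrix for
# each associated pair (the claim's region-shape test is simplified here).
import math

def build_adjacency(plane_locations, max_distance):
    """plane_locations: list of (x, y) tuples, one per acquisition channel."""
    n = len(plane_locations)
    adjacency = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(plane_locations[i], plane_locations[j]) <= max_distance:
                adjacency[i][j] = adjacency[j][i] = 1  # associated pair
    return adjacency

# Three hypothetical channels on a plane; the third is far from the others.
locations = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
A = build_adjacency(locations, max_distance=1.5)
```

The resulting symmetric 0/1 matrix is the kind of "writing 1 for each adjacency" artifact the analysis above characterizes as producible with pen and paper.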
Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. Claim 5 recites: and the associating the two or more acquisition channels comprises: determining that the region shape feature corresponding to the channel region is the preset shape feature when the vertex angles are all within a reference angle range; This claim limitation pertains to a determination. This determination observes the region shape created by the connected dots, and passes a judgment that it fits a shape and angle requirement. MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” and associating the two or more acquisition channels based on connection relationships between the region vertexes when there is no other acquisition channel in the channel region. This claim limitation pertains to a determination. This determination observes the region shape created by the connected dots, and passes a judgment that there are no other dots inside the region. MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 5 recites , wherein the region shape feature comprises vertex angles corresponding to region vertexes of the channel region, It was determined under claims 4 and 5 that the region shape feature is a shape formed by the connection of multiple different channel acquisition nodes. 
As previously stated, this is an abstract idea since connecting dots to form a shape can easily be performed in the human mind. This limitation expands on the region shape, stating that the region shape comprises vertex angles corresponding to region vertexes of the channel region. As proposed, this limitation is insignificant extra-solution activity. The MPEP 2106.05(g)(2) considers “Whether the limitation is significant (i.e. it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention).” This invention relates to the drawing of shapes to determine whether or not channels are associated with each other. This process was found to be an abstract idea. That the region shape comprises vertex angles is very related to the invention, since shapes are known to have angles. Therefore, this limitation is not significant since it is related to the abstract idea of the invention. The MPEP 2106.05(g)(3) also considers “Whether the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output).” This invention relates to the drawing of shapes to determine whether or not channels are associated with each other. All shapes formed by nodes connected with straight lines contain vertex angles. Therefore, all uses of the recited judicial exception require that the shapes contain vertex angles which correspond to the channel region, and therefore, this limitation simply amounts to necessary outputting. Because this limitation pertains to insignificant activity, the claim does not recite additional elements that integrate the judicial exception into a practical application. Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception. NO. 
Claim 5 recites wherein the region shape feature comprises vertex angles corresponding to region vertexes of the channel region, When considering in Step 2B whether a limitation is insignificant extra-solution activity, the MPEP 2106.05(g)(1) considers whether the extra-solution limitation is well known. It is well known that shapes made of straight lines contain vertex angles. Therefore, this limitation is well known and does not amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 5 is not eligible under 35 USC 101. Claim 6: Step 1: Is the claimed invention one of the four statutory categories? : YES. The claim recites The method according to claim 4, which is a process. Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. Claim 6 recites , wherein the generating the channel association feature comprises: generating an initial channel association feature based on the associated two or more acquisition channels; Generating the channel association feature first involves determining whether the two channels form a preset shape which contains no other channels in the region, which was already determined to be a mental process. The generation is creating an adjacency matrix and writing a 1 for each adjacency. This process can be done with pen and paper. The MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” and performing normalization processing on the initial channel association feature to obtain the channel association feature. This normalization processing is a matrix calculation as determined from the specification under broadest reasonable interpretation. 
MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 6 does not recite additional elements that integrate the judicial exception into a practical application. Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception. NO. Claim 6 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 6 is not eligible under 35 USC 101. Claim 7: Step 1: Is the claimed invention one of the four statutory categories? : YES. The claim recites The method according to claim 6, which is a process. Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. 
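For illustration only (hypothetical values, not from the record): the "normalization processing" addressed under claim 6 — and elaborated under claim 7 as fusing a cell (identity) matrix and a diagonal matrix of row sums with the association matrix — resembles the symmetric adjacency normalization used in graph convolutional networks. Reading the "cell matrix" as an identity matrix follows the interpretation stated in this action; the exact formula below is one common reading, not the applicant's disclosed method.

```python
# Hypothetical sketch of the claimed normalization as interpreted here:
# add an identity ("cell") matrix to the initial association matrix,
# build a diagonal matrix of row sums, and normalize symmetrically.
def normalize_association(initial):
    n = len(initial)
    # Fuse with the identity matrix: each channel associates with itself.
    intermediate = [[initial[i][j] + (1 if i == j else 0) for j in range(n)]
                    for i in range(n)]
    # Diagonal ("degree") entries: row sums of the intermediate matrix.
    degree = [sum(row) for row in intermediate]
    # Symmetric normalization D^(-1/2) (A + I) D^(-1/2), one common reading.
    return [[intermediate[i][j] / ((degree[i] * degree[j]) ** 0.5)
             for j in range(n)]
            for i in range(n)]

A = [[0, 1], [1, 0]]  # two mutually associated channels
A_hat = normalize_association(A)
```

Each arithmetic step here (matrix addition, row sums, element-wise scaling) is the kind of operation the quoted MPEP 2106.04(a)(2)(I)(C) passage treats as a mathematical calculation.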
Claim 7 recites: wherein the performing the normalization processing comprises: generating an initial channel association matrix based on the initial channel association feature; Generating the initial channel association matrix based on the initial channel association feature first involves determining whether the two channels form a preset shape which contains no other channels in the region, which was already determined to be a mental process. The generation is creating an adjacency matrix and writing a 1 for each adjacency. This process can be done with pen and paper. The MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” acquiring a cell matrix, and fusing the cell matrix and the initial channel association matrix to obtain an intermediate channel association matrix; A cell matrix is understood to be an identity matrix. This is a mathematical construct, similar to saying “acquiring the number 1.” Fusing the cell matrix and the initial channel association matrix to obtain an intermediate channel association matrix is understood in light of the specification as adding the initial matrix to the cell matrix. Conceptually, this is just treating each channel as associating with itself. Furthermore, this is a matrix mathematical calculation. The MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. 
There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” acquiring a diagonal matrix corresponding to the initial channel association matrix, and fusing the diagonal matrix and the intermediate channel association matrix to obtain a target channel association matrix; Acquiring the diagonal matrix corresponding to the initial channel association matrix is understood in light of the specification as a summation equation where the terms of each row of the matrix are added up and the sums are placed along the diagonal of the matrix. This is a matrix calculation. Fusing the diagonal matrix and the intermediate channel association matrix to obtain a target channel association matrix is a matrix calculation involving matrix addition or multiplication. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. 
For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 7 states and obtaining the channel association feature based on the target channel association matrix. This final target channel association matrix provides the channel association feature. This is just obtaining the result of the previous calculation. The MPEP 2106.05(f)(2) states “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.” Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception. NO. As stated above, Claim 7 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 7 is not eligible under 35 USC 101. Claim 8: Step 1: Is the claimed invention one of the four statutory categories? : YES. The claim recites The method according to claim 7, which is a process. Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. Claim 8 recites wherein the generating the initial channel association matrix comprises any one of. 
determining matrix dimensionality of the initial channel association matrix based on a channel quantity of the acquisition channels on the acquisition device, As stated in previous claims, generating an initial channel association matrix is an abstract idea of building a matrix. The dimensions being based on the channel quantity of the acquisition device simply means that the matrix dimensions are based on the number of channels. This is standard procedure for building an adjacency matrix. Therefore, this limitation still pertains to the abstract idea of building an adjacency matrix, which can be performed with pen and paper. The MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” , setting matrix values corresponding to the associated acquisition channels to a first preset threshold, and setting matrix values corresponding to other acquisition channels to a second preset threshold, to obtain the initial channel association matrix; This limitation pertains to the values inside the adjacency matrix. For example, a first preset threshold can be 1, and a second preset threshold can be 0. This limitation is directed to the judgment of what the values inside the matrix should be set to. The MPEP 2106.04(a)(2)(III) states “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” or determining matrix dimensionality of the initial channel association matrix based on a channel quantity of the acquisition channels on the acquisition device As stated in previous claims, generating an initial channel association matrix is an abstract idea of building a matrix. 
The dimensions being based on the channel quantity of the acquisition device simply means that the matrix dimensions are based on the number of channels. This is standard procedure for building an adjacency matrix. Therefore, this limitation still pertains to the abstract idea of building an adjacency matrix, which can be performed with pen and paper. The MPEP 2106.04(a)(2)(III)(B) states “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” , determining matrix values corresponding to the associated acquisition channels based on spatial location distances between the associated acquisition channels, This limitation pertains to the values inside the adjacency matrix based on distance. For example, each value in the matrix will be set to the inverse of the distance. This limitation is directed to the judgment of what the values inside the matrix should be set to, and perhaps a mental calculation based on what formula will be used to determine the value based on distance. The MPEP 2106.04(a)(2)(III) states “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” and setting matrix values corresponding to other acquisition channels to the second preset threshold, to obtain the initial channel association matrix. This limitation pertains to the values inside the adjacency matrix. For example, a first preset threshold can be the distances, and a second preset threshold can be 0. This limitation is directed to the judgment of what the values inside the matrix should be set to. 
The MPEP 2106.04(a)(2)(III) states “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 8 does not recite additional elements that integrate the judicial exception into a practical application. Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception. NO. Claim 8 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 8 is not eligible under 35 USC 101. Claim 9 Step 1: Is the claimed invention one of the four statutory categories? : YES. The claim recites The method according to claim 1, which is a process. Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. Claim 9 recites: and the extracting the time feature comprises: acquiring at least one temporal convolution kernel; A temporal convolution kernel is a weighted time window. This is the equivalent of acquiring a mathematical function. Acquiring one temporal convolution kernel is actually a judgment to determine which kernel should be used, rather than a device being acquired. MPEP 2106.04(a)(2)(III) states “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” 
separately extracting time sub-features of each of the electrophysiological signals based on a same temporal convolution kernel to obtain multiple time sub-features corresponding to each of the at least one temporal convolution kernel; This claim limitation covers performing mathematical calculations on a signal. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” generating an intermediate time feature based on the multiple time sub-features corresponding to the same temporal convolution kernel to obtain at least one intermediate time feature corresponding to the at least one temporal convolution kernel respectively This limitation appears to sum or combine the different sub-features at different time windows into the same time feature. This is a mathematical calculation. 
MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” ; and obtaining the time feature based on the at least one intermediate time feature. The time feature is obtained from the intermediate time feature. It is understood that this “obtaining” is a mathematical result derived from the intermediate time feature matrix. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. 
For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 9 additionally recites , wherein the electrophysiological signal comprises electrophysiological signals corresponding to each of the multiple acquisition channels respectively, wherein the electrophysiological signal comprises electrophysiological signals corresponding to each of the multiple acquisition channels respectively is insignificant activity. When considering if a limitation is insignificant, the MPEP 2106.05(g)(2) considers “Whether the limitation is significant (i.e. it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention).” The signal corresponding to each acquisition channel is related to the invention of classifying said signal. The MPEP 2106.05(g)(3) also considers “Whether the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output).” This limitation amounts to necessary data gathering, as electrophysiological signals (like an EEG) comprise signals corresponding to each channel respectively. See Atkinson_2010 slide 5 for a standard EEG that shows signals corresponding to the difference in voltage of different channels. Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception. NO. 
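For illustration only (kernel weights and signal values are invented, not from the record): the temporal convolution characterized under claim 9 is a weighted time window slid along each channel's signal to extract time sub-features. A minimal sketch of that operation:

```python
# Hypothetical sketch: apply one temporal convolution kernel (a weighted
# time window) separately to each channel's signal, producing one sequence
# of time sub-features per channel.
def temporal_convolve(signal, kernel):
    """Valid-mode 1-D sliding window (cross-correlation, as in CNNs)."""
    k = len(kernel)
    return [sum(signal[t + i] * kernel[i] for i in range(k))
            for t in range(len(signal) - k + 1)]

channels = [[1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0, 1.0]]  # two channels
kernel = [0.5, 0.5]                                       # moving-average window
sub_features = [temporal_convolve(ch, kernel) for ch in channels]
```

Each output value is a weighted sum over a time window, i.e., exactly the kind of arithmetic the quoted MPEP 2106.04(a)(2)(I)(C) passage classifies as a mathematical calculation.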
As stated in Step 2A Prong 2, wherein the electrophysiological signal comprises electrophysiological signals corresponding to each of the multiple acquisition channels respectively, is insignificant activity. The MPEP 2106.05(g)(1) considers whether the extra-solution limitation is well known. As outlined above, EEG readings include signals corresponding to each of the multiple acquisition channels respectively as standard. Based on the above facts, the office concludes that claim 9 is not eligible under 35 USC 101. Claim 10 Step 1: Is the claimed invention one of the four statutory categories? : YES. The claim recites The method according to claim 1, which is a process. Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. Claim 10 recites: wherein the time feature comprises multiple intermediate time features, and the generating the embedded feature comprises: separately embedding the channel association feature into the intermediate time features to obtain initial embedded features corresponding to the intermediate time features; As outlined, this claim limitation pertains to matrix calculations. The statement “wherein the time feature comprises multiple intermediate time features” defines what an intermediate time feature is. The statement “separately embedding the channel association feature into the intermediate time features to obtain initial embedded features corresponding to the intermediate time features” is, in light of the specification, matrix calculations used to solve for an initial embedded feature. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. 
A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Claim 10 further recites "and obtaining the embedded feature based on the initial embedded features." This claim limitation is some further mathematical calculation used to derive an embedded feature based on the initial embedded features. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation.
For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 10 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 10 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 10 is not eligible under 35 USC 101.

Claim 11

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites "The method according to claim 10," which is a process.

Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES.
Claim 11 recites "wherein the channel association feature comprises association sub-features corresponding to the acquisition channels respectively, and the intermediate time features comprise time sub-features corresponding to the acquisition channels respectively; and the separately embedding the channel association feature into the intermediate time features to obtain the initial embedded features corresponding to the intermediate time features comprises: in a current intermediate time feature, embedding an association sub-feature corresponding to a same acquisition channel into a corresponding time sub-feature, to obtain embedded sub-features corresponding to the acquisition channels respectively;" The limitations "wherein the channel association feature comprises association sub-features corresponding to the acquisition channels respectively, and the intermediate time features comprise time sub-features corresponding to the acquisition channels respectively" introduce channel association sub-features and time sub-features. The process of "embedding an association sub-feature corresponding to a same acquisition channel into a corresponding time sub-feature, to obtain embedded sub-features corresponding to the acquisition channels respectively" encompasses a matrix calculation as understood from the specification. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation.
That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Claim 11 further recites "and obtaining an initial embedded feature corresponding to the current intermediate time feature based on the embedded sub-features." This claim limitation is some further mathematical calculation used to derive an initial embedded feature based on the embedded sub-features. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 11 does not recite additional elements that integrate the judicial exception into a practical application.
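As a purely illustrative aside, the per-channel embedding operation that the office characterizes above as a matrix calculation can be sketched in a few lines. The array shapes and the use of concatenation as the embedding operation are assumptions for illustration, not taken from the claims or the specification:

```python
import numpy as np

# Hypothetical shapes for illustration only: n acquisition channels,
# association sub-feature length a, time sub-feature length t.
n, a, t = 4, 3, 5
assoc_sub = np.arange(n * a, dtype=float).reshape(n, a)  # association sub-features
time_sub = np.arange(n * t, dtype=float).reshape(n, t)   # time sub-features

# Embed the association sub-feature of each channel into the time
# sub-feature of the same channel (concatenation is one possible
# embedding), giving one embedded sub-feature per channel.
embedded_subs = [np.concatenate([time_sub[i], assoc_sub[i]]) for i in range(n)]

# Obtain the initial embedded feature for this intermediate time
# feature by stacking the per-channel embedded sub-features.
initial_embedded = np.stack(embedded_subs)  # shape (n, t + a)
```

Any operation that merges the two per-channel arrays (concatenation, element-wise product, weighted sum) would fit the same broad characterization.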
Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 11 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 11 is not eligible under 35 USC 101.

Claim 12

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites "The method according to claim 1," which is a process.

Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. Claim 12 recites: "wherein the extracting the spatial feature comprises: acquiring at least one spatial convolution kernel;" A spatial convolution kernel is a weighted spatial window, so this is the equivalent of acquiring a mathematical function. Acquiring a spatial convolution kernel is actually a judgment determining which kernel should be used, rather than the acquisition of a device. MPEP 2106.04(a)(2)(III) states “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” The limitation "performing spatial feature extraction on the embedded feature based on the at least one spatial convolution kernel to obtain at least one intermediate spatial feature corresponding to the at least one spatial convolution kernel respectively;" as described is a matrix calculation in which the spatial kernel is applied across the embedded feature matrix. The MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping.
A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Claim 12 further recites "and obtaining the spatial feature based on the at least one intermediate spatial feature." Obtaining the spatial feature based on the intermediate spatial feature involves some additional mathematical step, for example, adding up multiple intermediate spatial features to form a spatial feature. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation.
For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 12 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 12 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 12 is not eligible under 35 USC 101.

Claim 13

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites "The method according to claim 1," which is a process.

Step 2A Prong 1, inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?": YES. Claim 13 recites "wherein the obtaining the classification result comprises: performing nonlinear processing on the spatial feature to obtain a target fitting feature;" The broadest reasonable interpretation of nonlinear processing of a spatial feature (a matrix) is a mathematical calculation performed on the spatial feature matrix to obtain a target fitting feature. MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation.
There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Claim 13 further recites "and performing classification processing on the target fitting feature to obtain the classification result." Performing classification processing on the target fitting feature to obtain a classification result implies some mathematical calculation used to determine a classification result. For example, a mathematical calculation performed on the target fitting feature when the feature is over a preset value may result in a value that indicates a heightened emotional state. This statement, when claimed in a broad manner, encompasses a mathematical calculation. The MPEP 2106.04(a)(2)(I)(C) states “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation.
For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Step 2A Prong 2, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 13 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 13 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 13 is not eligible under 35 USC 101.

Claim 15

Claim 15 is effectively the same as claim 1, except that the claim is directed to a machine rather than a process. Claim 15 also has the additional limitation of "processing circuitry configured to acquire an electrophysiological signal collected by an acquisition device." The presence of processing circuitry configured to acquire data is the use of a computing device for its regular purpose (i.e., receiving some data). The MPEP 2106.05(f)(2) states “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.” Therefore the claim is also directed to a judicial exception and does not integrate the judicial exception or amount to significantly more. Based on the above facts, the office concludes that claim 15 is not eligible under 35 USC 101.
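As a purely illustrative aside, the two steps recited in claim 13 and characterized above as mathematical calculations (nonlinear processing of the spatial feature, then classification processing of the target fitting feature) can be sketched numerically. The tanh nonlinearity, the weight values, and the two-class softmax are assumptions for illustration only, not drawn from the application:

```python
import numpy as np

# Hypothetical spatial feature (a small vector standing in for the
# spatial feature matrix of the claims).
spatial_feature = np.array([0.2, -1.0, 3.1, 0.5, -0.3, 1.2])

# "Nonlinear processing ... to obtain a target fitting feature",
# sketched here as a tanh activation.
target_fitting = np.tanh(spatial_feature)

# "Classification processing ... to obtain the classification result",
# sketched as a linear map followed by a softmax over two classes.
W = np.array([[0.1] * 6, [-0.1] * 6])  # assumed weights
logits = W @ target_fitting
probs = np.exp(logits) / np.exp(logits).sum()
classification_result = int(np.argmax(probs))
```

Under this broad reading, any activation function and any score-then-normalize classifier would fall within the same characterization.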
Claim 16

Claim 16 is an effective duplicate of claim 2, with the only difference being that it depends on claim 15. Due to the reasons discussed for claim 2 and claim 15, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 16 is not eligible under 35 USC 101.

Claim 17

Claim 17 is an effective duplicate of claim 3, with the only difference being that it depends on claim 15. Due to the reasons discussed for claim 3 and claim 15, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 17 is not eligible under 35 USC 101.

Claim 18

Claim 18 is an effective duplicate of claim 4, with the only difference being that it depends on claim 15. Due to the reasons discussed for claim 4 and claim 15, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 18 is not eligible under 35 USC 101.

Claim 19

Claim 19 is an effective duplicate of claim 5, with the only difference being that it depends on claim 18. Due to the reasons discussed for claim 5 and claim 18, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 19 is not eligible under 35 USC 101.

Claim 20

Claim 20 is an effective duplicate of claim 6, with the only difference being that it depends on claim 18.
Due to the reasons discussed for claim 6 and claim 18, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the office concludes that claim 20 is not eligible under 35 USC 101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6-8, 10-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “Motor Imagery Classification via Temporal Attention Cues of Graph Embedded EEG Signals” by Zhang et al. (Zhang_2020).

Claim 1: Zhang_2020 teaches A method for classification processing of an electrophysiological signal, the method comprising: (page 2570 col 1 par 2: “Motor imagery classification is the basic to a BCI, which supports motor rehabilitation of post-stroke patients [1]. The EEG signals, (Examiner note: an electrophysiological signal) which are captured from a human’s scalp and thus reflect the electrical activities of the human cortex, is one of the most active physiological cues to build a BCI system.” … page 2572 col 1 par 3: “The beep and cue are used to notice and indicate the subject to perform the motor imagery task. The duration of motor imagery is of research interest.
Formally, the duration of interest is T-second long. Each of the n EEG nodes has a sensor recording sequence r_i∈[1,n] = [s_1^i, s_2^i, ..., s_k^i] ∈ R^k through k = T × f time points, where f is the sampling frequency and s_t^i is the measurement of the i-th EEG sensor at the time point t. Thus the raw EEG features of the trial T is a two-dimensional (2D) tensor X_T = [r_1; r_2; ...; r_n] ∈ R^(n×k), with one dimension representing EEG node and the other representing time series. Our goal is to make motor imagery classification of the EEG trials X_T.”)

acquiring an electrophysiological signal collected by an acquisition device; (page 2570 col 1 par 2: “Researchers have widely explored the EEG-based BCI due to its zero clinical risks as well as portable and cost-effective acquisition devices.” … page 2572 col 1 par 3: “The beep and cue are used to notice and indicate the subject to perform the motor imagery task. The duration of motor imagery is of research interest. Formally, the duration of interest is T-second long. Each of the n EEG nodes has a sensor recording sequence”)

acquiring a channel association feature corresponding to the acquisition device, the channel association feature indicating spatial locations of multiple acquisition channels of the acquisition device, each of the multiple acquisition channels collecting the electrophysiological signal at a respective spatial location; page 2572 col 2 par 2: “In the node dimension of X_T, one EEG node at most has two neighbors. Such a representation is limited to reflect the real-world situation where an EEG node usually has multiple neighboring nodes acquiring EEG signals of a certain brain area. Thus representing the relations of different EEG nodes is essential to successful EEG analysis. In our work, we leverage the EEG node positioning to form graph representations of EEG nodes, which include spatial information of the natural EEG node. In particular, we construct an undirected spatial graph G = (V,E) on the EEG node positioning.
The node set V = {s_i | i ∈ [1,n]} includes all the EEG nodes in an experiment. Depending on the structure of the adjacency matrix of EEG nodes, we design three EEG representation graphs: N-Graph (NG), D-Graph (DG), and S-Graph (SG). The graph definition enhances the brain area representation ability of EEG signals but decreases the effect of noise on each EEG node by combining neighboring nodes to represent the central one. This design also empowers the EEG representations to be robust to missing value issues by embedding each EEG node with the assist of its neighboring nodes instead of only relying on the measurement of itself. 1) N-Graph: Fig. 3 shows an example positioning of 64-channel EEG nodes.”

extracting a time feature corresponding to the electrophysiological signal, (page 2572 col 1 par 3: “Formally, the duration of interest is T-second long. Each of the n EEG nodes has a sensor recording sequence r_i∈[1,n] = [s_1^i, s_2^i, ..., s_k^i] ∈ R^k through k = T × f time points, where f is the sampling frequency and s_t^i is the measurement of the i-th EEG sensor at the time point t. Thus the raw EEG features of the trial T is a two-dimensional (2D) tensor X_T = [r_1; r_2; ...; r_n] ∈ R^(n×k), with one dimension representing EEG node and the other representing time series.” (Examiner note: Where these raw EEG features representing a time series are considered a time feature.))

and generating an embedded feature based on the channel association feature and the time feature; page 2573 fig2 description: “Overview of the graph Convolutional Recurrent Attention Model (G-CRAM) on EEG motor imagery classification.
We first represent the raw EEG measurement by a spatial graph drawn from EEG node positions (Examiner note: the channel association feature); then we apply a sliding window technique to crop continuous EEG sequences into temporal slices (Examiner note: Where this is an embedded feature that includes the time feature data and channel association data) and utilize a CNN layer to extract spatio-temporal features of each slice; (Examiner note: Where utilizing a CNN to extract spatio-temporal features is the use of the embedded feature to generate spatial features) a recurrent attention layer is used to extract the attentive temporal dynamic features; lastly the extracted features are classified to the target using a dense layer and a standard softmax classifier.”

and extracting a spatial feature corresponding to the embedded feature, and obtaining a classification result corresponding to the electrophysiological signal based on the spatial feature. (page 2573 fig2 description: “Overview of the graph Convolutional Recurrent Attention Model (G-CRAM) on EEG motor imagery classification.
We first represent the raw EEG measurement by a spatial graph drawn from EEG node positions (Examiner note: the channel association feature); then we apply a sliding window technique to crop continuous EEG sequences into temporal slices (Examiner note: Where this is an embedded feature that includes the time feature data and channel association data) and utilize a CNN layer to extract spatio-temporal features of each slice; (Examiner note: Where utilizing a CNN to extract spatio-temporal features is the use of the embedded feature to generate spatial features) a recurrent attention layer is used to extract the attentive temporal dynamic features; lastly the extracted features are classified to the target using a dense layer and a standard softmax classifier.”)

[media_image6.png]

Examiner note: Examiner provides an annotated figure depicting the workflow of claim 1 with respect to the prior art.

Claim 2: The method according to claim 1, further comprising: acquiring an electrophysiological signal classification model corresponding to the acquisition device, page 2573 fig 2: “Overview of the graph Convolutional Recurrent Attention Model (G-CRAM) on EEG motor imagery classification.” See also table II on page 2575, which compares different models and implies that a model is acquired, where all the above models use/correspond to an acquisition device. wherein the electrophysiological signal classification model is based on the channel association feature corresponding to the acquisition device, and is configured to perform (page 2578 col 1 par 2: “given an EEG headset, the coordinates (locations) of its EEG nodes would be fixed, and consequently, the graph representation could be achieved. Therefore, the proposed graph representation approach is adaptive to different amounts of EEG nodes” Examiner note: this also implies that the channel association feature corresponds to the device (headset) as well.
) the extracting the time feature corresponding to the electrophysiological signal, and the generating the embedded feature based on the channel association feature and the time feature, (page 2573 fig2 description: “Overview of the graph Convolutional Recurrent Attention Model (G-CRAM) on EEG motor imagery classification. We first represent the raw EEG measurement by a spatial graph drawn from EEG node positions (Examiner note: the channel association feature); then we apply a sliding window technique to crop continuous EEG sequences into temporal slices (Examiner note: Where this is an embedded feature that includes extracting the time feature data and channel association data) and utilize a CNN layer to extract spatio-temporal features of each slice; (Examiner note: Where utilizing a CNN to extract spatio-temporal features is the use of the embedded feature to generate spatial features) a recurrent attention layer is used to extract the attentive temporal dynamic features; lastly the extracted features are classified to the target using a dense layer and a standard softmax classifier.”) and the extracting the spatial feature corresponding to the embedded feature, and the obtaining the classification result corresponding to the electrophysiological signal based on the spatial feature. (page 2573 fig2 description: “Overview of the graph Convolutional Recurrent Attention Model (G-CRAM) on EEG motor imagery classification.
We first represent the raw EEG measurement by a spatial graph drawn from EEG node positions (Examiner note: the channel association feature); then we apply a sliding window technique to crop continuous EEG sequences into temporal slices (Examiner note: Where this is an embedded feature that includes the time feature data and channel association data) and utilize a CNN layer to extract spatio-temporal features of each slice; (Examiner note: Where utilizing a CNN to extract spatio-temporal features is the use of the embedded feature to generate spatial features) a recurrent attention layer is used to extract the attentive temporal dynamic features; lastly the extracted features are classified to the target using a dense layer and a standard softmax classifier.”)

Claim 3: The method according to claim 2, wherein the acquiring the electrophysiological signal classification model comprises: determining a classification task; (page 2578 col 1 par 6: “This paper targets the EEG motor imagery classification task and proposes a novel deep learning approach.”) acquiring multiple candidate electrophysiological signal classification models corresponding to the classification task, the candidate electrophysiological signal classification models having corresponding candidate acquisition devices; (page 2574 col 2 par 2: “The PhysioNet dataset and BCI CIV2a dataset we used are roughly balanced. Thus we evaluate the proposed model with classification accuracy and the Area Under ROC Curve (ROC-AUC). Table II presents the overall comparison results, and the detailed results can be found in the supporting documents. Because deep learning is an advanced technique that relies on proper structure design, we compare with several deep learning approaches with various model structures and feature embedding strategies.
To make a fair comparison and show the superior structure of the proposed approach, the most recent state-of-the-art approaches whose implementation code is available online are selected for comparison.” Examiner note: Where the comparison of different models in table II for a same task implies acquiring those models for the same task. Where all models correspond to a standard acquisition device, as implied by page 2578 col 1 par 2: “given an EEG headset, the coordinates (locations) of its EEG nodes would be fixed, and consequently, the graph representation could be achieved. Therefore, the proposed graph representation approach is adaptive to different amounts of EEG nodes”) and selecting the electrophysiological signal classification model corresponding to the acquisition device from the candidate electrophysiological signal classification models. (page 2573 fig2 description: “Overview of the graph Convolutional Recurrent Attention Model (G-CRAM) on EEG motor imagery classification (Examiner note: Where the use of this model implies the selection of that model for the task amongst a plurality of options.). We first represent the raw EEG measurement by a spatial graph drawn from EEG node positions (Examiner note: corresponds to the acquisition device); then we apply a sliding window technique to crop continuous EEG sequences into temporal slices and utilize a CNN layer to extract spatio-temporal features of each slice; a recurrent attention layer is used to extract the attentive temporal dynamic features; lastly the extracted features are classified to the target using a dense layer and a standard softmax classifier.”)

Claim 4: generating the channel association feature by mapping spatial locations of the multiple acquisition channels to a same plane to obtain plane locations of the multiple acquisition channels; page 2572 col 2 par 3: “1) N-Graph: Fig. 3 shows an example positioning of 64-channel EEG nodes. In the 2D position projection (Fig. 3(b)), each node has several naturally neighbors (up, down, left, right, up-left, up-right, down-left, and down-right); for example, the node s11 has eight neighboring nodes (s3,s4,s5,s12,s19,s18,s17,s10). Based on this observation, we build a connection between two naturally neighboring EEG nodes.”

[media_image7.png]

defining a channel region based on the plane locations of two or more of the multiple acquisition channels; page 2572 col 2 par 3: “1) N-Graph: Fig. 3 shows an example positioning of 64-channel EEG nodes. In the 2D position projection (Fig. 3(b)), each node has several naturally neighbors (up, down, left, right, up-left, up-right, down-left, and down-right); for example, the node s11 has eight neighboring nodes (s3,s4,s5,s12,s19,s18,s17,s10). Based on this observation, we build a connection between two naturally neighboring EEG nodes.” Examiner note: please see claim interpretation. Where a channel region is interpreted as a region where there are connected nodes. Please also see the figure below, annotated in red.

[media_image8.png]

associating the two or more of the multiple acquisition channels in response to a determination that a region shape feature of the channel region is a preset shape feature and that there is no other acquisition channel in the channel region; page 2572 col 2 par 3: “1) N-Graph: Fig. 3 shows an example positioning of 64-channel EEG nodes. In the 2D position projection (Fig. 3(b)), each node has several naturally neighbors (up, down, left, right, up-left, up-right, down-left, and down-right); for example, the node s11 has eight neighboring nodes (s3,s4,s5,s12,s19,s18,s17,s10).
Based on this observation, we build a connection between two naturally neighboring EEG nodes.” Examiner note: Where the requirement that a node build a connection with only its neighbor is interpreted as forming a region shape where there is no other acquisition channel in the channel region. Where the requirement that a connection be between two nodes limits the preset shape feature to a straight line. and generating the channel association feature based on the associated two or more acquisition channels. Page 2572 col 2 par 3: “Based on this observation, we build a connection between two naturally neighboring EEG nodes. Formally, the edge set can be denoted as Ev = {sisj|(i, j) ∈ H}, where H is the set of naturally neighboring EEG nodes. We also regard each node as connecting to itself. We can define the adjacency matrix of the N-Graph as a square matrix |V|×|V| with its binary element representing whether two EEG nodes are neighboring to each other:” Examiner note: Where this adjacency matrix is the channel association feature. Claim 6: The method according to claim 4, wherein the generating the channel association feature comprises: generating an initial channel association feature based on the associated two or more acquisition channels; (page 2572 col 2 par 3: “1) N-Graph: Fig. 3 shows an example positioning of 64-channel EEG nodes. In the 2D position projection (Fig. 3(b)), each node has several naturally neighbors (up, down, left, right, up-left, up-right, down-left, and down right); for example, the node s11 has eight neighboring nodes (s3,s4,s5,s12,s19,s18,s17,s10). Based on this observation, we build a connection between two naturally neighboring EEG nodes. Formally, the edge set can be denoted as Ev = {sisj|(i, j) ∈ H}, where H is the set of naturally neighboring EEG nodes. We also regard each node as connecting to itself.
We can define the adjacency matrix of the N-Graph as a square matrix |V|×|V| with its binary element representing whether two EEG nodes are neighboring to each other:” Examiner note: Where this is an initial channel association feature pre-normalization. and performing normalization processing on the initial channel association feature to obtain the channel association feature. (page 2572 par 4: “We then follow the spectral graph theory [31] to normalize the adjacency matrix: … Then the N-Graph representation Zv of raw EEG signals is the matrix product of the normalized N-Graph adjacency matrix Âv and the raw EEG trial XT”) Claim 7: The method according to claim 6, wherein the performing the normalization processing comprises: generating an initial channel association matrix based on the initial channel association feature; (page 2572 col 2 par 3: “1) N-Graph: Fig. 3 shows an example positioning of 64-channel EEG nodes. In the 2D position projection (Fig. 3(b)), each node has several naturally neighbors (up, down, left, right, up-left, up-right, down-left, and down right); for example, the node s11 has eight neighboring nodes (s3,s4,s5,s12,s19,s18,s17,s10). Based on this observation, we build a connection between two naturally neighboring EEG nodes. Formally, the edge set can be denoted as Ev = {sisj|(i, j) ∈ H}, where H is the set of naturally neighboring EEG nodes. We also regard each node as connecting to itself. We can define the adjacency matrix of the N-Graph as a square matrix |V|×|V| with its binary element representing whether two EEG nodes are neighboring to each other:”) acquiring a cell matrix, and fusing the cell matrix and the initial channel association matrix to obtain an intermediate channel association matrix; page 2572 col 2 par 3: “We also regard each node as connecting to itself.
We can define the adjacency matrix of the N-Graph as a square matrix |V|×|V| with its binary element representing whether two EEG nodes are neighboring to each other:” Examiner note: please see claim interpretation. Based on the examiner's understanding of the specification, each node connecting to itself in the prior art is shorthand for acquiring a cell matrix of 1’s in the diagonals and fusing it with the initial association matrix. See also the equation shown below which outlines this process, where Ã = A + I (where I is interpreted as an identity matrix, which is the same as a cell matrix as defined in this application's specification). See also claim interpretation, which outlines this process with respect to this application's specification. [media_image9.png] acquiring a diagonal matrix corresponding to the initial channel association matrix, and fusing the diagonal matrix and the intermediate channel association matrix to obtain a target channel association matrix; See math equations below from page 2572 which match this process as described. Where the diagonal matrix is given as D, and the fusion is given as Â. See also claim interpretation, which outlines this process with respect to this application's specification. [media_image10.png] and obtaining the channel association feature based on the target channel association matrix. [media_image11.png] Examiner note: Where the examiner under broadest reasonable interpretation understands that the target channel association matrix is an example of a target channel association feature.
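Examiner note: for illustration only, the normalization chain mapped above (fuse a cell/identity matrix into the initial matrix, then fuse a diagonal degree matrix) can be sketched as follows. This is an illustrative sketch of the examiner's understanding of Â = D^(−1/2)(A + I)D^(−1/2); the function and variable names are the examiner's own and appear in neither the claims nor Zhang_2020.

```python
import numpy as np

# Sketch of the claim 7 mapping as understood above:
#   A       : initial channel association matrix (binary adjacency, claim 8)
#   A + I   : fusion with the cell (identity) matrix -> intermediate matrix
#   D       : diagonal degree matrix of the intermediate matrix
#   A_hat   : target channel association matrix D^(-1/2) (A + I) D^(-1/2)
def normalize_association(A):
    A_tilde = A + np.eye(A.shape[0])           # fuse cell matrix
    deg = A_tilde.sum(axis=1)                  # per-channel degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # diagonal matrix fusion
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Toy 3-channel device: channel 1 associated with channels 0 and 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = normalize_association(A)
```

The channel association feature would then follow from A_hat (e.g., the graph representation Zv as the matrix product of A_hat and the raw trial, per the passage quoted above).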
Claim 8: The method according to claim 7, wherein the generating the initial channel association matrix comprises any one of: determining matrix dimensionality of the initial channel association matrix based on a channel quantity of the acquisition channels on the acquisition device, setting matrix values corresponding to the associated acquisition channels to a first preset threshold, and setting matrix values corresponding to other acquisition channels to a second preset threshold, to obtain the initial channel association matrix; or determining matrix dimensionality of the initial channel association matrix based on a channel quantity of the acquisition channels on the acquisition device, (page 2572 col 2 par 1-2: “The node set V = {si|i ∈ [1,n]} includes all the EEG nodes in an experiment. … We also regard each node as connecting to itself. We can define the adjacency matrix of the N-Graph as a square matrix |V|×|V| with its binary element representing whether two EEG nodes are neighboring to each other”) determining matrix values corresponding to the associated acquisition channels based on spatial location distances between the associated acquisition channels (page 2573 col 1 par 2: “Considering the above disadvantages, we define a distance based EEG graph called D-Graph, which uses the real-world 3D distance between EEG nodes rather than the binary connections between naturally neighboring nodes. The adjacency matrix of D-Graph has the distance between two neighboring EEG nodes as its element instead of binary elements indicating neighboring or not.”), and setting matrix values corresponding to other acquisition channels to the second preset threshold, (page 2573 col 1 par 2: “In practice, two issues should be addressed before constructing the adjacency matrix: 1) how to define neighboring nodes; 2) how to define the distance between a node and itself.
For the first problem, we regard the two EEG nodes are neighboring if the distance between two nodes is smaller than the average value of the distance set L. For the second problem, the distance between a node and itself is defined as the average distance of other neighboring nodes to this node. Therefore, we define the elements of the adjacency matrix Ad as:” Examiner note: See attached figure which depicts adjacent nodes having a value according to distance and non-neighboring/adjacent nodes having a preset value of zero. [media_image12.png] to obtain the initial channel association matrix. Page 2573 col 2 par 1: “Therefore, we define the elements of the adjacency matrix Ad as:” Claim 10: The method according to claim 1, wherein the time feature comprises multiple intermediate time features, (page 2573 Fig. 2: “we apply a sliding window technique to crop continuous EEG sequences into temporal slices” Examiner note: intermediate time features) and the generating the embedded feature comprises: separately embedding the channel association feature into the intermediate time features to obtain initial embedded features corresponding to the intermediate time features; page 2574 col 1 par 2: “Let the interval between two neighbouring slices be p, then m = int((k − w)/p) slices are obtained from one EEG trial. We specifically design a CNN to encode the spatio-temporal information within a temporal slice.” Examiner note: Where the information encoded within a temporal slice implies that the information is embedded into each intermediate time feature (i.e., slice) separately. See annotated figure below. and obtaining the embedded feature based on the initial embedded features. See annotated figure 2.
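Examiner note: for illustration only, the sliding window cropping quoted above (m = int((k − w)/p) slices obtained from one n × k EEG trial) can be sketched as follows; the helper name is the examiner's own and appears in neither the claims nor Zhang_2020.

```python
import numpy as np

# Crop an n x k trial into m temporal slices of width w at step p,
# with m = int((k - w) / p) per the formula quoted from page 2574.
def crop_slices(X, w, p):
    n, k = X.shape
    m = int((k - w) / p)
    return [X[:, i * p : i * p + w] for i in range(m)]

X = np.arange(4 * 20).reshape(4, 20)   # toy trial: n = 4 channels, k = 20 time points
slices = crop_slices(X, w=8, p=4)      # m = int((20 - 8) / 4) = 3 slices
```

Under the claim 10 mapping, each slice would be an intermediate time feature into which the channel association feature is separately embedded.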
Page 2573 below: [media_image13.png] Claim 11: The method according to claim 10, wherein the channel association feature comprises association sub-features corresponding to the acquisition channels respectively, (page 2572 col 1 par 4: “Formally, the duration of interest is T-second long. Each of the n EEG nodes has a sensor recording sequence ri∈[1,n] = [si 1,si 2,...,si k] ∈ Rk through k = T × f time points, where f is the sampling frequency and si t is the measurement of the ith EEG sensor at the time point t.”) and the intermediate time features comprise time sub-features corresponding to the acquisition channels respectively; (page 2572 col 1 par 4: “Thus the raw EEG features of the trial T is a two-dimensional (2D) tensor XT = [r1; r2; ...; rn] ∈ Rn×k with one dimension representing EEG node and the other representing time series.”) Examiner note: Where this shows that the time feature comprises a sub-feature that corresponds to each EEG node. and the separately embedding the channel association feature into the intermediate time features to obtain the initial embedded features corresponding to the intermediate time features comprises: in a current intermediate time feature, embedding an association sub-feature corresponding to a same acquisition channel into a corresponding time sub-feature, to obtain embedded sub-features corresponding to the acquisition channels respectively; page 2574 col 1 par 2: “Although deep networks have strong learning abilities, deeper is not always better for EEG analysis [43]. Table I gives the detailed configuration of the proposed spatio-temporal encoding network. We use one CNN layer and one pooling layer. The height of the CNN kernel is set to n, same to the amount of EEG nodes, for considering all EEG nodes at once. The width of the kernel is extended to 45 for exploring long temporal dynamics. The output amount of CNN filters is empirically set to 40.
The convolutional filtering thus can uncover the spatio-temporal information across different EEG nodes.” Examiner note: Where, as the examiner understands, time sub-features corresponding to acquisition channels are the retrieval of the temporal information across the different EEG nodes. Where this process is occurring as part of the embedding step, see figure 2 below as well for visualization. [media_image14.png] and obtaining an initial embedded feature corresponding to the current intermediate time feature based on the embedded sub-features. See annotated figure 2. Page 2573 below: [media_image13.png] Claim 12: The method according to claim 1, wherein the extracting the spatial feature comprises: acquiring at least one spatial convolution kernel; (page 2574 col 1 par 3: “Although deep networks have strong learning abilities, deeper is not always better for EEG analysis [43]. Table I gives the detailed configuration of the proposed spatio-temporal encoding network. We use one CNN layer and one pooling layer. The height of the CNN kernel is set to n, same to the amount of EEG nodes, for considering all EEG nodes at once. The width of the kernel is extended to 45 for exploring long temporal dynamics. The output amount of CNN filters is empirically set to 40. The convolutional filtering thus can uncover the spatio-temporal information across different EEG nodes.”) Examiner note: See claim 1 for mapping, where the “spatio-temporal encoding” is interpreted as extracting spatial features from the embedded layer. performing spatial feature extraction on the embedded feature based on the at least one spatial convolution kernel to obtain at least one intermediate spatial feature corresponding to the at least one spatial convolution kernel respectively; page 2574 col 1 par 3: “We use one CNN layer and one pooling layer.
The height of the CNN kernel is set to n, same to the amount of EEG nodes, for considering all EEG nodes at once. The width of the kernel is extended to 45 for exploring long temporal dynamics. The output amount of CNN filters is empirically set to 40. The convolutional filtering thus can uncover the spatio-temporal information across different EEG nodes. Each temporal slice is encoded to higher-level representations {Ui ∈ Rwc | Ui = Conv(Qi), i ∈ [1, m]}. The activation function used in the convolutional operations is the Exponential Linear Unit (ELU) function. We use the valid padding option. Thus the output of the CNN layer has the height of 1. A maxpooling layer is then applied to reduce the number of parameters and extract important information.” Examiner note: Where this important information is understood to be intermediate spatial features. and obtaining the spatial feature based on the at least one intermediate spatial feature. Page 2574 col 1 par 4: “Following the spatio-temporal feature extraction within single EEG temporal slices, a recurrent attention network is introduced to discover the attentive temporal dependencies across different EEG temporal slices. In traditional recurrent networks, the features that are accumulated from the previous time step are usually adopted for further analysis.” Examiner note: Where the intermediate spatial features based on a time are used to determine the spatial feature. Claim 13: The method according to claim 1, wherein the obtaining the classification result comprises: performing nonlinear processing on the spatial feature to obtain a target fitting feature; (page 2574 Fig. 4: “Illustration of the self-attention module. A nonlinear encoding layer first transforms the encoded EEG temporal slices and the results are scaled and normalized to get the attention weight of each temporal slice.
Lastly, the attention weight is multiplied with its corresponding encoded features.” (Examiner note: As part of the spatial feature encoding, nonlinear processing is done; please note that this application does not differentiate what a “target fitting feature” is, only that it includes nonlinear processing on a spatial feature.) … “Each slice representation hi is first non-linearly transformed into a latent space: Hi = tanh(Wi hi + bi), Hi ∈ Rha, where Wi ∈ Rl×ha and bi ∈ Rha are the input-to-hidden weight matrix and bias for a hidden layer of size ha. The softmax activation function, defined as softmax(xi) = (1/Z) exp(xi) with Z = Σi exp(xi), is applied to the nonlinear latent representation Hi to obtain the weight of importance for each slice:” and performing classification processing on the target fitting feature to obtain the classification result. Page 2574 col 2 par 4: “Lastly, in the interest of computational efficiency, a weighted sum of all EEG temporal slices is computed to a slice-focused representation: A = Σi Vi hi, A ∈ Rl. The attentive temporal dynamic representation A is fed into a standard softmax classifier: P = softmax(WA + b), where W and b are weight and bias matrices respectively of the motor imagery classification layers. Then the cross-entropy error over all labeled samples is evaluated: L = −Σc Ŷc log(Pc), where Ŷc and Pc are the label and the classification probability of motor imagery strategy c respectively. The network weights and biases are trained with batch gradient descent. The final classification result is defined as the motor imagery strategy with max classification probability.” Claim 14: Zhang_2020 teaches A method for classification processing of an electrophysiological signal, the method comprising: (page 2570 col 1 par 2: “Motor imagery classification is the basic to a BCI, which supports motor rehabilitation of post-stroke patients [1].
The EEG signals, (Examiner note: an electrophysiological signal) which are captured from a human’s scalp and thus reflect the electrical activities of the human cortex, is one of the most active physiological cues to build a BCI system. … page 2572 col 1 par 3: “The beep and cue are used to notice and indicate the subject to perform the motor imagery task. The duration of motor imagery is of research interest. Formally, the duration of interest is T-second long. Each of the n EEG nodes has a sensor recording sequence ri∈[1,n] = [si 1,si 2,...,si k] ∈ Rk through k = T × f time points, where f is the sampling frequency and si t is the measurement of the ith EEG sensor at the time point t. Thus the raw EEG features of the trial T is a two-dimensional (2D) tensor XT = [r1; r2; ...; rn] ∈ Rn×k with one dimension representing EEG node and the other representing time series. Our goal is to make motor imagery classification of the EEG trials XT.” acquiring a training electrophysiological signal collected by an acquisition device and a training label corresponding to the training electrophysiological signal; page 2575 col 1 par 5: “2) BCICIV2a Dataset: The BCICIV2a dataset contains EEG signals of 22 nodes recorded with nine healthy subjects and two sessions on two different days. Each session consists of 288 four-second trials of motor imagery per subject (imagining the movement of the left hand, the right hand, the feet, and the tongue). The signals were sampled with 250 Hz and bandpass-filtered between 0.5 Hz and 100 Hz by the dataset provider before release. The original dataset uses the 288 trials of the first session as training and the 288 trials of the second session as a test. However, in the subject-independent scenario, the original dataset needs to be re-split by subject with the leave-one-subject-out manner.
Consequently, nine evaluation datasets (A01-A09) are achieved, each of which has 576 trials (288 trials × 2 sessions) of one subject as a test and 4608 trials (288 trials × 2 sessions × 8 subjects) of the remaining eight subjects as training.” inputting the training electrophysiological signal into an initial electrophysiological signal classification model corresponding to the acquisition device (page 2575 col 1 par 5: “The original dataset uses the 288 trials of the first session as training”), the initial electrophysiological signal classification model comprising a channel association feature corresponding to the acquisition device, the channel association feature indicating spatial locations of multiple acquisition channels of the acquisition device; (page 2576 col 1 par 2: “Compared with the pure deep learning models which do not have particular data representations, like EEGNet and CTCNN, our proposed graph representation embeds the spatial relationship of EEG nodes, which facilitates the following neural network to analyze EEG signals.” … “our graph scheme introduces an adjacency matrix to optimize the raw data to a more effective embedding.”) extracting a time feature corresponding to the training electrophysiological signal through the initial electrophysiological signal classification model, (page 2578 col 1 par 4: “The size of the temporal slice is an important hyper-parameter. In light of the evidence that the EEG signal presents multiple time scales, such as both local and global oscillations in time [43], [54], [55], we design the temporal slices for local temporal feature extraction. Then the embedded local temporal features are input into a recurrent attention module to obtain attentive global temporal features. A large or small size of the temporal slice would degrade the model performance.
We carefully tuned the size of temporal slices and reported the best results” (Examiner note: this implies that the model extracts time features as temporal slices.) and generating an embedded feature based on the channel association feature and the time feature; (page 2576 col 2 par 1: “Specifically, instead of summing up the convolutional results along the EEG node dimension, we retain the convolutional results in the EEG node dimension and average the results along the time dimension.” (Examiner note: an embedded feature)). extracting a spatial feature corresponding to the embedded feature through the initial electrophysiological signal classification model, and obtaining a predicted label corresponding to the training electrophysiological signal based on the spatial feature; Therefore, a feature vector of size n is achieved after ELU activation with each element representing the extracted features of each EEG node from the CNN layer. (Examiner note: Where the extracted features are spatial features). The feature vector is then normalized to [−1, 1] and visualized with topographic scalp plots. Fig. 5 presents 10 representative topographic scalp plots of convolutional feature maps for each evaluation datasets. As shown in Fig. 5, the CNN layer focuses on relatively small detailed brain areas, which is important for successful EEG feature extractions [12]. Furthermore, it is consistent with previous reports (Examiner note: where being consistent with previous reports implies that this agrees with the training data.) [49], [50] that the CNN layer emphasizes on the central (FC, C, and CP) and frontal (F)/pre-frontal (Fp) areas of a human brain. More specifically, some convolutional feature maps activate at the three EEG nodes C3, C4, and Cz, which are widely demonstrated holding the most distinguishable information regarding EEG-based motor imagery classification in previous studies [47], [51]. For example, as presented in Fig.
5(a) of the PhysioNet dataset, kernel # 1 focuses on Cz; kernel # 22 focuses on C4; kernel # 5, # 24, # 35 and # 37 focus on C3. For the BCICIV2a dataset presented in Fig. 5(b), kernel # 8, # 26 and # 27 focus on Cz; kernel # 17 and # 27 focus on C4; kernel # 6, # 22, # 27 and # 32 focus on C3. Besides, the CNN layer also learns to target other EEG nodes, which is helpful to discriminate different motor imagery tasks as well [52], [53], especially for different subjects and paradigms. Therefore, the spatio-temporal encoding layer is able to act as a spatial filter to extract features of the most distinguishable EEG nodes. and adjusting a model parameter of the initial electrophysiological signal classification model based on a difference between the training label and the predicted label, until a convergence condition is met, to obtain a trained electrophysiological signal classification model. Page 2575 col 2 par 1: “We make use of the TensorFlow framework for a GPU-based implementation using matrix multiplications. The stochastic gradient descent with Adam update rule is used to minimize the cross-entropy loss function. The network parameters are optimized with a learning rate of 10^−5. Dropout regularization is applied after the CNN layer and the recurrent network layer with the dropout probability of 0.5. The hidden state size of the LSTM cell l is 64. The non-linear transformation size of the self-attention is 512. The proposed model has 16 hyper-parameters and 420,356 trainable parameters” Examiner note: See also the abstract on page 2570, which implies that it is known in the art that an EEG model is trained, and that this model is a trained model: “Thus the research of directly extending a pre-trained model to new users is particularly desired and indispensable.” Claim 15: The limitations of claim 15 are substantially the same as those of claim 1 and are therefore rejected due to the same reasons as outlined above for claim 1.
Additionally, Zhang_2020 teaches the additional limitations of “An apparatus for classification processing of an electrophysiological signal, comprising: processing circuitry configured to acquire an electrophysiological signal collected by an acquisition device;” (page 2575 col 2 par 1: “We make use of the TensorFlow framework for a GPU-based implementation using matrix multiplications. The stochastic gradient descent with Adam update rule is used to minimize the cross-entropy loss function.”) Examiner note: Where a GPU-based implementation implies processing circuitry, which is performing the function of the experiment as outlined by Zhang_2020. Claim 16: The limitations of claim 16 are substantially the same as those of claim 2 except that it depends from claim 15. Therefore, this claim is rejected due to the same reasons as outlined above for claims 2 and 15. Claim 17: The limitations of claim 17 are substantially the same as those of claim 3 except that it depends from claim 15. Therefore, this claim is rejected due to the same reasons as outlined above for claims 3 and 15. Claim 18: The limitations of claim 18 are substantially the same as those of claim 4 except that it depends from claim 15. Therefore, this claim is rejected due to the same reasons as outlined above for claims 4 and 15. Claim 20: The limitations of claim 20 are substantially the same as those of claim 6 except that it depends from claim 18. Therefore, this claim is rejected due to the same reasons as outlined above for claims 6 and 18. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang_2020 and further in view of “Multivariate Time Series Classification with Hierarchical Variational Graph Pooling” (Xu_2020). Claim 9: Zhang_2020 makes obvious The method according to claim 1, wherein the electrophysiological signal comprises electrophysiological signals corresponding to each of the multiple acquisition channels respectively, (page 2572 col 1 par 4: “Each of the n EEG nodes has a sensor recording sequence ri∈[1,n] = [si 1,si 2,...,si k] ∈ Rk through k = T × f time points, where f is the sampling frequency and si t is the measurement of the ith EEG sensor at the time point t.”) Zhang_2020 does not explicitly recite and the extracting the time feature comprises: acquiring at least one temporal convolution kernel; separately extracting time sub-features of each of the electrophysiological signals based on a same temporal convolution kernel to obtain multiple time sub-features corresponding to each of the at least one temporal convolution kernel; generating an intermediate time feature based on the multiple time sub-features corresponding to the same temporal convolution kernel to obtain at least one intermediate time feature corresponding to the at least one temporal convolution kernel respectively; and obtaining the time feature based on the at least one intermediate time feature.
Xu_2020, however, makes obvious and the extracting the time feature comprises: acquiring at least one temporal convolution kernel; (page 3 col 2 par 5: “Therefore, it is reasonable and necessary to extract the features of the time series in units of multiple specific periods. To simulate this situation, we use multiple CNN filters with different receptive fields, namely kernel sizes, to extract features at multiple time scales.”) separately extracting time sub-features of each of the electrophysiological signals based on a same temporal convolution kernel to obtain multiple time sub-features corresponding to each of the at least one temporal convolution kernel; (page 3 col 2 par 5: “Therefore, it is reasonable and necessary to extract the features of the time series in units of multiple specific periods. To simulate this situation, we use multiple CNN filters with different receptive fields, namely kernel sizes, to extract features at multiple time scales. For the i-th CNN filters, given the input time series X, the feature vector hi is extracted as follows: hi = σ(Wi ∗ X + b), where ∗ denotes the convolution operation, σ is a nonlinear activation function, such as ReLU(x) = max(0, x), Wi represents the i-th CNN kernel and b is the bias.”) generating an intermediate time feature based on the multiple time sub-features corresponding to the same temporal convolution kernel to obtain at least one intermediate time feature corresponding to the at least one temporal convolution kernel respectively; (page 3 col 2 par 5: “Therefore, it is reasonable and necessary to extract the features of the time series in units of multiple specific periods. To simulate this situation, we use multiple CNN filters with different receptive fields, namely kernel sizes, to extract features at multiple time scales. For the i-th CNN filters, given the input time series X, the feature vector hi is extracted as follows: hi = σ(Wi ∗ X + b), where ∗ denotes the convolution operation, σ is a nonlinear activation function, such as ReLU(x) = max(0, x), Wi represents the i-th CNN kernel and b is the bias.”) and obtaining the time feature based on the at least one intermediate time feature. (page 3 col 2 par 6: “And the final feature vector can be expressed as h = [h1, h2, ..., hp], where p is the CNN filters number and [∗] means concatenate operation. In this way, features under different period are extracted, which provides effective information for time series classification.”) Examiner note: the mathematical formulas are hard to see in the reference; please refer to the prior art source for more clarity. Where the examiner understands that extracting a feature vector is the generation of an intermediate time feature, where the final feature vector is a concatenation of the intermediate time features. Zhang_2020 and Xu_2020 are analogous art to the claimed invention because they are from the same field of endeavor called EEG electrode signal classification. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Zhang_2020 and Xu_2020. The rationale for doing so would have been the use of a known technique to improve a similar device in the same way. Zhang_2020 teaches the use of a CNN window to extract time feature temporal slices, page 2573 Fig. 2: “Overview of the graph Convolutional Recurrent Attention Model (G-CRAM) on EEG motor imagery classification. We first represent the raw EEG measurement by a spatial graph drawn from EEG node positions; then we apply a sliding window technique to crop continuous EEG sequences into temporal slices and utilize a CNN layer to extract spatio-temporal features of each slice;” See also page 2574 section “Spatio-Temporal Encoding.” Zhang_2020 does not explicitly mention that this is done only on the time dimension and separately to the embedded layer.
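Examiner note: for illustration only, the temporal-only filtering attributed to Xu_2020 (several 1-D kernel widths applied along the time dimension of each channel, with the resulting features concatenated as h = [h1, ..., hp]) can be sketched as follows; the function name is the examiner's own and appears in neither the claims nor the cited references.

```python
import numpy as np

# Apply each temporal kernel W_i along the time axis of every channel,
# h_i = ReLU(W_i * X + b), collecting one intermediate time feature per kernel.
def temporal_features(X, kernels, b=0.0):
    feats = []
    for W in kernels:                                   # one kernel per time scale
        rows = [np.convolve(row, W, mode="valid") + b for row in X]
        feats.append(np.maximum(np.vstack(rows), 0.0))  # ReLU activation
    return feats

X = np.ones((3, 10))                                    # toy signal: 3 channels, 10 points
feats = temporal_features(X, kernels=[np.ones(3), np.ones(5)])
```

Each element of feats corresponds to one temporal convolution kernel; concatenating them would yield the final time feature as described in Xu_2020.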
In the context of Zhang_2020, the kernels act on the embedded layer, which contains both the time and space dimensions. See page 2574 Table I, which explicitly mentions the use of kernels in the layers. However, Xu_2020 shows that a temporal kernel is a known way to perform the function outlined by Zhang_2020 separately for the temporal dimension. Xu_2020 states, page 3 col 2 par 5: “Therefore, it is reasonable and necessary to extract the features of the time series in units of multiple specific periods. To simulate this situation, we use multiple CNN filters with different receptive fields, namely kernel sizes, to extract features at multiple time scales.” Therefore, using temporal kernels exclusively is a known technique for applying CNN filters to temporal features. Zhang_2020 outlines a base device that uses CNN filters on an embedded layer to classify the temporal aspects of a signal. Xu_2020 is a comparable device, which specifically outlines that the filters are temporal kernels only. One ordinarily skilled in the art would appreciate and understand that the CNN windows of Zhang_2020 are also kernels that apply temporally, and function in a similar way to the claimed invention, with the only difference being the ordering, where Zhang_2020 embeds the time features first before applying the kernel. Therefore, it would have been obvious to combine the CNN windows of Zhang_2020 with the use of a temporal kernel of Xu_2020 for the benefit of extracting the temporal features of the EEG signal to obtain the invention as specified in the claims. Potentially Allowable Subject Matter Claim 5: Claim 5 is objected to as being dependent upon a rejected base claim, but may potentially be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as well as overcoming all 112 and 101 rejections in a way that also overcomes the prior art.
Claim 19: Claim 19 is objected to as being dependent upon a rejected base claim, but may potentially be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as well as overcoming all 112 and 101 rejections in a way that also overcomes the prior art.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. "Graph Theory at the Service of Electroencephalograms" by Nantie D. Iakovidou discusses the use of graph theory to find connections between channels in EEG signals for signal processing. The article outlines the nodes as being fully connected in a weighted graph, as well as the use of adjacency matrices and analyzing the signals as a time series, but does not discuss the use of triangles to determine channel associations.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMAD HUSSAM SHALABY, whose telephone number is (571) 272-7414. The examiner can normally be reached Mon-Fri, 7:30am - 5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emerson Puente, can be reached at (571) 272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.H.S./
Examiner, Art Unit 2187

/EMERSON C PUENTE/
Supervisory Patent Examiner, Art Unit 2187

Prosecution Timeline

Nov 22, 2022: Application Filed
Feb 11, 2026: Non-Final Rejection (§101, §102, §103)
Mar 04, 2026: Interview Requested
Mar 11, 2026: Applicant Interview (Telephonic)
Mar 12, 2026: Examiner Interview Summary

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.