Prosecution Insights
Last updated: April 19, 2026
Application No. 18/230,448

FEATURES EXTRACTION NETWORK FOR ESTIMATING NEURAL ACTIVITY FROM ELECTRICAL RECORDINGS

Non-Final OA: §101, §102, §103, §112

Filed: Aug 04, 2023
Examiner: MENGISTU, TEWODROS E
Art Unit: 2127
Tech Center: 2100 (Computer Architecture & Software)
Assignee: California Institute of Technology
OA Round: 1 (Non-Final)

Grant Probability: 49% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 4y 5m
Grant Probability with Interview: 77%

Examiner Intelligence

Career Allow Rate: 49% (62 granted / 127 resolved; -6.2% vs TC avg)
Interview Lift: +28.2% (strong), measured over resolved cases with an interview
Typical Timeline: 4y 5m average prosecution; 34 applications currently pending
Career History: 161 total applications across all art units
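For readers unfamiliar with the dashboard's terminology, the "With Interview" figure appears to be the career allow rate plus the interview lift taken as percentage points; the additive reading is an assumption of this note, not something the dashboard states:

```python
# Sketch of how the dashboard figures appear to relate (assumption:
# "interview lift" is an additive percentage-point adjustment).
career_allow_rate = 49.0   # % of this examiner's resolved cases allowed
interview_lift = 28.2      # percentage-point lift observed with an interview

with_interview = career_allow_rate + interview_lift
print(round(with_interview))  # 77, matching the "With Interview" figure
```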

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Deltas are measured against a Tech Center average estimate; based on career data from 127 resolved cases.

Office Action

§101 §102 §103 §112
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Claims 1-20 are pending for examination. Claims 1, 15, and 20 are independent.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

"feature extraction module" in claim 1;
"decoder" in claim 1; and
"a partial least squares (PLS) regression module" in claim 11.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid that interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION. The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 12 recites the limitation "the training" in line 1. There is insufficient antecedent basis for this limitation in the claim.

Claim 20 recites the limitation "the human subject" in line 3. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 5-16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-20 each fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Regarding Claim 1

2A Prong 1:

(This step for extracting features is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)
(This step for determining a brain state output is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

"A brain interface system comprising:" (The system is understood to be a generic computer element. See MPEP 2106.05(f).)

"a set of neural signal sensors sensing neural signals from a brain;" (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)

"a feature extraction module including a plurality of feature engineering modules each coupled to the set of neural signal sensors, wherein the plurality of feature engineering modules are trained to" (The feature extraction module including a plurality of feature engineering modules is understood to be a generic computer element. See MPEP 2106.05(f).)

"a decoder coupled to the feature extraction module, the decoder" (The decoder is understood to be a generic computer element. See MPEP 2106.05(f).)

The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they are insignificant extra-solution activity combined with generic computer functions implemented to perform the abstract idea identified above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

"A brain interface system comprising:" (The system is understood to be a generic computer element. See MPEP 2106.05(f).)

"a set of neural signal sensors sensing neural signals from a brain;" (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and is well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II)(i).)

"a feature extraction module including a plurality of feature engineering modules each coupled to the set of neural signal sensors, wherein the plurality of feature engineering modules are trained to" (The feature extraction module including a plurality of feature engineering modules is understood to be a generic computer element. See MPEP 2106.05(f).)

"a decoder coupled to the feature extraction module, the decoder" (The decoder is understood to be a generic computer element. See MPEP 2106.05(f).)

The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity combined with generic computer functions implemented to perform the abstract idea identified above.

Regarding Claim 15

2A Prong 1: "A method of deriving features from a neural signal for determining brain state signals from a human subject, the method comprising: determining features from the plurality of neural signals" (This step for extracting features is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

"receiving a plurality of neural signals from the human subject via a plurality of neural signal sensors;" (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)
"from a feature extraction network having a plurality of feature engineering modules, each trained to" (The feature extraction network including a plurality of feature engineering modules is understood to be a generic computer element. See MPEP 2106.05(f).)

The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they are insignificant extra-solution activity combined with generic computer functions implemented to perform the abstract idea identified above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

"receiving a plurality of neural signals from the human subject via a plurality of neural signal sensors;" (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and is well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II)(i).)

"from a feature extraction network having a plurality of feature engineering modules, each trained to" (The feature extraction network including a plurality of feature engineering modules is understood to be a generic computer element. See MPEP 2106.05(f).)

The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity combined with generic computer functions implemented to perform the abstract idea identified above.

Regarding Claim 20

2A Prong 1: "determine features from the plurality of neural signals"

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

"A non-transitory computer-readable medium having machine-readable instructions stored thereon, which when executed by a processor, cause the processor to:" (The non-transitory computer-readable medium having instructions and a processor is understood to be a generic computer element. See MPEP 2106.05(f).)

"receive a plurality of neural signals from the human subject via a plurality of neural sensors;" (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)

"from a feature extraction network having a plurality of feature engineering modules, each trained to" (The feature extraction network including a plurality of feature engineering modules is understood to be a generic computer element. See MPEP 2106.05(f).)

"decode the features via a trained decoder to output brain state signals to an output device." (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)

The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they are insignificant extra-solution activity combined with generic computer functions implemented to perform the abstract idea identified above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

"A non-transitory computer-readable medium having machine-readable instructions stored thereon, which when executed by a processor, cause the processor to:" (The non-transitory computer-readable medium having instructions and a processor is understood to be a generic computer element. See MPEP 2106.05(f).)
"receive a plurality of neural signals from the human subject via a plurality of neural sensors;" (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and is well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II)(i).)

"from a feature extraction network having a plurality of feature engineering modules, each trained to" (The feature extraction network including a plurality of feature engineering modules is understood to be a generic computer element. See MPEP 2106.05(f).)

"decode the features via a trained decoder to output brain state signals to an output device." (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and is well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II)(i).)

The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity combined with generic computer functions implemented to perform the abstract idea identified above.

Regarding Claim 2

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2: "wherein the brain state output is a kinematics control, and the system further comprising an output interface providing control signals based on the kinematics output from the decoder." (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)

2B: "wherein the brain state output is a kinematics control, and the system further comprising an output interface providing control signals based on the kinematics output from the decoder." (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and is well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II)(i).)

Regarding Claim 5

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "wherein the set of neural signal sensors is one of a set of implantable electrodes or wearable electrodes." (The specification of data to be stored is understood to be a field-of-use limitation. The limitation further specifies the neural signal sensors. See MPEP 2106.05(h).)

Regarding Claim 6

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "wherein the brain state output is an indication of a brain disorder." (The specification of data to be stored is understood to be a field-of-use limitation. The limitation further specifies the brain state output. See MPEP 2106.05(h).)

Regarding Claim 7

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "wherein each of the feature engineering modules include an upper convolutional filter coupled to the neural signal sensors and an activation function to output a feature from the neural signal sensors." (This step is adding the words "apply it" (or an equivalent) to the judicial exception, or merely applying a convolutional neural network as a tool to perform the abstract idea. See MPEP 2106.05(f).)

Regarding Claim 8

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2: "wherein each of the feature engineering modules include a lower convolutional filter coupled to the neural signal sensors, wherein the lower convolutional filter outputs an abstract signal to a subsequent feature engineering module, and wherein the lower convolutional filter of a last feature engineering module outputs a final feature."
(This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)

2B: "wherein each of the feature engineering modules include a lower convolutional filter coupled to the neural signal sensors, wherein the lower convolutional filter outputs an abstract signal to a subsequent feature engineering module, and wherein the lower convolutional filter of a last feature engineering module outputs a final feature." (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and is well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II)(i).)

Regarding Claims 9 and 19

2A Prong 1: The claims do not recite any abstract idea.

2A Prong 2 & 2B: "wherein each of the plurality of feature engineering modules use identical parameters for all neural signal sensors used in a training data set for training the feature engineering modules." (The specification of data to be stored is understood to be a field-of-use limitation. The limitation further specifies the feature engineering modules. See MPEP 2106.05(h).)

Regarding Claim 10

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "wherein each of the plurality of feature engineering modules include an adaptive average pooling layer coupled to the activation function to summarize a pattern of features into a single feature." (The specification of data to be stored is understood to be a field-of-use limitation. The limitation further specifies the feature engineering modules. See MPEP 2106.05(h).)

Regarding Claim 11

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "The system of claim 7, further comprising either a partial least squares (PLS) regression module coupled to the output of the feature extraction module or a fully-connected layer of nodes, to reduce the plurality of features to a subset of features." (This step is adding the words "apply it" (or an equivalent) to the judicial exception, or merely applying a feature extraction module as a tool to perform the abstract idea. See MPEP 2106.05(f).)

Regarding Claim 12

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "The system of claim 7, wherein the training of the feature engineering modules includes adjusting the convolutional filters from back propagation of error between the brain state output of the decoder from a training data set and a desired brain state output." (This step is adding the words "apply it" (or an equivalent) to the judicial exception, or merely applying a feature extraction module as a tool to perform the abstract idea. See MPEP 2106.05(f).)

Regarding Claim 13

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "The system of claim 1, wherein the decoder is one of a linear decoder, a Support Vector Regression (SVR) decoder, a Long-Short Term Recurrent Neural Network (LSTM) decoder, a Recalibrated Feedback Intention-Trained Kalman filter (ReFIT-KF) decoder, or a Preferential Subspace Identification (PSID) decoder." (The specification of data to be stored is understood to be a field-of-use limitation. The limitation further specifies the decoder. See MPEP 2106.05(h).)

Regarding Claim 14

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "The system of claim 1, wherein a batch normalization is applied to the inputs of a training data set for training the feature engineering modules."
(This step is adding the words "apply it" (or an equivalent) to the judicial exception, or merely applying batch normalization as a tool to perform the abstract idea. See MPEP 2106.05(f).)

Regarding Claim 16

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "The method of claim 15, further comprising decoding the features via a trained decoder to output brain state signals to an output interface." (This step is adding the words "apply it" (or an equivalent) to the judicial exception, or merely applying a trained decoder as a tool to perform the abstract idea. See MPEP 2106.05(f).)

Regarding Claim 18

2A Prong 1: The claim does not recite any abstract idea.

2A Prong 2 & 2B: "The method of claim 15, wherein each of the feature engineering modules include an upper convolutional filter coupled to the neural signal sensors, a lower convolutional filter coupled to the neural signal sensors, and an activation function to output a feature from the neural signal sensors." (This step is adding the words "apply it" (or an equivalent) to the judicial exception, or merely applying a convolutional neural network as a tool to perform the abstract idea. See MPEP 2106.05(f).)

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless:

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 5, 13, 15-16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Angle et al. (US 2019/0246929 A1, hereinafter "Angle").

Regarding Claim 1

Angle discloses: "A brain interface system" ([Para 0117 and Fig 12] disclose a neural data analysis system.) comprising:

"a set of neural signal sensors sensing neural signals from a brain" ([Para 0032-0033, 0078, 0069, Fig 1, and Fig 6] describe neural information collected using a neural interface probe implanted into a brain, wherein the neural information includes waveforms and electrodes (i.e., neural signals from a brain).);

"a feature extraction module including a plurality of feature engineering modules each coupled to the set of neural signal sensors, wherein the plurality of feature engineering modules are trained to extract a plurality of features from the sensed neural signals" ([Para 0034, 0073, Fig 1, Fig 6, and Fig 8] describe a feature extraction module 110 including a plurality of discrete event detectors 180 (i.e., feature engineering modules) configured to perform feature extraction. [Para 0078, 0069, and Fig 6] also describe a plurality of electrodes 182 (i.e., neural signal sensors) that may be electronically connected to the plurality of discrete event detectors 180 (i.e., feature engineering modules each coupled to neural signal sensors).);

"a decoder coupled to the feature extraction module, the decoder determining a brain state output from a pattern of the plurality of features" ([Para 0027, 0039-0040, 0047, 0091, and Fig 8-10] describe approximator module 130 (i.e., decoder) connected to the feature extraction module.
The approximator module 130 is configured to generate a set of neural code (i.e., the determined brain state output) from coalesced events 106 (i.e., the plurality of features). [Para 0040] further describes that the approximator may find patterns.)

Regarding Claim 15

Angle discloses: "A method of deriving features from a neural signal for determining brain state signals from a human subject" ([Para 0117, 0047, Fig 1, and Fig 11-12]), the method comprising:

"receiving a plurality of neural signals from the human subject via a plurality of neural signal sensors" ([Para 0024, 0032-0033, 0078, 0069, Fig 1, and Fig 6] describe neural information collected using a neural interface probe implanted into a human brain, wherein the neural information includes waveforms and electrodes (i.e., neural signals from a brain).); and

"determining features from the plurality of neural signals from a feature extraction network having a plurality of feature engineering modules, each trained to extract a feature from the neural signals." ([Para 0031, 0034, 0069, 0073, 0078, Fig 1, Fig 6, and Fig 8] describe a feature extraction module 110 including a plurality of discrete event detectors 180 (i.e., feature engineering modules) configured to perform feature extraction.)

Regarding Claim 20

Angle discloses: "A non-transitory computer-readable medium having machine-readable instructions stored thereon, which when executed by a processor" ([Para 0008-0012, 0117-0119, Fig 1, and Fig 11-12]), cause the processor to:

"receive a plurality of neural signals from the human subject via a plurality of neural sensors" ([Para 0024, 0032-0033, 0078, 0069, Fig 1, and Fig 6] describe neural information collected using a neural interface probe implanted into a human brain, wherein the neural information includes waveforms and electrodes (i.e., neural signals from a brain).);

"determine features from the plurality of neural signals from a feature extraction network having a plurality of feature engineering modules, each trained to extract a feature from the neural signal" ([Para 0031, 0034, 0069, 0073, 0078, Fig 1, Fig 6, and Fig 8] describe a feature extraction module 110 including a plurality of discrete event detectors 180 (i.e., feature engineering modules) configured to perform feature extraction.); and

"decode the features via a trained decoder to output brain state signals to an output device" ([Para 0039, 0047, 0091, and Fig 8-10] describe approximator module 130 (i.e., decoder) configured to generate a set of neural code (i.e., the determined brain state output) outputted to external equipment (i.e., output device).).

Regarding Claim 2

Angle discloses: "The system of claim 1, wherein the brain state output is a kinematics control, and the system further comprising an output interface providing control signals based on the kinematics output from the decoder." ([Para 0039, 0047, and Fig 12] describe the neural code output used for a prosthetic controller and movements.)

Regarding Claim 5

Angle discloses: "The system of claim 1, wherein the set of neural signal sensors is one of a set of implantable electrodes or wearable electrodes." ([Para 0032-0033, 0078, 0069, Fig 1, Fig 6, and Fig 11-12] describe neural information collected using a neural interface probe implanted into a brain, wherein the neural information includes electrodes.)

Regarding Claim 13

Angle discloses: "The system of claim 1, wherein the decoder is one of a linear decoder, a Support Vector Regression (SVR) decoder, a Long-Short Term Recurrent Neural Network (LSTM) decoder, a Recalibrated Feedback Intention-Trained Kalman filter (ReFIT-KF) decoder, or a Preferential Subspace Identification (PSID) decoder." ([Para 0028, 0103, 0111] disclose LSTM and linear decoders.)
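To make the signal path at issue concrete, the following is a minimal, hypothetical sketch of the claim 1 pipeline as the rejections characterize it: sensors, per-channel feature engineering modules, and a decoder. All names, shapes, and the tanh/mean choices are illustrative assumptions, not taken from the application; the linear readout merely stands in for the claim 13 decoder alternatives (SVR, LSTM, ReFIT-KF, PSID), and nothing here is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the "set of neural signal sensors": 4 channels x 100 samples.
signals = rng.standard_normal((4, 100))

def feature_engineering_module(channel, kernel):
    """One per-channel module (illustrative): convolve the signal,
    apply an activation function, and pool to a single feature."""
    filtered = np.convolve(channel, kernel, mode="valid")
    activated = np.tanh(filtered)        # activation function
    return float(activated.mean())       # average pooling to one feature

kernel = rng.standard_normal(5)          # untrained placeholder filter weights
features = np.array([feature_engineering_module(ch, kernel) for ch in signals])

# A linear decoder mapping the feature pattern to a brain state output.
weights = rng.standard_normal(features.shape[0])
brain_state_output = float(features @ weights)
print(features.shape, np.isfinite(brain_state_output))
```

In the claimed system the filter and decoder weights would be learned jointly; here they are random placeholders, so only the data flow (sensors to features to output) tracks the claim language.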
Regarding Claim 16 Angle discloses: The method of claim 15, further comprising decoding the features via a trained decoder to output brain state signals to an output interface. ([Para 0039, 0047, 0091, and Fig 8-10] describes approximator module 130 (i.e. decoder) configured to generate a set of neural code (i.e. determined brain state output) outputted to external equipment (i.e. output interface).). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 3-4, and 17 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Angle in view of Even Chen et al. (US 11630516 B1, hereinafter "Chen").

Regarding Claim 3, Angle discloses: The system of claim 2. Angle does not explicitly disclose: wherein the output interface is a display and wherein the control signals manipulate a cursor on a display. However, Chen discloses, in the same field of endeavor: wherein the output interface is a display and wherein the control signals manipulate a cursor on a display. ([Col 5 lines 21-42 and Fig 5-9] describes moving and clicking a cursor.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the brain interface controller function disclosed by Chen in the method for processing neural signals disclosed by Angle to manipulate a cursor on a display. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the brain interface controller feature disclosed by Chen, as all the references are in the field of neural signal processing. A person of ordinary skill in the art would have been motivated to perform the combination to be able to utilize neural signals to perform control operations.

Regarding Claim 4, Angle in view of Chen discloses: The system of claim 2, further comprising a mechanical actuator coupled to the output interface, wherein the control signals manipulate the mechanical actuator. ([Col 5 lines 40-55 and Fig 5-9], Chen describes controlling the movement of a motorized wheelchair and robotic arm.)

Regarding Claim 17, Angle in view of Chen discloses: The method of claim 16, wherein the brain state output is a kinematics control, and wherein the output interface provides control signals for a cursor on a display or a mechanical actuator based on the kinematics output from the decoder. ([Col 5 lines 40-55 and Fig 5-9], Chen)

Claims 6-9, 12, 14, and 18-19 are rejected under 35 U.S.C.
103 as being unpatentable over Angle in view of Adamos et al. (US 20240366140 A1, hereinafter "Adamos").

Regarding Claim 6, Angle discloses: The system of claim 1. Angle does not explicitly disclose: wherein the brain state output is an indication of a brain disorder. However, Adamos discloses, in the same field of endeavor: wherein the brain state output is an indication of a brain disorder. ([Para 0058 and Fig 1] describes a disorder classification.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the learnable-filter function disclosed by Adamos in the method of processing neural signals disclosed by Angle to output an indication of a brain disorder. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the learnable-filter feature disclosed by Adamos, as all the references are in the field of neural signal processing. A person of ordinary skill in the art would have been motivated to perform the combination to be able to identify a clinical state and/or a diagnosis.

Regarding Claim 7, Angle in view of Adamos discloses: The system of claim 1, wherein each of the feature engineering modules includes an upper convolutional filter coupled to the neural signal sensors and an activation function to output a feature from the neural signal sensors. ([Para 0038-0041, 0060-0061, and Fig 1-3] Adamos describes convolutional filters with higher values and a sigmoid activation function.)

Regarding Claim 8, Angle in view of Adamos discloses: The system of claim 7, wherein each of the feature engineering modules includes a lower convolutional filter coupled to the neural signal sensors, wherein the lower convolutional filter outputs an abstract signal to a subsequent feature engineering module, and wherein the lower convolutional filter of a last feature engineering module outputs a final feature.
([Para 0035-0040, 0050, 0087, and Fig 1-3] Adamos describes filters with lower values.)

Regarding Claim 9, Angle in view of Adamos discloses: The system of claim 8, wherein each of the plurality of feature engineering modules uses identical parameters for all neural signal sensors used in a training data set for training the feature engineering modules. ([Para 0047-0055 and Fig 1-3] Adamos describes Gaussian filters with trainable shape parameters.)

Regarding Claim 12, Angle in view of Adamos discloses: The system of claim 7, wherein the training of the feature engineering modules includes adjusting the convolutional filters from back propagation of error between the brain state output of the decoder from a training data set and a desired brain state output. ([Para 0049, 0098, and Fig 1-3] Adamos describes training with backpropagation.)

Regarding Claim 14, Angle in view of Adamos discloses: The system of claim 1, wherein a batch normalization is applied to the inputs of a training data set for training the feature engineering modules. ([Para 0039-0040, 0073-0074] Adamos describes batch normalization.)

Regarding Claim 18, Angle in view of Adamos discloses: The method of claim 15, wherein each of the feature engineering modules includes an upper convolutional filter coupled to the neural signal sensors, a lower convolutional filter coupled to the neural signal sensors, and an activation function to output a feature from the neural signal sensors. ([Para 0038-0041, 0060-0061, and Fig 1-3] Adamos describes convolutional filters with higher values and a sigmoid activation function. [Para 0035-0040, 0050, 0087, and Fig 1-3] Adamos describes filters with lower values.)

Regarding Claim 19, Angle in view of Adamos discloses: The method of claim 18, wherein each of the plurality of feature engineering modules uses identical parameters for all neural signal sensors used in a training set for training the feature engineering modules.
([Para 0047-0055 and Fig 1-3] Adamos describes Gaussian filters with trainable shape parameters.)

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Angle in view of Adamos and Liu et al. (US 20230075309 A1, hereinafter "Liu").

Regarding Claim 10, Angle in view of Adamos discloses: The system of claim 7. Angle in view of Adamos does not explicitly disclose: wherein each of the plurality of feature engineering modules includes an adaptive average pooling layer coupled to the activation function to summarize a pattern of features into a single feature. However, Liu discloses, in the same field of endeavor: wherein each of the plurality of feature engineering modules includes an adaptive average pooling layer coupled to the activation function to summarize a pattern of features into a single feature. ([Para 0198-0200 and Fig 10] describe feature maps compressed through an average pooling layer to fuse information.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the signal classification function disclosed by Liu in the method of Angle in view of Adamos to adopt an adaptive average pooling layer. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the signal classification feature disclosed by Liu, as all the references are in the field of neural signal processing. A person of ordinary skill in the art would have been motivated to perform the combination to be able to classify signals utilizing an average pooling layer.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Angle in view of Adamos and Kamousi et al. (US 20210085235 A1, hereinafter "Kamousi").
Regarding Claim 11, Angle in view of Adamos discloses: The system of claim 7. Angle in view of Adamos does not explicitly disclose: further comprising either a partial least squares (PLS) regression module coupled to the output of the feature extraction module or a fully-connected layer of nodes, to reduce the plurality of features to a subset of features. However, Kamousi discloses, in the same field of endeavor: further comprising either a partial least squares (PLS) regression module coupled to the output of the feature extraction module or a fully-connected layer of nodes, to reduce the plurality of features to a subset of features. ([Para 0063-0064] describes partial least squares for dimensionality reduction.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the feature dimensionality reduction disclosed by Kamousi in the method of Angle in view of Adamos to perform PLS. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature dimensionality reduction disclosed by Kamousi, as all the references are in the field of feature engineering. A person of ordinary skill in the art would have been motivated to perform the combination to be able to improve the constructed sets of features.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. HASSANTABAR et al. (US 20250078998 A1) describes neural networks for mental health classification.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TEWODROS E MENGISTU, whose telephone number is (571) 270-7714. The examiner can normally be reached Mon-Fri, 9:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABDULLAH KAWSAR, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TEWODROS E MENGISTU/
Examiner, Art Unit 2127

Prosecution Timeline

Aug 04, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566817
AUTOMATIC MACHINE LEARNING MODEL EVALUATION
2y 5m to grant Granted Mar 03, 2026
Patent 12482032
Selective Data Rejection for Computationally Efficient Distributed Analytics Platform
2y 5m to grant Granted Nov 25, 2025
Patent 12450465
NEURAL NETWORK SYSTEM, NEURAL NETWORK METHOD, AND PROGRAM
2y 5m to grant Granted Oct 21, 2025
Patent 12400252
ARTIFICIAL INTELLIGENCE BASED TRANSACTIONS CONTEXTUALIZATION PLATFORM
2y 5m to grant Granted Aug 26, 2025
Patent 12380369
HYPERPARAMETER TUNING IN AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) MODELS
2y 5m to grant Granted Aug 05, 2025
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
49%
Grant Probability
77%
With Interview (+28.2%)
4y 5m
Median Time to Grant
Low
PTA Risk
Based on 127 resolved cases by this examiner. Grant probability derived from career allow rate.
