Prosecution Insights
Last updated: April 19, 2026
Application No. 17/563,700

ADAPTIVE TRAINING METHOD OF A BRAIN COMPUTER INTERFACE USING A PHYSICAL MENTAL STATE DETECTION

Non-Final OA: §103, §112
Filed: Dec 28, 2021
Examiner: SAX, STEVEN PAUL
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: COMMISSARIAT À L'ÉNERGIE ATOMIQUE ET AUX ÉNERGIES ALTERNATIVES
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70%, above average (320 granted / 460 resolved; +14.6% vs TC avg)
Interview Lift: +44.8% (strong; among resolved cases with an interview)
Typical Timeline: 4y 0m average prosecution; 20 applications currently pending
Career History: 480 total applications across all art units
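The headline figures above are simple ratios. A minimal sketch, assuming the +44.8% lift is a percentage-point difference in allowance rate between interviewed and non-interviewed resolved cases (the 99% with-interview figure appears in the projections on this page; all variable names are illustrative):

```python
# Arithmetic behind the examiner dashboard figures above.
# Assumption: "interview lift" is a percentage-point difference in
# allowance rate; names below are illustrative, not from the source.

granted, resolved = 320, 460
career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 69.6%, displayed as 70%

with_interview_rate = 99.0   # allowance rate for resolved cases with an interview
interview_lift = 44.8        # reported lift, in percentage points
without_interview_rate = with_interview_rate - interview_lift
print(f"Implied rate without interview: {without_interview_rate:.1f}%")  # 54.2%
```

Under that assumption, roughly 54% of this examiner's non-interviewed cases allow, which is what makes the interview lift look so large.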

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 62.5% (+22.5% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 460 resolved cases.
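The "vs TC avg" deltas above are measured against a Tech Center average estimate; subtracting each delta from the examiner's rate recovers the implied baseline. Illustrative arithmetic only, with names of my own choosing:

```python
# Recover the implied Tech Center baseline behind each "vs TC avg" delta
# shown above (illustrative; variable names are not from the source).
rates_and_deltas = {
    "§101": (10.4, -29.6),
    "§103": (62.5, +22.5),
    "§102": (6.7, -33.3),
    "§112": (5.5, -34.5),
}
implied_tc_avg = {s: round(rate - delta, 1)
                  for s, (rate, delta) in rates_and_deltas.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

All four deltas point back to the same ~40% baseline, which suggests a single Tech-Center-wide estimate rather than per-statute averages.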

Office Action

Rejections: §103, §112
Detailed Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. The Preliminary Amendment filed 3/14/22 has been entered. Claims 1-12 are pending.

Claim Rejections - 35 USC § 112

3. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites “…decoding a satisfaction/error mental state of the subject… the mental state being representative of a conformity of the trajectory with the neural command; generating training data from satisfaction/error decoded…”, but it is not clear whether a distinction is being made between a satisfaction/error mental state, just a mental state, and just a satisfaction/error. For example, if simply the mental state is representative of a conformity of the trajectory with the neural command, then it appears “satisfaction/error mental state” means either a “satisfaction mental state” or an “error mental state”, and this is reinforced by the recitation in claim 2. But if this is the case, then what exactly is just “satisfaction/error decoded”, and how would this be different from the mental state value of either satisfaction or error? Thus it is not clear what exactly is being decoded and what is being used to generate the training data, and the claim is vague and indefinite. Dependent claims 2-12 do not remedy the issue and are thus rejected as well. For purposes of examination, “satisfaction/error decoded” will be interpreted to mean the satisfaction/error mental state decoded, i.e., decoding the mental state to be either a satisfaction mental state or an error mental state.

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Hewage et al. “Hewage” (WO 2019092456 A1) and Pilly et al. “Pilly” (US 20210240265 A1). (Please see the attached copy of Hewage that numbers paragraphs in the same format as that used in this Action.)

7.
Regarding claim 1, Hewage shows a method for training a brain computer interface configured to receive a plurality of electrophysiological signals expressing a neural command of a subject (para 289-290, 510 show receiving neurological electrophysiological signals from a neural interface representing a neural command) during a plurality of observation windows associated with observation instants (para 341, 422, 511, 522 show the neurological signal data may be received over a plurality of observation windows associated with different sampling/observation timestamps); the brain computer interface using a predictive model to deduce at each observation instant command data from the observation data (para 273-274, 290, 510 show the neurological signals are used to deduce classified, labeled command data to control a body part or prosthetic; para 348-349, 356 show that labeling and classifying this command/control data is performed via a predictive model; para 355, 357-359 show the predictive model specifically performs the labeling and classifying at each sampling time interval), the command data being configured to control at least one effector to perform a trajectory (para 9, 290, and especially 510 show the neural command data used to control a bodily variable or prosthetic to move in a calibrated way mapped according to patterns of neurological signals), the training method comprising:

at each observation instant, decoding a satisfaction/error mental state of the subject from the observation data using a mental state decoder trained beforehand, the mental state being representative of a conformity of the trajectory with the neural command (para 285-287 show an example of receiving the bodily variable trajectory response data; para 305, 310, 355-359 show the predictive model, already trained on a bodily variable training dataset, specifically performs the labeling and classifying of the performance based on the response data at each sampling time interval; para 431, 441, 452 show decoding and labeling the error of the bodily variable relative to the neurological data; and para 569-570 show using the decoded neurological data at each timestamp to determine a neural intent/command and compare it with the bodily variable data to determine the error/performance state accordingly. Note para 381, 540, 546, 559 show that when the performance data is determined to match within an allowable error threshold, the state becomes a reward state);

generating training data from the satisfaction/error (mental state) decoded at a given observation instant, and from a pair formed by the observation data and the command data at a preceding observation instant (para 423, 426, 431, 569 show the training data is generated from the reward (satisfaction)/error performance mental state as derived above, decoded at a given timestamp, and from comparing the pair of bodily variable response data and neural command data as described above, which per para 430-431 is the previously labeled data and thus from the preceding timestamp); and

updating parameters of the predictive model by minimizing a cost function on the generated training data (para 493, 500-501 show minimizing a cost function on the generated training dataset to update parameters used in the predictive model).

Hewage shows the electrophysiological signals being preprocessed in a preprocessing module to form at each observation instant an observation data matrix (para 449, 452 show preparing the neurological signal data as a matrix for each time step), and Hewage shows storing the observation and command data (para 510, 526, 568, 570 show storing the neurological data and the labeled data representing the commands, and para 568, 570 show that this is also used as training data), but Hewage does not explicitly mention forming the (observation or command) data tensor per se.

Pilly, however, does form a data tensor for neurological data (Figure 7, para 42 show preprocessing the EEG and other neurological physiological signal data to form a tensor). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to form the data into a tensor as is done in Pilly, in the neural interface method of Hewage, and thus form the observation data and command data into the observation data tensor and command data tensor, because it would provide an efficient way to store and process the neurological data in a system that utilizes mathematical data arrays such as matrices.

8. Regarding claim 2, in addition to that mentioned for claim 1, Hewage shows training the mental state decoder in a previous phase by presenting simultaneously to the subject a movement setpoint and a trajectory (para 285-287 show bodily variable data for the training of the mental state decoder predictive model is acquired by directing the subject's movement of the body part along a path; this would have a setpoint presented with the path/trajectory to the subject concurrently so that it can be monitored and recorded accordingly), the observation data tensor (obviousness of having the data form a tensor is shown in Pilly, as explained for claim 1) being labelled with a satisfaction mental state when the trajectory is in accordance with the setpoint and with an error mental state when it deviates therefrom (para 381, 540, 546, 559 show that when the performance data is determined to match within an allowable error threshold, the state becomes a reward state, and if not is a type of error state).

9.
Regarding claim 3, in addition to that mentioned for claim 2, the mental state decoder provides at each observation instant a prediction of the mental state in a form of a binary value (YtD mental_state) as well as an estimation of a degree of certainty of the prediction (Yt mental_state) (Hewage para 425, 480, 569 for example show the timestamped data; para 485, 500-501, 551 show, for each of the data, predicting the mental state with an associated estimate or degree of certainty. This is used later, such as in para 551, 564, to determine selection of a particular machine learning technique based on prediction uncertainty [thus certainty = 1 - uncertainty] history).

10. Regarding claim 4, in addition to that mentioned for claim 3, the prediction made by the predictive model is based on a classification, the command data (tensor) being obtained from a most probable class predicted by the predictive model (Hewage para 273-274, 290, 510 show the neurological signals are used to deduce classified, labeled command data to control a body part or prosthetic; Hewage para 461 shows the class label as true for the class predicted with maximum probability by the predictive model).

11. Regarding claim 5, in addition to that mentioned for claim 4, Hewage shows, if the mental state predicted at an observation instant is a satisfaction state, generating the training data only from the observation data (tensor) and from the command data (tensor) at the preceding observation instant, if the degree of certainty of the predicted mental state is greater than a first predetermined threshold value (para 381, 540, 546, 559 show that when the performance data is determined to match within an allowable error threshold, the state becomes a reward state. Para 423, 426, 431, 569 show the training data is generated from the reward [satisfaction]/error performance mental state, as described for claim 1, decoded at a given timestamp, and from comparing the pair of bodily variable response data and neural command data as described above, which per para 430-431 is the previously labeled data and thus from the preceding timestamp. Para 441 shows this training dataset is generated only from data whose error estimate is below a predetermined threshold, which is equivalent to the certainty (i.e., the estimate of not being in error) being higher than a predetermined threshold).

12. Regarding claim 6, in addition to that mentioned for claim 4, Hewage shows, if the mental state predicted at an observation instant is an error state, generating the training data only from the observation data (tensor) and from the command data (tensor) at the preceding observation instant, if the degree of certainty of the predicted mental state is greater than a second predetermined threshold value (in addition to that mentioned for claim 4, Hewage para 442, 528, 546 show that even for an error state, the procedure of generating the training dataset, which is from the observation and command data (tensor) at the preceding timestamp, still may occur based on the error estimate being lower than a second threshold, which is equivalent to the certainty (i.e., the estimate of not being in error) being higher than a second predetermined threshold).

13.
Regarding claim 7, in addition to that mentioned for claim 4, if the mental state predicted at an observation instant is an error state, the training data generated comprise the observation data (tensor) at the preceding observation instant as well as a command data (tensor) obtained from a second most probable class predicted by the predictive model at the preceding observation instant (in addition to that mentioned for claim 4, Hewage para 461 shows the class label for the data as true for the class predicted with maximum probability by the predictive model; Hewage para 442-444 show that for an error state this is obtained for a second maximum probability class from the preceding timestamp).

14. Regarding claim 8, in addition to that mentioned for claim 4, the cost function used for updating the parameters of the predictive model expresses a square deviation between the command data (tensor) predicted by the model and that provided by the training data, the square deviation being weighted by a degree of certainty predicted by the mental state decoder during the generation of the training data, the square deviation thus weighted being added to the training data set (para 413, 470, 493 show the cost function used for updating the predictive model uses a Gaussian process. This by definition uses the square of the deviation between the prediction and the provided [training] data, and is weighted by the certainty [derived directly from the error estimate in Hewage] as a probability distribution. Hewage para 470, 493 then show adding this generated output to the training dataset).

15. Regarding claim 9, in addition to that mentioned for claim 4, the prediction made by the predictive model is based on a linear or multilinear regression (Hewage para 413, 470, 493 show the Gaussian process, which uses a linear regression).

16. Regarding claim 10, in addition to that mentioned for claim 9, if the mental state predicted at an observation instant is an error state, the training data are not generated (Hewage para 441 shows the training dataset may be generated only from data whose error estimate is below a predetermined threshold, and that if the data is above the threshold it is considered an error state and not used), and if the predicted mental state is a satisfaction state, the training data are only generated from the observation data tensor and from the command data tensor at the preceding observation instant, if the degree of certainty of the predicted mental state is greater than a first predetermined threshold value (Hewage para 423, 426, 431, 569 show the training data may be generated from the reward (satisfaction)/error performance mental state, as described for claim 1, decoded at a given timestamp, and from comparing the pair of bodily variable response data and neural command data as described for claim 1, which per para 430-431 is the previously labeled data and thus from the preceding timestamp. Para 441 shows this training dataset is generated only from data whose error estimate is below a predetermined threshold, which is equivalent to the certainty (i.e., the estimate of not being in error) being higher than a predetermined threshold).

17.
Regarding claim 11, in addition to that mentioned for claim 9, regardless of the state predicted at an observation instant, the training data are generated from the observation data (tensor) and from the command data (tensor) at the preceding observation instant, the training data then being associated with the degree of certainty of the prediction of the predicted mental state (Yt mental_state) (Hewage para 423, 426, 431, 569 show the training data may be generated from the reward (satisfaction)/error performance mental state, as described for claim 1, decoded at a given timestamp, and from comparing the pair of bodily variable response data and neural command data as described for claim 1, which per para 430-431 is the previously labeled data and thus from the preceding timestamp. Hewage para 425, 480, 569 for example show the timestamped data, and para 485, 500-501, 551 show, for each of the data, predicting the mental state with an associated estimate or degree of certainty).

18. Regarding claim 12, in addition to that mentioned for claim 9, the cost function used for updating the parameters of the predictive model depends on a square deviation between the command data (tensor) predicted by the predictive model and that provided by the training data, the square deviation being weighted by a factor depending increasingly on the degree of certainty of the predicted mental state associated with the training data (para 413, 470, 493 show the cost function used for updating the predictive model uses a Gaussian process. This by definition uses the square of the deviation between the prediction and the provided [training] data, and is weighted by the certainty [derived directly from the error estimate in Hewage] as a probability distribution. Para 277 shows the additive white Gaussian noise process, in which the squared deviation of the observations from the predicted mean is effectively weighted by a factor that depends on the uncertainty (and thus certainty) of the predicted state. When the noise variance is high due to uncertainty, a penalty is applied that reduces the factor. Likewise, when the degree of certainty increases, the penalty has less of an effect and the factor increases). Note that the dependency on the square deviation is increasing when the mental state predicted during the generation of the training data was a satisfaction state and decreasing when the mental state is an error state, because per Hewage para 381, 540, 546, 559, the state becomes a reward/satisfaction state when the performance data is determined to match within an allowable error threshold. This means the certainty is higher for a reward state, so the weighting factor is increased and the dependency on the square deviation term is higher. Likewise, for an error state the certainty is less and the dependency is decreased.

Conclusion

19. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

a) Yoo (US 20200363869 A1) shows a brain computer interface for controlling effectors such as prosthetic limbs.
b) Fried-Oken (AU 2009204001 A1) shows a predictive model for a brain interface that uses Gaussian data techniques.
c) Liu (CN 111177911 B) shows predictive model techniques which minimize a cost function to update parameters and obtain an optimal parameter to adapt to a training dataset.

20. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN PAUL SAX, whose telephone number is (571) 272-4072. The examiner can normally be reached Monday - Friday, 9:30-6:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Usmaan Saeed, can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEVEN P SAX/
Primary Examiner, Art Unit 2146
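For orientation, the adaptive training loop recited in claim 1, read together with claims 4 and 7 (classification; on an error state the second most probable class supplies the label), can be sketched as below. This is a reading aid only: every name, shape, the softmax model, the stub decoder, and the certainty gate are hypothetical, not the applicant's implementation.

```python
import numpy as np

# Illustrative sketch of the claim-1 training loop: at each observation
# instant, decode a satisfaction/error mental state, generate training data
# from the *preceding* (observation, command) pair, and update the predictive
# model by minimizing a cost function. All names and the model are made up.

rng = np.random.default_rng(0)
n_classes, n_features = 3, 4
W = rng.normal(size=(n_classes, n_features))   # predictive-model parameters
lr = 0.1                                       # step size for the cost-minimizing update
threshold = 0.5                                # certainty gate (cf. claims 5-6)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode_mental_state(x):
    """Stub for the pre-trained mental-state decoder: returns a binary
    satisfaction/error prediction plus a degree of certainty (cf. claim 3)."""
    certainty = abs(float(np.tanh(x.sum())))   # toy certainty in [0, 1)
    state = "satisfaction" if x.sum() > 0 else "error"
    return state, certainty

history = None                                 # preceding (observation, probabilities) pair
for t in range(20):                            # observation instants
    x_t = rng.normal(size=n_features)          # observation data tensor
    probs = softmax(W @ x_t)                   # command data: class probabilities
    state, certainty = decode_mental_state(x_t)

    if history is not None and certainty > threshold:
        x_prev, p_prev = history
        order = np.argsort(p_prev)[::-1]
        # Satisfaction: keep the most probable class from the preceding
        # instant; error: take the second most probable class (cf. claim 7).
        label = order[0] if state == "satisfaction" else order[1]
        target = np.eye(n_classes)[label]
        # Certainty-weighted gradient step on a cross-entropy cost over the
        # generated pair (the claim's "minimizing a cost function").
        W -= lr * certainty * np.outer(softmax(W @ x_prev) - target, x_prev)

    history = (x_t, probs)                     # becomes the preceding pair next instant
```

The key structural point the sketch preserves is the one-instant lag: the decoded mental state at instant t labels the observation/command pair from instant t-1, gated by the decoder's certainty.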

Prosecution Timeline

Dec 28, 2021
Application Filed
Feb 21, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602537: METHODS FOR SERVING INTERACTIVE CONTENT TO A USER (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596343: GRAPHICAL ELEMENT SEARCH TECHNIQUE SELECTION, FUZZY LOGIC SELECTION OF ANCHORS AND TARGETS, AND/OR HIERARCHICAL GRAPHICAL ELEMENT IDENTIFICATION FOR ROBOTIC PROCESS AUTOMATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12547922: BENCHMARK-DRIVEN AUTOMATION FOR TUNING QUANTUM COMPUTERS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541708: TRUSTED AND DECENTRALIZED AGGREGATION FOR FEDERATED LEARNING (granted Feb 03, 2026; 2y 5m to grant)
Patent 12524691: CENTRAL CONTROLLER FOR A QUANTUM SYSTEM (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+44.8%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 460 resolved cases by this examiner. Grant probability derived from career allow rate.
