Prosecution Insights
Last updated: April 19, 2026
Application No. 16/950,095

UNIVERSAL COGNITIVE STATE DECODER BASED ON BRAIN SIGNAL AND METHOD AND APPARATUS FOR PREDICTING ULTRA-HIGH PERFORMANCE COMPLEX BEHAVIOR USING THE SAME

Current status: Final Rejection — §112
Filed: Nov 17, 2020
Examiner: BERHANU, ETSUB D
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Korea Advanced Institute Of Science And Technology
OA Round: 6 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 6m
Grant Probability with Interview: 90%

Examiner Intelligence

Career allow rate: 66% (516 granted / 787 resolved) — above the overall average, though -4.4% vs the Tech Center average
Interview lift: strong, +24.5% across resolved cases with an interview
Typical timeline: 3y 6m average prosecution; 50 applications currently pending
Career history: 837 total applications across all art units
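As a back-of-the-envelope check on the interview-lift figure, this sketch assumes the lift is the percentage-point gap between the with-interview and without-interview allow rates; the 65.5%/90% split below is illustrative, since the page reports only aggregate numbers.

```python
# Illustrative sketch of how a "+24.5% interview lift" figure can be derived.
# ASSUMPTION: lift = allow rate with interview - allow rate without interview.
# The 0.900 / 0.655 split is hypothetical, chosen to mirror the page's aggregates.

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap in allow rate between interviewed and non-interviewed cases."""
    return rate_with - rate_without

career_allow = 516 / 787                 # 516 granted of 787 resolved, shown as 66%
lift = interview_lift(0.900, 0.655)      # ~90% with interview vs ~65.5% without

print(f"career allow rate: {career_allow:.1%}")     # ≈ 65.6%
print(f"interview lift: {lift * 100:+.1f} points")  # ≈ +24.5 points
```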

Statute-Specific Performance

Statute   Rate    vs TC avg
§101      16.6%   -23.4%
§102      12.4%   -27.6%
§103      33.3%   -6.7%
§112      29.1%   -10.9%

Tech Center averages are estimates. Based on career data from 787 resolved cases.
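As a sanity check on the table above, the "vs TC avg" deltas can be inverted to recover the implied Tech Center average per statute. This sketch assumes the tool computes delta as examiner rate minus TC average; that convention is an assumption, not stated on the page.

```python
# Inverting the "vs TC avg" deltas from the statute table.
# ASSUMPTION: delta = examiner rate - TC average, so TC average = rate - delta.
stats = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (16.6, -23.4),
    "102": (12.4, -27.6),
    "103": (33.3, -6.7),
    "112": (29.1, -10.9),
}

implied_tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
for statute, avg in implied_tc_avg.items():
    print(f"§{statute}: implied TC average ≈ {avg:.1f}%")
# Every statute implies the same ~40.0% baseline, consistent with a single
# Tech Center average estimate behind the chart.
```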

Office Action

§112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites a method for predicting “a behavior of a human that follows a Markov decision process”, wherein the method comprises using a high-level cognitive state decoder to provide a series of calculated values corresponding to a series of high-level cognitive states, wherein the high-level state decoder has been trained to classify a series of high-level cognitive states of a human, and using a universal cognitive state decoder that has been trained “to predict the behavior of the human that follows the Markov decision process”. While the specification provides examples of what is to be considered a high-level cognitive state, it is not clear, from either the claim or the specification, what is to be considered a “behavior of a human that follows a Markov decision process”.

Further regarding the specification, it is noted that the specification contains sentences and phrases that are difficult to understand. This may be a result of translating the original Korean foreign priority document into English. As such, the specification fails to provide an adequate written description of the invention such that it would allow any person skilled in the art to make and use the invention.

The originally filed specification also fails to provide support for using a high-level cognitive state decoder to provide a series of calculated values corresponding to a series of classified high-level cognitive states, wherein the series of classified values are used as input to another cognitive state decoder. The specification teaches inputting a single calculated value into another cognitive state decoder. The same lack of adequate written description applies to claim 11.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites a method for predicting “a behavior of a human that follows a Markov decision process”, wherein the method comprises using a high-level cognitive state decoder trained to classify a series of high-level cognitive states and to provide a series of calculated values corresponding to the series of high-level cognitive states, and using a universal cognitive state decoder that has been trained “to predict the behavior of the human that follows the Markov decision process”. While the specification provides examples of what is to be considered a high-level cognitive state, it is not clear, from either the claim or the specification, what is to be considered a “behavior of a human that follows a Markov decision process”. What type of behavior is the universal cognitive state decoder trained to predict? The specification mentions inferring “a decision-making strategy”, inferring “a fundamental behavior strategy (e.g., learning strategy) of a human behavior”, and inferring “even a behavior in a corresponding goal and a behavior strategy, that is, a source for the behavior”. The specification does not make clear what it means to “infer a decision-making strategy”, “infer a fundamental behavior strategy”, or “infer a source for a behavior”, nor does it adequately describe how to make those inferences.
The description of Figure 3 attempts to explain how to “estimate a behavior strategy and decision making”, but it does not make clear what exactly it means to “estimate a behavior strategy” or “estimate decision making”. What is being estimated? How is step 350 in Figure 3 performed (no details are given as to how the step is performed)? What information does the output of step 350 in Figure 3 provide (what type of “human behavior pattern and behavior strategy” results are capable of being determined)?

Furthermore, paragraph [0075] of the filed specification appears to equate a cognitive state to a behavior strategy. Paragraph [0096] defines a cognitive state as a state in which a behavior strategy has been determined as a learning strategy, and states that a cognitive state “is described as an example of a learning strategy”. Paragraph [00156] defines “vigilance” and “non-vigilance” as cognitive states. This further adds to the confusion regarding the claimed invention. Is the claimed cognitive state one of the cognitive states recited in paragraph [0005] of the filed specification, or is it to be interpreted as defined in paragraphs [0075] and [0096], or as in paragraph [00156]?

Paragraph [00117] states that “… a human’s cognitive state is a signal present in the baseline of a behavior, and has been known as an element that produces the complexity of a behavior pattern.” This phrase is not understood and adds confusion to what is to be considered a cognitive state and what is to be considered a behavior that follows a Markov decision process. Paragraph [00127] mentions that a behavior prediction decoder may predict what decision making a user will take (e.g., which button the user will press), but it is unclear how or if the behavior prediction decoder relates to claim 1. Is the “behavior that follows a Markov decision process” that is predicted an action that will be performed by the user?

Figures 13-16, and their descriptions, appear to be the most relevant to the invention recited in claim 1. A reading of the descriptions does not provide clarity as to what type of high-level cognitive states are determined using the high-level cognitive state decoder, and what it means to predict a behavior that follows a Markov decision process. What is an example of a prediction determined in step 1330 of Figure 13? What is an example of a prediction determined in step 1540 of Figure 15? The same indefiniteness issue is present in claim 11.

Regarding claims 2, 8, and 18, the phrase “the calculated value” lacks proper antecedent basis as claims 1 and 11 were amended to recite that a series of calculated values are provided. Regarding claims 2, 3, 12, and 15, the phrase “the task-independent core cognitive state” lacks proper antecedent basis as claims 1 and 11 were amended to recite that multiple task-independent core cognitive states are represented. Regarding claims 8 and 18, the phrase “the input value” lacks proper antecedent basis as claims 1 and 11 were amended to remove the phrase “input value”. Instead, claims 1 and 11 recite “an input”, wherein the input comprises a plurality of values. Regarding claims 16 and 17, it is unclear if the “a core cognitive state” is the same as or different than the “high-level cognitive states”/“task-independent core cognitive state” recited in claim 11.

Without a clear understanding of what the cognitive state decoder is used to classify and what type of prediction is provided by using the universal cognitive state decoder, a proper prior art search of the claims is unable to be performed. Claims not explicitly rejected above are rejected due to their dependence on a rejected base claim.

Response to Arguments

Applicant's arguments filed 27 June 2025 have been fully considered and are not persuasive. The Examiner previously noted that Figures 13-16 appear to be the most relevant to the invention recited in claim 1.
The Examiner maintains the following, previously discussed arguments: Having read the descriptions of the Figures again, it remains unclear what is to be considered a “behavior that follows a Markov decision process”. Having read the entire specification, a person of ordinary skill in the art would not understand what function the high-level core cognitive function decoder performs, and what function the universal cognitive state decoder performs.

Regarding the rejection of the claims under 35 U.S.C. 112(a), Applicant points to sections [0072], [0088], and [0089] as providing express support in the specification for the use of Markov decision processes as the distinguishing factor between ordinary behaviors and those predicted by the claimed invention. As an initial matter, the Examiner would like to note that it remains unclear as to what the difference between “ordinary behaviors” and “those predicted by the claimed invention” is. Section [0072] mentions a two-stage Markov decision-making task, but makes no mention of predicting a behavior of a human that follows a Markov decision process or training a decoder to perform such a prediction. Section [0088] defines a Markov decision process problem as one in which “the expectation of a reward uses a sample obtained from experiences in which an agent interacts with an environment”; it fails to describe predicting a behavior of a human that follows a Markov decision process or training a decoder to perform such a prediction. Section [0089] describes an input/output setting of a Markov decision process; it fails to describe predicting a behavior of a human that follows a Markov decision process or training a decoder to perform such a prediction.

Applicant then relies on sections [0090-0095] as providing support for explaining how to represent and assess “complex behaviors”, such as reinforced learning behaviors, that follow a Markov decision process, sections [0131-0134] for providing an example of how the universal decoder can be trained and implemented, and sections [0138-0148] as providing further examples of how to train/program the universal decoder using Markov decision tasks.

Sections [0090-0095] describe details of a Markov decision process, with section [0094] defining a “cognitive state” as an example of a “learning strategy”, and section [0095] stating that a human’s learning strategy is represented as reinforcement learning. These sections fail to explain how a behavior that follows a Markov decision process is predicted, or training a decoder to perform such a prediction. Section [0131] states that the universal cognitive state decoder may be used as a human behavior prediction and behavior aid system; it not only fails to detail what actual prediction is made, it also fails to describe what is meant by “a human behavior that follows a Markov decision process” and how a decoder is trained to predict a human behavior that follows a Markov decision process. Sections [0132-0133] also do not disclose what type of prediction is output by the universal cognitive state decoder. Section [0134] mentions a Markov decision-making task unit, but fails to describe predicting a human behavior that follows the Markov decision-making task or training a decoder to perform such a prediction. Section [0134] merely states that both the high-level cognitive state decoder and the universal cognitive state decoder may include a behavior strategy prediction unit and a decision-making prediction unit.

Section [0135] states that the Markov decision-making task unit may design a Markov decision-making task for the extraction of a task-independent core cognitive state, which ties the Markov decision-making task unit to the high-level cognitive state decoder, not the universal cognitive state decoder. Section [0139] states that the universal cognitive state decoder may predict a complex behavior according to a reinforcement learning strategy. This section provides support for using a reinforcement learning strategy (which may include a Markov decision process algorithm) to predict a complex behavior, but it fails to provide support for predicting a behavior of a human that follows a Markov decision process or training a decoder to perform such a prediction. It also fails to adequately describe what the output of the universal cognitive state decoder is. The remainder of the sections from [0140-0148] also fail to disclose/describe predicting a behavior of a human that follows a Markov decision process, or what the actual output of the universal cognitive state decoder is.

Applicant asserts that the specification as filed clearly describes a system for predicting human behavior using a decoder that has been trained using Markov decision experiments “to recognize brain waves representative of Markov decision-making as opposed to other behaviors”. The Examiner respectfully disagrees. There is no description in the specification of recognizing brain waves representative of Markov decision-making as opposed to other behaviors. There is no description in the specification differentiating behaviors predicted by the universal cognitive state decoder from “other behaviors”, or any description of what is to be considered “behaviors that follow a Markov decision process” and what is to be considered a behavior that does not follow a Markov decision process.
While the specification may provide support for training a prediction decoder using a Markov decision process, it does not provide support for predicting a behavior of a human that follows a Markov decision process.

Regarding the Affidavit provided with the Remarks, the declarations presented in the Affidavit are not persuasive. With respect to declaration 3, without being provided the 1998 publication in Science referred to in the declaration, the Examiner cannot make a determination as to whether “a behavior of a human that follows the Markov decision process” has been a foundational concept in the field of decisions neuroscience. Furthermore, the inclusion of a hyperlink without pointing to particular elements of said hyperlink is not convincing. The hyperlink takes one to a webpage with a number of articles listed; Applicant has not pointed out which of these articles shows that “a behavior of a human that follows the Markov decision process” has been a foundational concept in the field of decisions neuroscience.

With respect to declarations 4 and 5, references that discuss reinforcement learning and Markov decision processes separate from “behaviors of a human that follow a Markov decision process” do not provide support for the assertion that “a behavior of a human that follows the Markov decision process has been a foundational concept in the field of decisions neuroscience”. The Examiner acknowledges that reinforcement learning is well known in the art, that Markov decision processes are well known in the art, and that training a neural decoder/classifier using reinforcement algorithms/Markov decision process algorithms is well known in the art. What is not well known in the art is what it means to train a decoder to “predict a behavior of a human that follows a Markov decision process”.

The specification does not adequately describe either (1) how a decoder is trained to specifically predict a behavior of a human that follows a Markov decision process, or (2) what the output of the universal cognitive state decoder is.

Regarding both the rejection under 35 U.S.C. 112(a) and the rejections under 35 U.S.C. 112(b), as noted above, and in the previously mailed out Non-final Rejection of 09 April 2025, the specification does not make clear what the actual output of the universal cognitive state decoder is. Applicant has not responded to the 35 U.S.C. 112(b) rejection regarding what the output of the universal cognitive state decoder is, nor has Applicant provided an example of what type of behavior is predicted or what type of output is provided by the universal cognitive state decoder. It is further noted that with the exception of the “behavior of a human that follows a Markov decision process” language, Applicant has not responded to any of the indefiniteness issues discussed in the previous Office action (all of which are reiterated above in paragraph 5).

Applicant argues that “… a patent search directed to ‘human behavior’ and ‘markov decision process’ reveals nearly 100 patent publications that discuss and/or model human behaviors that follow a Markov decision process. Thus, this term has a clear meaning in the art.” The Examiner respectfully disagrees. Patent applications that mention “human behavior” and “Markov decision process”/“reinforcement learning” do not necessarily describe human behaviors that follow a Markov decision process, or training a decoder to predict a behavior of a human that follows a Markov decision process. As noted above, while it is well known in the art to train a decoder/classifier to make predictions using a Markov decision process algorithm, it is not well known in the art to train a decoder/classifier to “predict a human behavior that follows a Markov decision process”.
In light of the specification, drawings included, it remains unclear what the universal cognitive state decoder predicts and outputs. The metes and bounds of “a behavior of a human that follows the Markov decision process” are not made clear by either the specification or the claims themselves.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Each reference cited in the PTO-892 mailed out 27 February 2024 discusses making a prediction by inputting a brain signal into a cognitive state decoder. Kim et al. (Model-based BCI: A novel brain-computer interface… – previously cited) discloses a method and system similar to the claimed invention.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ETSUB D BERHANU whose telephone number is (571) 270-5410. The examiner can normally be reached Mon-Fri 9:00am-5:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Robertson, can be reached at (571) 272-5001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ETSUB D BERHANU/
Primary Examiner, Art Unit 3791

Prosecution Timeline

Nov 17, 2020 · Application Filed
Dec 16, 2022 · Non-Final Rejection — §112
Mar 16, 2023 · Response Filed
Jun 02, 2023 · Final Rejection — §112
Jul 19, 2023 · Applicant Interview (Telephonic)
Jul 19, 2023 · Examiner Interview Summary
Aug 18, 2023 · Response after Non-Final Action
Sep 14, 2023 · Response after Non-Final Action
Oct 06, 2023 · Request for Continued Examination
Oct 11, 2023 · Response after Non-Final Action
Feb 22, 2024 · Non-Final Rejection — §112
May 15, 2024 · Response Filed
Sep 01, 2024 · Final Rejection — §112
Nov 06, 2024 · Response after Non-Final Action
Nov 22, 2024 · Examiner Interview (Telephonic)
Nov 22, 2024 · Response after Non-Final Action
Dec 02, 2024 · Request for Continued Examination
Dec 03, 2024 · Response after Non-Final Action
Apr 04, 2025 · Non-Final Rejection — §112
Jun 27, 2025 · Response after Non-Final Action
Jun 27, 2025 · Response Filed
Aug 19, 2025 · Final Rejection — §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593163 · METHOD AND SYSTEM FOR COLLECTING AND PROCESSING BIOELECTRICAL AND AUDIO SIGNALS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12582357 · Closed System Flexible Vascular Access Device Sensor Deployment System
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12575742 · Non-Invasive Venous Waveform Analysis for Evaluating a Subject
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12569269 · BENDABLE CUTTING APPARATUS FOR MYOCARDIUM AND SYSTEM WITH THE SAME
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12558017 · METHODS FOR MODELING NEUROLOGICAL DEVELOPMENT AND DIAGNOSING A NEUROLOGICAL IMPAIRMENT OF A PATIENT
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 66% (90% with interview, +24.5%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 787 resolved cases by this examiner. Grant probability derived from career allow rate.
