DETAILED ACTION
Applicant’s response filed 02/06/2025 has been fully considered. The following rejections and/or objections are either reiterated or newly applied.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/06/2025 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 2-3, 7, 9, 11-12, 16, 18 and 20 are cancelled by Applicant.
Claim 21 is newly added.
Claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 are currently pending and are herein under examination.
Claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 are rejected.
Priority
The instant application claims benefit of priority to Korean Application No. KR10-2020-0028772 filed 03/09/2020 and Korean Application No. KR10-2020-0126999 filed 09/29/2020. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. The claim to the benefit of priority for claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 is acknowledged. As such, the effective filing date of claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 is 03/09/2020.
Information Disclosure Statement
The information disclosure statement (IDS) filed complies with the provisions of 37 CFR 1.97 and has been considered in full. A signed copy of the list of references cited in this IDS is included with this Office Action.
Claim Objections
The objection to claim 4 is withdrawn in view of Applicant’s amendments.
Withdrawn Rejections
35 USC 112(b)
The rejection of claims 10, 13-15, 17 and 19 under 35 USC 112(b) is withdrawn in view of Applicant’s claim amendments.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Any newly recited portions herein are necessitated by claim amendment.
Step 2A, Prong 1:
In accordance with MPEP § 2106, claims found to recite statutory subject matter (Step 1: YES) are then analyzed to determine if the claims recite any concepts that equate to an abstract idea, law of nature, or natural phenomenon (Step 2A, Prong 1). In the instant application, claims 1, 4-6, 8 and 21 recite a method, claims 10, 13-15 and 17 recite a system, and claim 19 recites a computer program. The instant claims recite the following limitations that equate to one or more categories of judicial exception:
Claims 1 and 19 recite “fitting a first level artificial-intelligence computational model based on processing data from a human for a task, the processing data including at least one of behavioral data or a brain signal generated while the human processes the task; determining that the first level artificial-intelligence computational model is not overfitted based on a comparison result between a behavior profile of the human and a behavior profile of the first level artificial-intelligence computational model; fitting a second level artificial-intelligence computational model based on processing data of the first level artificial-intelligence computational model for the task; and determining the second level artificial-intelligence computational model as a transplant model for the human's intelligence through profiling for the first level artificial-intelligence computational model and the second level artificial-intelligence computational model, wherein the determining of the transplant model comprises: detecting a correlation between the first level artificial-intelligence computational model and the second level artificial-intelligence computational model, and determining whether to determine the second level artificial-intelligence computational model as the transplant model based on the correlation, wherein the fitting of the first level artificial-intelligence computational model comprises: learning the first level artificial-intelligence computational model based on the processing data from the human, and detecting at least one parameter of a state-transition uncertainty or a state-space complexity from the first level artificial-intelligence computational model, wherein the fitting of the second level artificial-intelligence computational model comprises: learning the second level artificial-intelligence computational model based on the processing data of the first level artificial-intelligence computational model, and detecting at least one parameter of 
the state-transition uncertainty or the state-space complexity from the second level artificial-intelligence computational model, and wherein the detecting of the correlation comprises: detecting a parameter correlation by comparing the detected parameter of the first level artificial-intelligence computational model and the detected parameter of the second level artificial-intelligence computational model, detecting a profile correlation by comparing the behavior profile of the first level artificial-intelligence computational model and a behavior profile of the second level artificial-intelligence computational model; and determining the correlation based on both of the parameter correlation and the profile correlation.”
Claims 4 and 13 recite “further comprising theoretically designing at least one environmental factor, wherein the fitting of the first level artificial-intelligence computational model comprises further fitting the first level artificial-intelligence computational model from the processing data from the human based on the at least one environmental factor, and wherein the fitting of the second level artificial-intelligence computational model comprises further fitting the second level artificial-intelligence computational model from the processing data of the first level artificial-intelligence computational model based on the at least one environmental factor.”
Claims 5 and 14 recite “wherein the fitting of the first level artificial-intelligence computational model comprises learning the first level artificial- intelligence computational model based on the processing data from the human, thereby at least any one of the behavior profile or at least one parameter of the first level artificial-intelligence computational model is detected based on the at least one environmental factor by the fitting.”
Claims 6 and 15 recite “wherein the fitting the second level artificial-intelligence computational model comprises learning the second level artificial-intelligence computational model based on the processing data of the first level artificial- intelligence computational model, thereby at least any one of the behavior profile or at least one parameter of the second level artificial-intelligence computational model is detected based on the at least one environmental factor by the fitting.”
Claims 8 and 17 recite “wherein the determining whether to determine the second level artificial-intelligence computational model as the transplant model comprises determining the second level artificial-intelligence computational model as the transplant model when the correlation is greater than a present threshold value.”
Claim 10 recites “fit a first level artificial-intelligence computational model based on processing data from a human for a task, the processing data including at least one of behavioral data or a brain signal generated while the human processes the task, determine that the first level artificial-intelligence computational model is not overfitted based on a comparison result between a behavior profile of the human and a behavior profile of the first level artificial-intelligence computational model, fit a second level artificial-intelligence computational model based on processing data of the first level artificial-intelligence computational model for the task, and determine the second level artificial-intelligence computational model as a transplant model for the human's intelligence through profiling for the first level artificial-intelligence computational model and the second level artificial-intelligence computational model, and determining whether to determine the second level artificial-intelligence computational model as the transplant model based on the correlation, to fit the first level artificial-intelligence computational model by learning the first level artificial-intelligence computational model based on the processing data from the human, and detecting at least one parameter of a state-transition uncertainty or a state-space complexity from the first level artificial-intelligence computational model, . . . fit the second level artificial-intelligence computational model by learning the second level artificial-intelligence computational model based on the processing data of the first level artificial-intelligence computational model, and detecting at least one parameter of the state-transition uncertainty or the state-space complexity from the second level artificial-intelligence computational model, and . . . 
detect the correlation by detecting a parameter correlation by comparing the detected parameter of the first level artificial-intelligence computational model and the detected parameter of the second level artificial-intelligence computational model, detecting a profile correlation by comparing the behavior profile of the first level artificial- intelligence computational model and a behavior profile of the second level artificial-intelligence computational model, and determining the correlation based on both of the parameter correlation and the profile correlation.”
Claim 21 recites “wherein the fitting the first level artificial-intelligence computational model based on processing the data from the human for the task includes processing data that includes the brain signal.”
Limitations reciting a mental process.
The above cited limitations in claims 1, 4-6, 8, 10, 13-15, 17 and 19 are recited at such a high level of generality that they equate to a mental process because they are similar to the concepts of collecting information, analyzing it, and displaying certain results of the collection and analysis in Electric Power Group, LLC v. Alstom S.A. (830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016)), which the courts have identified as concepts that can be practically performed in the human mind. Specifically, the broadest reasonable interpretation (BRI) of fitting a first level unspecified AI model on unspecified behavioral data includes fitting a logistic regression with numerical values that represent behavioral data, such as a number representing the measured intensity of a force generated by a touch (pg. 5, para. 5 of specification). A human could practically fit a logistic regression model to such numerical values using pen and paper. The BRI of fitting a second level unspecified AI model using the output of the first level unspecified AI model includes taking the output of a first logistic regression and using it as a variable in a second logistic regression, wherein the variables are represented by numerical values. As stated above in this paragraph, a human could practically perform the calculations of a logistic regression using pen and paper. The BRI of learning a first level unspecified AI model and learning a second level AI model based on processing data includes training a logistic regression on numerical values, especially because the processing data may be numerical values.
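For illustration only, the two-level procedure characterized above can be sketched in a few lines of code. This sketch is not part of the record, the claims, or the cited references; the logistic regression, the synthetic numeric data, and all parameter choices are hypothetical stand-ins for one example falling within the BRI discussed above.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=500):
    # Plain gradient descent on the logistic loss. Each update reduces to
    # multiplications and additions, i.e., the kind of arithmetic the
    # rejection characterizes as practically performable by hand.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 2))            # stand-in numeric "behavioral data"
y = (raw[:, 0] + raw[:, 1] > 0).astype(float)

X1 = np.c_[np.ones(len(raw)), raw]         # first level: fit on human-derived numbers
w1 = fit_logistic(X1, y)
p1 = 1.0 / (1.0 + np.exp(-X1 @ w1))        # numeric output of the first level model

X2 = np.c_[np.ones(len(p1)), p1]           # second level: fit on the first model's output
w2 = fit_logistic(X2, y)
p2 = 1.0 / (1.0 + np.exp(-X2 @ w2))
```

Each step of `fit_logistic` is elementary arithmetic, and the second level fit consumes only the numeric output of the first, mirroring the two-level interpretation set forth above.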
Furthermore, the following limitations under their BRI also equate to a mental process: determining that the first level AI model is not overfitted based on a comparison result, determining the second level AI model as a transplant model, detecting a correlation between the first and second level AI models, determining whether to determine the second level AI model as the transplant model, detecting at least one parameter of a state-transition uncertainty or state-space complexity from the first and second level AI models, detecting a parameter correlation by comparing the detected parameters, detecting a profile correlation by comparing the behavior profile of the first and second AI models, and determining the correlation based on the parameter correlation and the profile correlation. These limitations require that a human make comparisons and determinations, which are similar to the enumerated mental processes that can be performed in the human mind such as observation, evaluation, judgment and opinion (MPEP 2106.04(a)(2)).
Limitations reciting a mathematical concept.
The above recited limitations in claims 1, 4-6, 10, 13-15, 19 and 21 of fitting, learning, and profiling equate to a mathematical concept because they are similar to the concepts of organizing and manipulating information through mathematical correlations in Digitech Image Techs., LLC v. Electronics for Imaging, Inc. (758 F.3d 1344, 111 USPQ2d 1717 (Fed. Cir. 2014)), which the courts have identified as mathematical concepts. Specifically, the BRI of fitting and learning a model includes performing calculations with the model, while profiling includes comparing the values of different models, which can be done with a simple linear regression function.
As such, claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 recite an abstract idea (Step 2A, Prong 1: Yes).
Step 2A, Prong 2:
Claims found to recite a judicial exception under Step 2A, Prong 1 are then further analyzed to determine if the claims as a whole integrate the recited judicial exception into a practical application or not (Step 2A, Prong 2). The judicial exception is not integrated into a practical application because the claims do not recite additional elements that reflect an improvement to a computer, technology, or technical field (MPEP § 2106.04(d)(1) and 2106.05(a)), require a particular treatment or prophylaxis for a disease or medical condition (MPEP § 2106.04(d)(2)), implement the recited judicial exception with a particular machine that is integral to the claim (MPEP § 2106.05(b)), effect a transformation or reduction of a particular article to a different state or thing (MPEP § 2106.05(c)), nor provide some other meaningful limitation (MPEP § 2106.05(e)). Rather, the claims include limitations that equate to an equivalent of the words “apply it” and/or to instructions to implement an abstract idea on a computer (MPEP § 2106.05(f)) and to insignificant extra-solution activity (MPEP § 2106.05(g)). The instant claims recite the following additional elements:
Claim 1 recites “an electronic device”
Claim 10 recites “a memory; and a processor connected to the memory and configured to execute at least one instruction stored in the memory, wherein the processor is configured to: wherein the processor is configured to . . . ”
Claims 13-15 and 17 recite “wherein the processor is configured to”
Claim 19 recites “A computer program coupled to a computing device and stored in a recording medium readable by the computing device, the computer program executes:”
Claim 21 recites “receiving, at the electronic device, the brain signal generated while the human processes the task,”
Regarding the above cited limitations in claims 1, 10, 13-15, 17, 19 and 21, there are no limitations requiring that the electronic device, memory, processor, computing device, or recording medium be anything other than a generic computing system. As such, these limitations equate to mere instructions to implement the abstract idea on a generic computer, which the courts have established does not render an abstract idea eligible in Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984.
Regarding the above cited limitation in claim 21, this limitation equates to insignificant, extra-solution activity because it merely gathers data to perform the recited judicial exception of fitting an AI model.
As such, claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 are directed to an abstract idea (Step 2A, Prong 2: No).
Step 2B:
Claims found to be directed to a judicial exception are then further evaluated to determine if the claims recite an inventive concept that provides significantly more than the judicial exception itself (Step 2B). These claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because these claims recite additional elements that equate to instructions to apply the recited exception in a generic way and/or in a generic computing environment (MPEP § 2106.05(f)) and to well-understood, routine and conventional (WURC) limitations (MPEP § 2106.05(d)). The instant claims recite the following additional elements:
Claim 1 recites “an electronic device”.
Claim 10 recites “a memory; and a processor connected to the memory and configured to execute at least one instruction stored in the memory, wherein the processor is configured to: wherein the processor is configured to . . . ”
Claims 13-15 and 17 recite “wherein the processor is configured to”
Claim 19 recites “A computer program coupled to a computing device and stored in a recording medium readable by the computing device, the computer program executes:”
Claim 21 recites “receiving, at the electronic device, the brain signal generated while the human processes the task,”
Regarding the above cited limitations in claims 1, 10, 13-15, 17, 19 and 21 of the electronic device, the memory, the processor, the computing device, and the computer program coupled to a computing device and stored in a recording medium, these limitations equate to instructions to implement an abstract idea on a generic computing system, which the courts have established does not provide an inventive concept in Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). Additionally, storing code in memory and in a recording medium as stated in claims 10 and 19 equates to storing information in memory, which the courts have established as a WURC function of a generic computer in Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
Regarding the above cited limitation in claim 21, this limitation equates to receiving/transmitting data over a network, which the courts have established as WURC limitation of a generic computer in buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014).
When these additional elements are considered individually and in combination, they do not provide an inventive concept because they equate to either WURC functions/components of a generic computer and/or generic computing system or instructions to implement an abstract idea on a computer. Therefore, these additional elements do not transform the claimed judicial exception into a patent-eligible application of the judicial exception and do not amount to significantly more than the judicial exception itself (Step 2B: No).
As such, claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 are not patent eligible.
Response to Arguments under 35 USC 101
Applicant's arguments filed 02/06/2025 have been fully considered, but they are only partially persuasive.
Applicant argues that the claims do not recite mental processes. Specifically, Applicant asserts that a human could not fit a first level AI model based on processing data derived from a human performing a task, and similarly that a human could not fit a second level AI model using the processing data of the first level AI model (pg. 9, last para. of Applicant’s remarks). Applicant’s arguments are not persuasive for the following reasons:
As discussed in the rejection above, there are no limitations in the claims that prevent a human from performing manual calculations on pen and paper, especially when the BRI of processing data (such as behavioral data) could be any type of number, wherein a human could practically input the numbers into an unspecified AI model (such as a logistic regression). This procedure could be performed again for the second level AI model as it includes inputting the numerical output of the first level AI model into the second level AI model to perform the calculations of a logistic regression.
Moreover, even if claim 1 did not contain a mental process, the claims would still recite a mathematical concept, as discussed in the rejection above. This is because the broadest reasonable interpretation of fitting and learning unspecified AI models includes performing the calculations of a logistic regression as well as updating the parameters of the logistic regression based on a numerical output.
Applicant argues for claim 21 that a human could not practically receive brain signals and then fit an AI model based on the brain signals (pg. 10, para. 1 of Applicant’s remarks). Applicant’s argument is persuasive because a human could not practically fit an AI model using brain signals. However, claim 21 is still not patent-eligible because, under its broadest reasonable interpretation, it still recites mathematical concepts, as discussed in the rejection above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 4-6, 8, 10, 13-15, 17, 19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Glascher et al. (“Glascher”; Neuron 66, no. 4 (2010): 585-595; previously cited as ref. U on PTO892 mailed 05/16/2024) in view of Kim et al. (“Kim”; BioRxiv (2018): 393983; previously cited as ref. U on PTO892 mailed 11/07/2024).
This rejection is maintained from the previous Office Action mailed 11/07/2024. Any newly recited portions herein are necessitated by claim amendment.
The bold and italicized text below sets forth the limitations of the instant claims, and the italicized text serves to map the prior art onto the instant claims.
Claims 1, 10 and 19:
a memory; and a processor connected to the memory and configured to execute at least one instruction stored in the memory,
wherein the processor is configured to:
A computer program coupled to a computing device and stored in a recording medium readable by the computing device, the computer program executes:
Glascher uses a probabilistic Markov decision task to investigate the neural signatures of reward prediction errors (RPE) and state prediction errors (SPE) associated with model-free and model-based learning (pg. 590, col. 2, para. 2).
Glascher discloses using image processing software and statistical analyses on SPM5 software (pg. 593, col. 1, para. 3), thus indicating that their method is computer implemented, wherein computers inherently contain memory and processors.
fitting/fit a first level artificial-intelligence computational model based on processing data from a human for a task, the processing data including at least one of behavioral data or a brain signal generated while the human processes the task;
Glascher shows their theoretical model in Figure 2. Glascher discloses a model-free SARSA learner that computes an RPE using cached values from a previous trial to update state-action values, and a FORWARD learner that learns a model of the state space by means of a SPE, which is then used to update the state transition matrix and also produces a state-action value (Figure 2). Each model’s free parameters were fitted to the behavioral data (pg. 593, col. 2, last two para.). The behavioral data was derived from the Markov decision task performed by subjects (pg. 592, col. 2, para. 5).
Glascher states that each model’s free parameters were fitted to behavioral data (pg. 593, col. 2, last two para.), wherein the behavioral data was derived from the Markov decision task performed by subjects (pg. 592, col. 2, para. 5).
Determining/determine that the first level artificial-intelligence computational model is not overfitted based on a comparison result between a behavior profile of the human and a behavior profile of the first level artificial-intelligence computational model;
Glascher shows in Figure S2(D) a comparison of the predicted action probabilities FORWARD and SARSA models to the actual action probabilities derived from the behavioral choice data, and states that there is a close correspondence between model predictions and the actual data thus indicating that the models fit the data well (i.e., are not overfitted).
Fitting/fit a second level artificial-intelligence computational model based on processing data of the first level artificial-intelligence computational model for the task; and
Glascher states that the HYBRID learner computes a combined action value as an exponentially weighted sum of the action values for the SARSA and FORWARD learner (Figure 2). Each model’s free parameters were fitted to the behavioral data (pg. 593, col. 2, last two para.).
determining/determine the second level artificial-intelligence computational model as a transplant model for the human's intelligence through profiling for the first level artificial-intelligence computational model and the second level artificial-intelligence computational model,
Glascher states that the HYBRID learner provided a significantly more accurate explanation of behavior than did the SARSA or the FORWARD learner (pg. 588, col. 1, para. 2). Figure S2 shows the expected values and estimated state transition probabilities from all models.
wherein the determining of the transplant model comprises:
wherein the processor is configured to determine the transplant model by:
detecting a correlation between the first level artificial-intelligence computational model and the second level artificial-intelligence computational model, and determining whether to determine the second level artificial-intelligence computational model as the transplant model based on the correlation,
Glascher states that the HYBRID learner provided a significantly more accurate explanation of behavior than did the SARSA or the FORWARD learner (pg. 588, col. 1, para. 2). Figure S2 shows the expected values and estimated state transition probabilities from all models. Table 1 also compares the negative model likelihoods and Akaike’s Information Criterion between all the models.
wherein the fitting of the first level artificial-intelligence computational model comprises:
wherein the processor is configured to fit the first level artificial-intelligence model by:
learning the first level artificial-intelligence computational model based on the processing data from the human, and
Glascher also states that the models’ free parameters were fitted to the behavioral data (pg. 593, col. 2, last para.).
detecting at least one parameter of a state-transition uncertainty or a state-space complexity from the first level artificial-intelligence computational model,
Glascher states that the FORWARD model learns a model of the state space T(s, a, s′) by means of a SPE, which is then used to update the state transition matrix (Figure 2 caption), which represents transition probabilities (state-transition uncertainty) (pg. 593, col. 1, last para.).
wherein the fitting of the second level artificial-intelligence computational model comprises:
wherein the processor is configured to fit the second level artificial-intelligence model by:
learning the second level artificial-intelligence computational model based on the processing data of the first level artificial-intelligence computational model, and
Glascher states that the HYBRID learner computes a combined action value as an exponentially weighted sum of the action values for the SARSA and FORWARD learner (Figure 2).
detecting at least one parameter of the state-transition uncertainty or the state-space complexity from the second level artificial-intelligence computational model, and
Glascher discloses free parameters for the HYBRID learner, but Glascher does not detect a state-transition uncertainty or state-space complexity parameter from the HYBRID learner (pg. 593, col. 2, last para.).
Kim evaluates the role of task complexity (state-space complexity) alongside state-space uncertainty in the arbitration process between model-free (MF) and model-based (MB) reinforcement learning (RL) (abstract). Kim “hypothesized that an arbitration model which is sensitive to both the complexity of the state-space, and the degree of uncertainty in the state-space transitions would provide a better account of behavioral and fMRI data than would an arbitration model that was sensitive only to state-space uncertainty” (pg. 5, para. 2).
Kim contains a computational model of arbitration control that incorporates uncertainty and complexity. Inputs into the models were state, reward, and perceived task complexity (pg. 7, para. 2). Arbitration control selected a preferred model (i.e., MB or MF RL) by calculating a model choice probability (Pmb) (pg. 7, para. 2; Figure 2). Pmb is a function of prediction uncertainty (state-transition uncertainty) and task complexity (state-space complexity) (pg. 7, para. 2). The prediction uncertainty refers to estimation uncertainty about state-action-state transitions and rewards, wherein the prediction uncertainty is computed based on SPE and RPE (pg. 7, para. 2). The computational model functions by: “first, in response to the agent’s action on each trial, the environment provides the model with the state-action-state transition, token values, and task complexity. These observations are then used to compute the transition rates (MB → MF and MF → MB), which subsequently determines the model choice probability PMB. . . . It is noted that we use this framework to formally implement various hypotheses about the effect of uncertainty and complexity on RL. For instance, the configuration of the model that best accounts for subjects’ choice behavior would specify the way people combine MB and MF RL to tailor their behavior to account for the degree of uncertainty and complexity of the environment” (pg. 7, last para.).
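As a purely illustrative aside (this is not Kim’s fitted model, whose arbitration is implemented as a dynamic transition system between MB and MF controllers), the qualitative idea of a model choice probability Pmb that responds to both prediction uncertainty and task complexity can be sketched as a simple logistic function; the functional form, weights, offset, and sign conventions below are arbitrary assumptions:

```python
import math

def p_mb(uncertainty, complexity, w_u=1.0, w_c=1.0):
    # Illustrative only: a logistic arbitration signal in (0, 1) that
    # decreases as state-transition (prediction) uncertainty and
    # state-space (task) complexity increase. The weights and the offset
    # of 2.0 are arbitrary stand-ins, not parameters from Kim.
    return 1.0 / (1.0 + math.exp(w_u * uncertainty + w_c * complexity - 2.0))

low_demand = p_mb(0.1, 0.1)    # low uncertainty/complexity -> larger Pmb
high_demand = p_mb(2.0, 2.0)   # high uncertainty/complexity -> smaller Pmb
```

The only point of the sketch is that a single scalar arbitration signal can be made a joint function of the two quantities Kim manipulates, which is the hypothesis the reference tests against behavioral and fMRI data.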
wherein the detecting of the correlation comprises:
wherein the processor is configured to detect the correlation by:
detecting a parameter correlation by comparing the detected parameter of the first level artificial-intelligence computational model and the detected parameter of the second level artificial-intelligence computational model,
Glascher discloses comparing model parameters between the models in Table 1. However, Glascher only discloses determining a state-transition uncertainty and state-space complexity parameter (the detected parameter) for the FORWARD learner (first level), not for the HYBRID learner (second level).
Kim discovered an effect of uncertainty and complexity on weighting between MF and MB control from arbitration (detecting/detect a parameter correlation as the correlation by comparing the detected parameter of the first level model and the detected parameter of the second level model) (Figure 4A; 2-way repeated measures ANOVA; p<1e-4 for the main effect of both state-transition uncertainty and task complexity; p=0.039 for the interaction effect).
detecting a profile correlation by comparing the behavior profile of the first level artificial-intelligence computational model and a behavior profile of the second level artificial-intelligence computational model; and
Glascher shows in Figure S2(D) a comparison of the model-predicted action probabilities of the SARSA and FORWARD learners (first level) against the HYBRID learner (second level). Glascher also shows a comparison of negative model likelihoods and Akaike's Information Criterion for the actual experiment and random trial sequences of the models in Table 1, wherein these values demonstrate goodness of fit for each model (pg. 593, col. 2, last para.).
determining the correlation based on both of the parameter correlation and the profile correlation.
As discussed above, when Glascher and Kim are taken together, the models in Glascher are correlated to one another based on a comparison both of parameters (state-transition uncertainty or state-space complexity) and of model-predicted action probabilities (profile correlation). The combination of Glascher and Kim demonstrates how models are compared to determine which one provides better results, which is a common practice with machine learning technologies.
Prima facie case for obviousness:
An invention would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date of the invention if some motivation, teaching, or suggestion in the prior art would have led that person to combine the prior art teachings to arrive at the claimed invention.
Glascher uses a probabilistic Markov decision task to investigate the neural signatures of RPEs and SPEs associated with MF and MB RL (pg. 590, col. 2, para. 2), and then uses a HYBRID learner that chooses actions by forming a weighted average of the action valuations from the SARSA and FORWARD learners (pg. 587, col. 2, last para.). Kim discloses investigating the role of uncertainty and complexity in arbitration control by using a two-stage Markov decision process task (pg. 6, para. 1; Figure 1A), wherein state-transition uncertainty and state-space complexity were manipulated (pg. 6, para. 1; Figure 1B). Kim states that uncertainty conditions effect a change in the average amount of SPE for MB RL, wherein high-uncertainty conditions elicit a large amount of SPE, resulting in a decrement in MB prediction performance, whereas MF RL is less affected by the amount of state-transition uncertainty (pg. 6, para. 1). Kim then interrogates the effect of uncertainty and complexity on the weighting between MB and MF control (pg. 10, para. 2). Kim shows in Figure 4A a 2-way repeated-measures ANOVA (p<1e-4 for the main effect of both state-transition uncertainty and task complexity; p=0.039 for the interaction effect).
Therefore, one of ordinary skill in the art would have been motivated to combine the teaching of Kim (measuring and comparing parameters of state-space complexity and state-transition uncertainty for MF and MB RL to interrogate their effect on the weighting between MB and MF learners) with the method of Glascher (weighting the MF SARSA and MB FORWARD learners to create the HYBRID learner) because Kim states that complexity and uncertainty affect arbitration, so it would be advantageous to measure and compare these metrics across models. This would be useful for Glascher because Glascher uses a MB RL model (FORWARD) and a MF RL model (SARSA) to form a HYBRID model. The comparison of additional parameters between FORWARD and SARSA would allow the HYBRID model to improve its predictions, since the HYBRID learner uses weighted state-action probabilities from the SARSA and FORWARD learners. One of ordinary skill in the art would have had a reasonable expectation of success in combining Kim with Glascher because Glascher uses MF and MB RL models just as Kim does. The combination would also have had a reasonable expectation of success because Kim determines the effect of state-space complexity on exploration as defined in equation 5 (pg. 22) and deployed it in both the MB and MF models to compare prediction performance between models (Figure 3B; pg. 22, no. 3). This method from Kim could have been integrated with Glascher because Glascher deploys the same equation (pg. 593, col. 2), which was used for each model to assume that participants' selection of actions is stochastic according to probabilities determined by their state-action values (pg. 593, col. 2).
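The shared choice rule referenced above (stochastic action selection according to probabilities determined by state-action values) is the generic softmax rule, which can be sketched as follows; beta is a generic inverse-temperature parameter, not a value fitted by either reference:

```python
import math

def softmax_policy(q_values, beta=1.0):
    """Convert a list of state-action values into action probabilities.

    Generic softmax choice rule of the form both references employ;
    larger beta makes selection more deterministic.
    """
    # Subtract the max value for numerical stability before exponentiating.
    m = max(q_values)
    exps = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]
```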
The instant invention is therefore prima facie obvious.
Dependent claims:
Regarding claims 4 and 13, Glascher states that the SARSA learner computes a RPE (pg. 593, col. 1, para. 5) and the FORWARD learner computes a SPE (pg. 593, col. 2, para. 1). Because the HYBRID learner uses the predictions of both the SARSA and FORWARD learners, it therefore has the RPE and SPE components stored within its weights. Glascher also discloses fitting each model to the behavioral data from the subjects (pg. 593, col. 2, last para.). Goodness of fit was compared between each model (pg. 593, col. 2, last para.).
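For illustration, the two error signals can be sketched in the generic textbook forms below; Glascher's actual update equations differ in their parameterization, so these are expository sketches only:

```python
def reward_prediction_error(reward, q_next, q_current, gamma=1.0):
    """Generic SARSA-style temporal-difference RPE:
    observed reward plus discounted next value, minus the current value."""
    return reward + gamma * q_next - q_current

def state_prediction_error(transition_probs, observed_next_state):
    """Generic SPE: surprise about the observed state transition,
    computed as 1 minus the estimated probability of the transition
    that actually occurred."""
    return 1.0 - transition_probs[observed_next_state]
```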
Regarding claims 5 and 14, Glascher states that the SARSA learner estimates a state-action value for each state and action (behavior profile) (pg. 593, col. 1, para. 5), and the FORWARD learner also estimates a state-action value (behavior profile) (pg. 593, col. 1, last para.). These estimates are derived by computing the RPE and SPE in SARSA and FORWARD, respectively (pg. 593, col. 1, para. 5 – col. 1, para. 2), both of which were fitted to the behavioral data (pg. 593, col. 2, last para.). Glascher also states that the models' free parameters were fitted to the behavioral data (pg. 593, col. 2, last para.).
Regarding claims 6 and 15, Glascher states that the state-action values from both the SARSA and FORWARD learners are combined into a weighted average and are used by the HYBRID learner to determine its own state action valuations (behavior profile) (pg. 593, col. 2, para. 2).
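The weighted combination described above can be sketched as a minimal valuation step; w_mb is a generic MB weight for illustration, whereas Glascher's actual weight evolves over trials:

```python
def hybrid_q(q_forward, q_sarsa, w_mb):
    """Weighted average of FORWARD (MB) and SARSA (MF) state-action
    values: a minimal sketch of the HYBRID learner's valuation step.
    w_mb is the weight given to the model-based (FORWARD) estimate."""
    return w_mb * q_forward + (1.0 - w_mb) * q_sarsa
```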
Regarding claims 8 and 17, Glascher shows in Table 1 that the negative model likelihoods and Akaike's Information Criterion were better in the HYBRID learner than in both the SARSA and FORWARD learners. Glascher then uses this determination to state that the HYBRID learner provides a significantly more accurate explanation of the behavior than both the SARSA and FORWARD learners. Therefore, the threshold value is merely the value at which the HYBRID learner exceeds the performance of SARSA and FORWARD.
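For reference, model comparison by Akaike's Information Criterion takes the standard form below (lower AIC indicates a better fit after penalizing free parameters); the fit values in the usage example are hypothetical, not Glascher's reported numbers:

```python
def aic(neg_log_likelihood, n_free_params):
    """Akaike's Information Criterion: 2k + 2*(-log L); lower is better.
    The 2k term penalizes models with more free parameters."""
    return 2 * n_free_params + 2 * neg_log_likelihood

# Hypothetical fits for illustration: the model with the lowest AIC
# (here HYBRID, despite having more free parameters) is preferred.
models = {"SARSA": aic(210.0, 2), "FORWARD": aic(205.0, 2), "HYBRID": aic(190.0, 4)}
best = min(models, key=models.get)
```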
Regarding claim 21, Glascher discloses receiving functional imaging data from a 3T Siemens Trio scanner and processing the imaging data using SPM5 software (pg. 593, col. 1, para. 2-3). Glascher discloses using the RPE and SPE as parametric modulators at the second decision state and the final outcome state in the single-subject analysis. The beta images were then included in a repeated-measures ANOVA at the second level, testing for the effect of each error signal across the group (pg. 588, col. 2, para. 4-5).
Response to Arguments under 35 USC 103
Applicant's arguments filed 02/06/2025 have been fully considered but they are not persuasive.
Applicant argues that neither Glascher nor Kim disclose determining a correlation between the first and second level artificial-intelligence computational model using both a profile correlation and a parameter correlation (pg. 10, para. 3 of Applicant’s remarks). Applicant’s arguments are not persuasive for the following reasons:
The broadest reasonable interpretation of determining a correlation by using two separate metrics includes using the two metrics for comparison rather than combining them to derive a third metric (i.e., a correlation). Glascher does compare the first-level models (SARSA and FORWARD) to the second-level model (HYBRID). Glascher compares the model-predicted action probabilities (profile correlation) in Figure S2(D) and further evaluates prediction accuracy between the models in the section entitled Evaluating Behavioral Model Fit (pg. 558, col. 1, para. 2). Glascher also compares the model parameters between the models in Table 1 and calculates and compares negative model likelihood and Akaike's Information Criterion metrics between models, which take into account the free parameters of each model (pg. 593, col. 2, last para.).
Furthermore, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). It is the combination of Glascher and Kim that discloses the newly presented limitations, as discussed in the rejection above.
Conclusion
No claims are allowed.
Notable prior art includes: Frank et al. (Journal of Neuroscience 35, no. 2 (2015): 485-494; newly cited), Kang et al. (US 2019/0147063 A1; published May 19, 2019), Clark et al. (US 9,280,745 B1; published March 8, 2016), and Lee et al. (Neuron 81, no. 3 (2014): 687-699; newly cited).
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Noah A. Auger whose telephone number is (703)756-4518. The examiner can normally be reached M-F 7:30-4:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek can be reached on (571) 272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.A.A./Examiner, Art Unit 1687
/Karlheinz R. Skowronek/Supervisory Patent Examiner, Art Unit 1687