DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more; the judicial exception is an abstract idea. The claims recite gathering data from a user playing a game, analyzing the data using a mathematical model, receiving results from the analysis, and evaluating the results, similar to Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014), in which a "process of organizing information through mathematical correlations" was found to be directed to an abstract idea. This judicial exception is not integrated into a practical application because, when the claims are considered as a whole, there is no element or combination of elements sufficient to ensure that the claims amount to significantly more than the abstract idea itself. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because they fail to recite any improvement to another technology or technical field, any improvement to the functioning of the user terminal or server itself, or any meaningful limitation beyond generally linking the use of the abstract idea to a particular environment (i.e., there is no structural relationship tied to the abstract idea of gathering data from a user playing a game, analyzing the data using a mathematical model, and receiving and evaluating the results of the analysis). The recited machine-learning models are treated as mathematical models that do not improve the functionality of the computer terminal itself, and the recited terminal and server are merely generic computer components.
Therefore, because there are no meaningful limitations in the claims that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5, 9, and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Roots et al. (US 20140057244).
Regarding claim 1, Roots discloses extracting first game data for cognitive evaluation from a user's response input information to a cognitive game application (section 0026, Session 1 is used to measure the subject's initial ability and focus level); creating a customized cognitive task performance model corresponding to the user by applying the extracted first game data to a user-customized cognitive model based on artificial intelligence learning in which a cognitive architecture-based cognitive model has been pre-associatively trained in response to game data for each cognitive task (sections 0035-0036, in order to isolate different stages of learning, several data aggregations were made for specific sessions rather than the overall game. Sessions 2 through 7 measured initial learning while including less distraction. Sessions 8 through 11 defined the variables by considering only the low spatial distraction sections, as well as the late-intermediate portion of the subjects' learning. The final stage of the learning curve included sessions 12 through 15, when the most distraction (audio, visual, and high spatial) was being employed and subjects had the most experience with the game. The game collects data samples of 33 variables (Table 2, IDs 4-37) at 10 Hz. Over 17 sessions of 90 seconds each, this translates to 15,300 samples per participant. This is a rich and granular dataset of gameplay behavior, from which detailed diagnostic models can be built); and obtaining a substitution performance result of the cognitive task selected for each cognitive evaluation item using the customized cognitive task performance model (abstract, a tangible graphical interactive game wherein the interactive game employs a plurality of tangible graphical cubes; the game induces stimuli, measures responses, and accumulates the responses using a predefined set of variables into a predefined set of metrics, wherein the variables are determined using an interactive machine learning feedback algorithm) and evaluating the cognitive ability of the user for each cognitive item based on the substitution performance result (sections 0016, 0037, the machine learning-based clinical assessment protocol serves as a diagnostic aid for ADHD; the data from the game were mathematically transformed into feature variables corresponding to specific neurological correlates designated by the child/adolescent psychiatrists, and after feature creation, a principal component analysis was applied to generate the most useful features).
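For illustration only (this sketch is not part of the record or of the Roots disclosure): the cited passage states that 33 gameplay variables are sampled at 10 Hz over 17 sessions of 90 seconds each, yielding 15,300 samples per participant, and that a principal component analysis was applied after feature creation. The arithmetic and a generic PCA step can be checked as follows; the variable names, random stand-in data, and the choice of 5 components are assumptions, not disclosed values.

```python
import numpy as np

# Sampling arithmetic recited in Roots: 17 sessions x 90 s x 10 Hz.
SESSIONS, SECONDS, HZ, N_VARS = 17, 90, 10, 33
n_samples = SESSIONS * SECONDS * HZ
assert n_samples == 15_300  # matches the 15,300 samples per participant

# Stand-in gameplay data (random; the reference's actual data is not available).
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, N_VARS))

# Generic PCA via SVD of the mean-centered matrix: project onto the
# top-variance components, analogous to the feature-generation step cited.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_ratio = s**2 / np.sum(s**2)   # per-component variance fraction
k = 5                                   # hypothetical component count
features = Xc @ Vt[:k].T                # reduced feature matrix
print(features.shape)                   # (15300, 5)
```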
Regarding claim 2, Roots discloses that the customized cognitive task performance model includes a virtual task performance model for predicting at least one of the user's execution time, error rate, and correct answer rate corresponding to the selected cognitive task and for outputting the substitution performance result (section 0031, comparison of the success and failure rates for different combinations of distractors and intensities allowed investigators to determine whether audio or visual distractors were more problematic for the subject. This was determined by comparing the mCRR of a session containing only one type of distraction to a session with both. The intensity of distraction remained constant in each comparison, resulting in the variables instead of two. Spatial accuracy was then compared between sessions containing short distances between the cubes against those with increased distance).
Regarding claim 5, Roots discloses an accuracy verification step of further receiving additional game data different from the first game data from the user terminal and performing accuracy verification of the customized cognitive task performance model (section 0028, the design of variables was compared across the entire game. As the goals became more refined, new sets of variables were created that were specific for each section. Ultimately, the variables for session-specific tasks were chosen. While creating the comparative rates, percentage values were found to be the most accurate and efficient for the algorithms being utilized. These rates were most useful for accuracy measures, especially when comparing different types of distraction by creating ratios out of various session accuracies).
Regarding claim 9, Roots discloses a game data processing unit for extracting first game data for cognitive evaluation from a user's response input information to a cognitive game application (section 0026, Session 1 is used to measure the subject's initial ability and focus level); a customized cognitive task performance model configuration unit for creating a customized cognitive task performance model corresponding to the user by applying the extracted first game data to a user-customized cognitive model based on artificial intelligence learning in which a cognitive architecture-based cognitive model has been pre-associatively trained in response to game data for each cognitive task (sections 0035-0036, in order to isolate different stages of learning, several data aggregations were made for specific sessions rather than the overall game. Sessions 2 through 7 measured initial learning while including less distraction. Sessions 8 through 11 defined the variables by considering only the low spatial distraction sections, as well as the late-intermediate portion of the subjects' learning. The final stage of the learning curve included sessions 12 through 15, when the most distraction (audio, visual, and high spatial) was being employed and subjects had the most experience with the game. The game collects data samples of 33 variables (Table 2, IDs 4-37) at 10 Hz. Over 17 sessions of 90 seconds each, this translates to 15,300 samples per participant. This is a rich and granular dataset of gameplay behavior, from which detailed diagnostic models can be built); and a cognitive ability evaluation unit for obtaining a substitution performance result of the cognitive task selected for each cognitive evaluation item using the customized cognitive task performance model (abstract, a tangible graphical interactive game wherein the interactive game employs a plurality of tangible graphical cubes; the game induces stimuli, measures responses, and accumulates the responses using a predefined set of variables into a predefined set of metrics, wherein the variables are determined using an interactive machine learning feedback algorithm) and evaluating the cognitive ability of the user for each cognitive item based on the substitution performance result (sections 0016, 0037, the machine learning-based clinical assessment protocol serves as a diagnostic aid for ADHD; the data from the game were mathematically transformed into feature variables corresponding to specific neurological correlates designated by the child/adolescent psychiatrists, and after feature creation, a principal component analysis was applied to generate the most useful features).
Regarding claim 10, Roots discloses performing a cognitive game application corresponding to a preset cognitive diagnosis task for each cognitive evaluation item (section 0026, Session 1 is used to measure the subject's initial ability and focus level); extracting first game data for cognitive evaluation from response input information for the cognitive game application (sections 0035-0036, in order to isolate different stages of learning, several data aggregations were made for specific sessions rather than the overall game. Sessions 2 through 7 measured initial learning while including less distraction. Sessions 8 through 11 defined the variables by considering only the low spatial distraction sections, as well as the late-intermediate portion of the subjects' learning. The final stage of the learning curve included sessions 12 through 15, when the most distraction (audio, visual, and high spatial) was being employed and subjects had the most experience with the game. The game collects data samples of 33 variables (Table 2, IDs 4-37) at 10 Hz. Over 17 sessions of 90 seconds each, this translates to 15,300 samples per participant. This is a rich and granular dataset of gameplay behavior, from which detailed diagnostic models can be built); and transmitting the first game data to a cognitive state evaluation apparatus, wherein the cognitive state evaluation apparatus is configured to create a customized cognitive task performance model corresponding to the user by applying the extracted first game data to a user-customized cognitive model based on artificial intelligence learning in which a cognitive architecture-based cognitive model has been pre-associatively trained in response to game data for each cognitive task (section 0028, the creation of feature variables involved a great deal of trial and error. Initially, the design of variables was compared across the entire game. As the goals became more refined, new sets of variables were created that were specific for each section. Ultimately, the variables for session-specific tasks were chosen); to obtain a substitution performance result of the cognitive task selected for each cognitive evaluation item using the customized cognitive task performance model (abstract, a tangible graphical interactive game wherein the interactive game employs a plurality of tangible graphical cubes; the game induces stimuli, measures responses, and accumulates the responses using a predefined set of variables into a predefined set of metrics, wherein the variables are determined using an interactive machine learning feedback algorithm); and to evaluate the cognitive ability of the user for each cognitive item based on the substitution performance result (sections 0016, 0037, the machine learning-based clinical assessment protocol serves as a diagnostic aid for ADHD; the data from the game were mathematically transformed into feature variables corresponding to specific neurological correlates designated by the child/adolescent psychiatrists, and after feature creation, a principal component analysis was applied to generate the most useful features).
Claims 1, 6-8, and 10-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by McDermott (US 20180286272).
Regarding claim 1, McDermott discloses extracting first game data for cognitive evaluation from a user's response input information to a cognitive game application (section 0028, the skill training module comprises (i) providing a score of the user's skill performance, and (ii) on the basis of the score, selecting a difficulty level for the skill training module. In some embodiments, the user's skills performance is quantified by the user's accuracy in correctly distinguishing their activity between various stimuli); creating a customized cognitive task performance model corresponding to the user by applying the extracted first game data to a user-customized cognitive model based on artificial intelligence learning in which a cognitive architecture-based cognitive model has been pre-associatively trained in response to game data for each cognitive task (section 0037, on the basis of the user response to the challenge tasks, calculating a skills performance score for the user and increasing the difficulty of the challenge tasks when the skills performance score rises above a predetermined upper threshold and decreasing the difficulty of the challenge tasks when the skills performance score falls below a predetermined lower threshold while the user avatar advances towards the completion of a mission. For example, step (e) can include adjusting the difficulty of the challenge tasks based upon both the skills performance score and/or the attention state level of the user); and obtaining a substitution performance result of the cognitive task selected for each cognitive evaluation item using the customized cognitive task performance model (section 0037, the user's skills performance score is quantified by (i) the user's accuracy in correctly distinguishing between various stimuli, (ii) the user's ability to take correct actions, and/or (iii) the user's ability to avoid incorrect actions, e.g., to avoid impulsive responses).
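For illustration only (not part of the McDermott disclosure): the cited section 0037 describes raising task difficulty when a skills performance score rises above an upper threshold and lowering it when the score falls below a lower threshold. One plausible reading of that rule can be sketched as follows; the function name, threshold values, and level bounds are all assumptions, not disclosed parameters.

```python
# Hypothetical two-threshold difficulty rule, per the behavior described in
# McDermott section 0037. All numeric values are illustrative assumptions.
def adjust_difficulty(difficulty, score, lower=0.4, upper=0.8,
                      min_level=1, max_level=10):
    """Raise difficulty above `upper`, lower it below `lower`, else hold."""
    if score > upper:
        return min(difficulty + 1, max_level)   # clamp at the top level
    if score < lower:
        return max(difficulty - 1, min_level)   # clamp at the bottom level
    return difficulty                           # within band: no change

print(adjust_difficulty(5, 0.9))  # 6: score above the upper threshold
print(adjust_difficulty(5, 0.2))  # 4: score below the lower threshold
print(adjust_difficulty(5, 0.6))  # 5: within the band, unchanged
```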
Regarding claim 6, McDermott discloses the cognitive ability evaluation step comprises a result data processing step of configuring the result data according to the cognitive ability evaluation into an analysis interface and outputting it to a pre-registered guardian terminal corresponding to the user (Fig. 23, section 0262, 0264, Once individual scores are generated for a session they are averaged based on the overall cognitive skill (attention-associated skill or impulse/inhibition-associated skill) and weighted according to the game level. The summary progress report can include a depiction of (i) the change in the user's global attention score for a period of training undertaken by the user over a period of days, and (ii) the change in the user's composite score for training sessions undertaken by the user over a period of days).
Regarding claim 7, McDermott discloses a cognitive enhancement track recommendation step of recommending an enhancement task corresponding to a cognitive ability item evaluated below a preset threshold based on the result data (section 0009, on the basis of the user response to the challenge tasks, calculating a skills performance score for the user and increasing the difficulty of achieving the challenge tasks when the skills performance score rises above a predetermined upper threshold and decreasing the difficulty of achieving the challenge tasks when the skills performance score falls below a predetermined lower threshold while the user avatar advances towards the completion of the training mission e.g., as rapidly as possible under control of the user).
Regarding claim 8, McDermott discloses the cognitive game application includes a plurality of game interface applications configured step by step, corresponding to a task variable model for each pre-set cognitive evaluation item, and wherein the task variable model for each cognitive evaluation item includes at least one of a working memory variable model, an inhibition variable model (section 0014, the skill training module is configured to train cognitive inhibition and the skill transfer module is configured to demonstrate retention of the skill of cognitive inhibition by the user), a divided attention variable model (section 0012, the skill training module is configured to train attention maintenance e.g., focused attention and sustained attention and the skill transfer module is configured for the user to demonstrate retention of the skill of attention maintenance e.g., focused attention and sustained attention), a flexibility variable model, a processing speed variable model, and a selective attention variable model (section 0016, In some embodiments, the method includes: (a) following completion of the mission, determining (i) a number of correctly selected challenge tasks; (ii) a number of correctly rejected challenge tasks; and (iii) a total number of challenge tasks; and (b) calculating a selective attention score from a composite of (i)-(iii)).
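For illustration only (not part of the record): the cited section 0016 describes calculating a selective attention score from a composite of (i) correctly selected challenge tasks, (ii) correctly rejected challenge tasks, and (iii) the total number of challenge tasks. The composite below is one plausible reading, overall response accuracy, and is an assumption, not the formula actually disclosed.

```python
# Hypothetical composite of the three quantities named in section 0016.
# McDermott does not specify the exact combination; accuracy is assumed here.
def selective_attention_score(correct_selected, correct_rejected, total):
    """Fraction of all challenge tasks handled correctly (assumed composite)."""
    if total == 0:
        return 0.0            # guard against an empty mission
    return (correct_selected + correct_rejected) / total

print(selective_attention_score(12, 6, 20))  # 0.9
```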
Regarding claim 10, McDermott discloses performing a cognitive game application corresponding to a preset cognitive diagnosis task for each cognitive evaluation item (section 0028, the skill training module comprises (i) providing a score of the user's skill performance, and (ii) on the basis of the score, selecting a difficulty level for the skill training module. In some embodiments, the user's skills performance is quantified by said user's accuracy in correctly distinguishing their activity between various stimuli); extracting first game data for cognitive evaluation from response input information for the cognitive game application (section 0037, includes on the basis of the user response to the challenge tasks, calculating a skill performance score for the user and increasing the difficulty of the challenge tasks when the skill performance score rises above a predetermined upper threshold and decreasing the difficulty of the challenge tasks when the skills performance score falls below a predetermined lower threshold while the user avatar advances towards the completion of a mission. 
For example, step (e) can include adjusting the difficulty of the challenge tasks based upon both the skills performance score and/or the attention state level of the user); and transmitting the first game data to a cognitive state evaluation apparatus, wherein the cognitive state evaluation apparatus is configured to create a customized cognitive task performance model corresponding to the user by applying the extracted first game data to a user-customized cognitive model based on artificial intelligence learning in which a cognitive architecture-based cognitive model has been pre-associatively trained in response to game data for each cognitive task (section 0034, deriving a targeted cognitive skill score for each of the targeted cognitive skills on the basis of the attention state level and/or the user response to the challenge task, wherein the targeted cognitive skills comprise focused attention, sustained attention, cognitive inhibition, behavioral inhibition, selective attention, alternating attention, divided attention, interference control, novelty inhibition, delay of gratification, inner voice, motivational inhibition, or self-regulation; (f) for each training session, calculating a global composite score derived from a composite of each of the targeted cognitive skill scores; and (g) determining, over the period of training, a change in each attention score, each impulse/inhibition score, and each self-regulation score, and/or a change in the global composite score); and obtaining a substitution performance result of the cognitive task selected for each cognitive evaluation item using the customized cognitive task performance model and evaluating the cognitive ability of the user for each cognitive item based on the substitution performance result (section 0037, the user's skills performance score is quantified by (i) the user's accuracy in correctly distinguishing between various stimuli, (ii) the user's ability to take correct actions, and/or (iii) the user's ability to avoid incorrect actions, e.g., to avoid impulsive responses).
Regarding claim 11, McDermott discloses the cognitive game application includes a plurality of game interface applications configured in stages (section 0037, the modules are comprised of one or more levels, each level optionally being comprised of one or more missions e.g., stages), corresponding to a task variable model for each pre-set cognitive evaluation item, and wherein the task variable model for each cognitive evaluation item includes at least one of a working memory variable model, an inhibition variable model (section 0014, 0245, the skill training module is configured to train cognitive inhibition and the skill transfer module is configured to demonstrate retention of the skill of cognitive inhibition by the user), a divided attention variable model (section 0012, 0243, the skill training module is configured to train attention maintenance e.g., focused attention and sustained attention and the skill transfer module is configured for the user to demonstrate retention of the skill of attention maintenance e.g., focused attention and sustained attention), a flexibility variable model, a processing speed variable model and a selective attention variable model (section 0016, 0239, In some embodiments, the method includes: (a) following completion of the mission, determining (i) a number of correctly selected challenge tasks; (ii) a number of correctly rejected challenge tasks; and (iii) a total number of challenge tasks; and (b) calculating a selective attention score from a composite of (i)-(iii)).
Regarding claim 12, McDermott discloses that the user's response input information corresponding to the working memory variable model is information for inputting a series of sequentially displayed numbers in reverse order; the user's response input information corresponding to the restraint variable model is object selection information corresponding to a Stroop test query; the user's response input information corresponding to the divided attention variable model is sequential number input selection information having different colors (section 0242, Alternating challenge tasks can include target rule switches, or alternating target rules, e.g., rules identifying targets as objects having a specific set of characteristics, e.g., shapes, colors, and symbols); the user's response input information corresponding to the flexible variable model is card selection information suitable for a card classification criterion suggestion word; and the user's response input information to the processing speed variable model is object selection information for selecting a figure having a different shape or a presented object (section 0242, Alternating challenge tasks can include target rule switches, or alternating target rules, e.g., rules identifying targets as objects having a specific set of characteristics, e.g., shapes, colors, and symbols).
Regarding claim 13, McDermott discloses the task variable model for each cognitive evaluation item is determined according to the correct answer score and correct answer time information calculated from the user's response input information (section 0169, 0207, The initial training can be focused and sustained attention, challenging the user to move the character quickly while interacting with audio and distractors. Additional skills can be introduced in subsequent sessions as the user progresses through skill training and transfer modules. Users can be rewarded with points for exhibiting higher and/or sustained attention state levels and for their correct response to selection and rejection stimuli of varying priority. the FFM challenged the participant to move the user avatar quickly around a track while ignoring auditory and visual distractors. The second and third levels added tasks where the participants were required to jump for the correct target fruit and not jump for non-targeted fruit. Participants were rewarded with points for their correct jumps and non-jumps while points were deducted for incorrect commissions and omissions).
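For illustration only (not part of the record): the cited sections 0169 and 0207 describe awarding points for correct jumps and non-jumps and deducting points for incorrect commissions and omissions. A minimal sketch of that scoring behavior follows; the function name and the specific point values are assumptions, since the reference gives no exact weights.

```python
# Hypothetical session scoring per the FFM example in McDermott: reward
# correct jumps/non-jumps, penalize commissions/omissions. Weights assumed.
def session_points(correct_jumps, correct_nonjumps, commissions, omissions,
                   reward=10, penalty=5):
    """Net points: rewards for correct responses minus penalties for errors."""
    return (reward * (correct_jumps + correct_nonjumps)
            - penalty * (commissions + omissions))

print(session_points(8, 5, 2, 1))  # 10*13 - 5*3 = 115
```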
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JON ERIC C MORALES whose telephone number is (571) 272-3107. The examiner can normally be reached Monday-Friday, 8:30 AM-5:30 PM CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Hamaoui can be reached at 571-270-5625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JON ERIC C MORALES/Primary Examiner, Art Unit 3796