DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/2026 has been entered.
Status of Claims
Claims 1-20 are pending, of which claims 1 and 13-16 are in independent form.
Claims 1-20 are rejected under 35 U.S.C. 101.
Claims 1-20 are rejected under 35 U.S.C. 103.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s Argument:
Applicant argues, on pages 6-12 of the “Remarks”, that neither of the cited prior art references explicitly teaches “operating one or more classifiers associated with different types of response or mental states of human participation”; “wherein said passive brain-computer interface is configured to detect implicit human mental engagement without requiring explicit conscious action or awareness from the human participating in said real-life context”.
Examiner’s Response:
Examiner respectfully disagrees; the combination of Sajda and Myrden clearly teaches operating one or more classifiers associated with different types of response or mental states of human participation (Myrden: classifying the features into a mental state ¶ [0024]. In some embodiments, the step of classifying the features into the mental state comprises applying a shrinkage linear discrimination analysis to the frequency spectra data of selected features for classification, and determining the mental state based on the frequency ranges having higher spectral power ¶ [0030]. Also see ¶ [0056], [0066]. Classification device 120 can build or train a classification model using this data, for example, EEG data from a single user. Classification device 120 can use the classifier to classify mental states of the user and cause a result to be sent to an entity 150 or interface application 130 ¶ [0104]. Also see ¶ [0124]-[0125]);
wherein said passive brain-computer interface is configured to detect implicit human mental engagement without requiring explicit conscious action or awareness from the human participating in said real-life context (Myrden: the mental state of the patient is monitored via passive BCI monitoring in parallel with active BCI monitoring ¶ [0015]. Also see ¶ [0047]. Passive brain-computer interfaces may provide a way to complement and stabilize these traditional systems. Embodiments described herein can provide a passive brain-computer interface that uses electroencephalography to monitor changes in mental state on a single-trial basis ¶ [0097]. Also see ¶ [0136], [0176]). Examiner further specifies that Sajda also teaches: the disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response to objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle ¶ [0020]. Also see ¶ [0029], [0040]).
Regarding the arguments presented for 35 USC 101 rejection, examiner specifies that the arguments/amendments (pages 1-5 of the remarks) fail to overcome the 35 USC 101 rejection. More specifically:
Step 2A, Prong One (Judicial Exception)
The claims explicitly recite operations that are fundamentally mental or human-centric, such as:
“Imparting … human knowledge/intelligence/advice/subjective interpretations into an ML algorithm”;
“Processing operational and environmental data”;
“Processing human brain activity”;
“Identifying human mental engagement”;
“Inferring aspects of participation”;
“Controlling analysis based on the inferred data”;
“Detecting implicit human mental engagement”.
At its core, the claim is directed to:
Collecting data (physical data and brain activity),
Analyzing/inferencing mental state,
Providing or imparting advice/knowledge.
These fall into recognized abstract idea groupings:
Mental Process: human knowledge; human intelligence; subjective interpretation; human advice; mental engagement detection; inference about participation. This is merely classic automation of mental processes and evaluation. Even though a BCI is used, the focus remains on interpreting human mental states and providing advice.
Mathematical/Algorithmic Processing: ML algorithm; classifiers; inferencing; processing data. ML/classifiers are treated as mathematical concepts when not tied to a specific technical improvement (there are no indications of technical improvements).
Therefore, the independent claims recite: mental process; mathematical concept; information analysis and advice.
Step 2A, Prong Two (Practical Application)
The claims do not integrate the abstract idea into a practical application. The claims merely recite generic components:
“information processing device,”
“ML algorithm,”
“passive brain-computer interface,”
“classifiers,”
“sensing data,”
“real-life context.”
These appear only as generic tools performing conventional data collection and processing.
Generic Implementation:
The computing device, ML algorithm, and classifiers are described functionally and perform routine data collection and analysis.
No specific architecture, parameterization, or unconventional configuration is recited.
No Improvement to BCI Technology:
The claims do not disclose a particular signal acquisition, filtering, feature extraction, or neural decoding technique.
The passive BCI is invoked only as a tool to obtain mental state data.
Result Oriented Functional Language:
Detecting “implicit human mental engagement” is claimed at a high level without specifying how the detection is technologically achieved.
The ML algorithm is recited in terms of desired results rather than a specific technical implementation.
Field-of-Use limitation:
The “real-life”, “non-virtual”, or “non-predefined” context merely describes the environment in which the abstract idea is applied.
This language does not impose meaningful technological limitations.
There are no indications of:
A technical improvement to the functioning of the ML model itself or to how ML models operate;
A specific technical mechanism improving computer performance or enhancing sensor performance;
A new technical architecture for human-computer integration.
The control clause “aspects of said real-life context are controlled by said information processing device” is also functional and result-oriented, not tied to any specific technical means.
Therefore, the claims merely use generic computing components to execute an abstract idea, which is insufficient to integrate it into a practical application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claim(s) recite(s) context-related operations in view of human brain activities collected through a brain-computer interface (BCI).
With respect to step 1 of the patent subject matter eligibility analysis, the claims are directed to a process, machine, manufacture, or composition of matter.
Independent claims 1 and 14 are directed to a method, which is a process.
Independent claim 13 is directed to a non-transitory medium, which is directed to one of the four statutory subject matters.
Independent claims 15 and 16 are directed to a system, including a processor and a memory, which is a machine.
All other claims depend on claims 1, 13-16. As such, claims 1-20 are directed to a statutory category.
Regarding claims 1 and 13-16:
With respect to Step 2A, Prong One, the claims recite an abstract idea, law of nature, or natural phenomenon. Specifically, the following limitations recite mathematical concepts and/or mental processes and/or certain methods of organizing human activity.
The claims explicitly recite operations that are fundamentally mental or human-centric, such as:
“Imparting … human knowledge/intelligence/advice/subjective interpretations into an ML algorithm”;
“Processing operational and environmental data”;
“Processing human brain activity”;
“Identifying human mental engagement”;
“Inferring aspects of participation”;
“Controlling analysis based on the inferred data”;
“Detecting implicit human mental engagement”.
At its core, the claim is directed to:
Collecting data (physical data and brain activity),
Analyzing/inferencing mental state,
Providing or imparting advice/knowledge.
These fall into recognized abstract idea groupings:
Mental Process: human knowledge; human intelligence; subjective interpretation; human advice; mental engagement detection; inference about participation. This is merely classic automation of mental processes and evaluation. Even though a BCI is used, the focus remains on interpreting human mental states and providing advice.
Mathematical/Algorithmic Processing: ML algorithm; classifiers; inferencing; processing data. ML/classifiers are treated as mathematical concepts when not tied to a specific technical improvement (there are no indications of technical improvements).
Therefore, the independent claims recite: mental process; mathematical concept; information analysis and advice.
With respect to Step 2A, Prong Two, the claims do not recite additional elements that integrate the judicial exception into a practical application. The following limitations are considered “additional elements,” and explanation will be given as to why these “additional elements” do not integrate the judicial exception into a practical application.
The claims do not integrate the abstract idea into a practical application. The claims merely recite generic components:
“information processing device,”
“ML algorithm,”
“passive brain-computer interface,”
“classifiers,”
“sensing data,”
“real-life context.”
These appear only as generic tools performing conventional data collection and processing.
Generic Implementation:
The computing device, ML algorithm, and classifiers are described functionally and perform routine data collection and analysis.
No specific architecture, parameterization, or unconventional configuration is recited.
No Improvement to BCI Technology:
The claims do not disclose a particular signal acquisition, filtering, feature extraction, or neural decoding technique.
The passive BCI is invoked only as a tool to obtain mental state data.
Result Oriented Functional Language:
Detecting “implicit human mental engagement” is claimed at a high level without specifying how the detection is technologically achieved.
The ML algorithm is recited in terms of desired results rather than a specific technical implementation.
Field-of-Use limitation:
The “real-life”, “non-virtual”, or “non-predefined” context merely describes the environment in which the abstract idea is applied.
This language does not impose meaningful technological limitations.
There are no indications of:
A technical improvement to the functioning of the ML model itself or to how ML models operate;
A specific technical mechanism improving computer performance or enhancing sensor performance;
A new technical architecture for human-computer integration.
The control clause “aspects of said real-life context are controlled by said information processing device” is also functional and result-oriented, not tied to any specific technical means.
Therefore, the claims merely use generic computing components to execute an abstract idea, which is insufficient to integrate it into a practical application.
With respect to Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations are directed to a computer readable storage medium, computer, memory, and processor, at a very high level of generality and without imposing meaningful limitations on the scope of the claim. The additional elements (generic ML, generic classifiers, generic sensing, generic BCI, and generic processing device) do not amount to significantly more than the abstract idea and are not enough to transform an abstract idea into eligible subject matter. Such generic, high‐level, and nominal involvement of a computer or computer‐based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent‐eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo. Further, See, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359‐60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093‐94 (Fed. Cir. 2015) ("Just as Diehr could not save the claims in Alice, which were directed to 'implement[ing] the abstract idea of intermediated settlement on a generic computer', it cannot save OIP's claims directed to implementing the abstract idea of price optimization on a generic computer.") (citations omitted). See also, Affinity Labs of Texas LLC v. DirecTV LLC, 838 F.3d 1253, 1257‐1258 (Fed. Cir. 2016) (mere recitation of a GUI does not make a claim patent‐eligible); Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363, 1370 (Fed. Cir. 2015) ("the interactive interface limitation is a generic computer element").
The additional elements are broadly applied to the abstract idea at a high level of generality ("similar to how the recitation of the computer in the claims in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer," as explained in MPEP § 2106.05(f)), and they operate in a well‐understood, routine, and conventional manner.
MPEP § 2106.05(d)(II) sets forth the following:
The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
• Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec ... ; TLI Communications LLC v. AV Auto. LLC ... ; OIP Techs., Inc., v. Amazon.com, Inc ... ; buySAFE, Inc. v. Google, Inc ... ;
• Performing repetitive calculations, Flook ... ; Bancorp Services v. Sun Life ... ;
• Electronic recordkeeping, Alice Corp ... ; Ultramercial ... ;
• Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc ... ;
• Electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank ... ; and
• A web browser's back and forward button functionality, Internet Patents Corp. v. Active Network, Inc. ...
. . . Courts have held computer-implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking).
In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea, nor does the ordered combination amount to significantly more than the abstract idea itself.
The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are similarly directed to the “Mental Processes” grouping of abstract ideas set forth in the 2019 PEG, without integrating the abstract idea into a practical application and with, at most, a general purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.
Regarding claims 2, 3 and 11 (Generic Data Association and Control):
Claims 2 and 11 recite associating identified operational data and human brain activity data and controlling analysis/inferencing based on the association.
Claim 3 recites adapting analysis based on human mental engagement and device interactions.
These limitations merely refine the abstract data analysis and decision-making process and constitute additional mental evaluation and information processing.
There is no specific technical mechanism or improvement to computer or BCI functionality.
Regarding claims 4 and 5 (Event Characterization):
Claim 4 recites that the real-life context comprises a cognitive probing event.
Claim 5 recites that the brain activity data comprises implicit human participation.
These limitations merely describe the type of information being analyzed and do not provide any technological improvements.
The claims continue to focus on observing and interpreting human mental activity.
Regarding claim 6 (Generic Bio-signal Sensing):
Claim 6 recites identifying mental engagement from human bio-signal data sensed by at least one bio-signal sensor.
The sensor and sensing are recited functionally and generically.
No particular signal acquisition technique, electrode configuration, filtering method, or noise reduction improvement is provided.
The limitation merely gathers additional data for the abstract idea.
Regarding claims 7 and 17 (Real-Time Processing):
Claims 7 and 17 recite processing data in real-time or quasi real-time.
Processing information in real time is a well-understood, routine computing function.
The claims do not recite any specific latency reduction technique, streaming architecture, or technical performance improvement.
Regarding claim 8 (Types of operational data):
Claim 8 recites various device states and technological state data.
The limitation simply expands the categories of data being collected and analyzed.
Expanding the data field does not constitute a technological improvement.
Regarding claims 9, 18 and 19 (Learning Reinforcement):
Claim 9 recites that the ML algorithm is a reinforcement learning algorithm with a reward function based on brain activity data.
Reinforcement learning is a known mathematical technique.
The claim does not recite any specific improvement to the reinforcement learning architecture, reward formulation, or training efficiency.
The claim therefore remains within the abstract mathematical concept.
Regarding claims 10 and 20 (Artificial General Intelligence):
Claim 10 recites that the ML algorithm comprises an artificial general intelligence data processing algorithm.
This is a high-level aspirational label and does not impose any concrete technological limitation.
The claim is directed to a generic algorithmic process.
Regarding claim 12 (Additional Algorithmic Processing):
Claim 12 recites operating at least one further algorithm for sensing, processing, and pre-processing various data types.
These are generic data processing functions performed by conventional computing systems.
No unconventional processing pipeline or technological improvement is provided.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sajda; Paul et al. (US 20190101985 A1) [Sajda] in view of MYRDEN; Andrew et al. (US 20190212816 A1) [Myrden] in view of ALCAIDE; Ramses et al. (US 20200192478 A1) [Alcaide].
Regarding claims 1, 13, 14, and 15, Sajda discloses, a method comprises: imparting into a machine learning algorithm operated by an information processing device at least one of human knowledge, human intelligence, human subjective interpretations and human advice about tasks, processes, devices, and information perceived by a human participating in a real-life context (The disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response to objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle. As utilized herein, an “environment” can include a real-world environment, a virtual reality environment, and/or an augmented reality environment. Additionally, the disclosed subject matter provides systems and methods for using a brain-artificial intelligence (AI) interface to use neurophysiological signals to help deep reinforcement learning based AI systems adapt to human expectations as well as increase task performance. The disclosed subject matter can provide access to the cognitive state of humans, such as through real-time electroencephalogram (EEG) based neuroimaging, and can be used to understand implicit non-verbal cues in real-time. Furthermore, the systems and methods disclosed can use neurophysiological signals to inform deep reinforcement learning based AI systems to enhance user comfort and/or trust in automation ¶ [0020], [0022]. In some embodiments, the subjects (e.g., human user) can be driven through a grid of streets and asked to count image objects of a pre-determined target category.
As a subject is driven through a virtual environment, the subject can constantly make assessments, judgments, and decisions about the virtual objects that are encountered in the virtual environment during the drive. The subject can act immediately upon some of these assessments and/or decisions, but several of these assessments and/or decisions based on the encountered virtual objects can become mental notes or fleeting impressions (e.g., the subject's implicit labeling of the virtual environment). The disclosed system can physiologically correlate such labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of the three-dimensional virtual environment ¶ [0040]. Also see ¶ [0024], [0025], [0029]-[0032], [0064]-[0065]),
said machine learning algorithm processing operational data comprising at least one of sensed, perceived, obtained or otherwise identified physical data, and processing virtual data and environmental information originating from said real-life context (neural, physiological, or behavioral signatures to inform deep reinforcement learning based AI systems to enhance user comfort and trust in automation [Abstract]. The disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response to objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle. As utilized herein, an “environment” can include a real-world environment, a virtual reality environment, and/or an augmented reality environment ¶ [0020]. Also see ¶ [0031]-[0032], [0064]. Real-world environment has been interpreted as real-life context) and
human mental brain activity data relating to implicit human participation with said real-life context and provided by a [passive] brain-computer interface, wherein said operational data and said human mental brain activity data are identified by sensing human mental engagement with an aspect of said real-life context (The present disclosure relates to systems and methods for providing a hybrid brain-computer-interface (hBCI) that can detect an individual's reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response to objects, events and/or actions by an AI agent in an environment by generating implicit reinforcement signals for improving an AI agent controlling actions in the relevant environment, such as an autonomous vehicle. Although the disclosed subject matter is discussed within the context of an autonomous vehicle virtual reality game in the exemplary embodiments of the present disclosure, the disclosed system can be applicable to any other environment (e.g., real, virtual, and/or augmented) in which the human user's sensory input is to be used to influence actions, changes, and/or learning in the environment ¶ [0005]. Also see [0008], [0020] and [0031]-[0033]. The subject can act immediately upon some of these assessments and/or decisions, but several of these assessments and/or decisions based on the encountered virtual objects can become mental notes or fleeting impressions (e.g., the subject's implicit labeling of the virtual environment). The disclosed system can physiologically correlate such labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of the three-dimensional virtual environment ¶ [0040]), and
at least one of analysis and inferencing by said machine learning algorithm and aspects of said real-life context are controlled by said information processing device based on said processed operational data and said human mental brain activity data (The system includes a machine learning module operatively connected to the at least one sensor and configured to process sensory information for the user from the at least one sensor in response to the environment, and a controller operatively connected to the machine learning module. The machine learning module includes a processing circuit, a hybrid human brain computer interface (hBCI) module, and a reinforcement learning module ¶ [0008]. Also see ¶ [0031], [0033], [0034]).
However, Sajda does not explicitly facilitate, passive brain-computer interface; continuously monitored by the passive brain-computer interface operating at least one respective classifier; operating one or more classifiers associated with different types of response or mental states of human participation; wherein said passive brain-computer interface is configured to detect implicit human mental engagement without requiring explicit conscious action or awareness from the human participating in said real-life context.
Myrden discloses, passive brain-computer interface; continuously monitored by the passive brain-computer interface operating at least one respective classifier (Passive brain-computer interfaces may provide a way to complement and stabilize these traditional systems. Embodiments described herein can provide a passive brain-computer interface that uses electroencephalography to monitor changes in mental state on a single-trial basis. An example experiment recorded cortical activity from 15 locations while 11 able-bodied adults completed a series of challenging mental tasks. Using a feature clustering process to account for redundancy in EEG signal features, embodiments classified self-reported changes in fatigue, frustration, and attention levels with 74.8±9.1%, 71.6±5.6%, and 84.8±7.4% accuracy, respectively. Based on the most frequently-selected features across all participants, embodiments can have frontal and central electrodes for fatigue detection, posterior alpha band and frontal beta band activity for frustration detection, and posterior alpha band activity for attention detection. In some embodiments, these results can be integrated with an active brain-computer interface ¶ [0097]. FIG. 4 is a view of an example interface application 130. In some embodiments, interface application 130 includes a classification device 120. In some embodiments, interface application 130 is connected to a headset associated with or housing a BCI platform 110 and classification device 120. The headset may include multiple electrodes 52 to collect EEG data when connected to a user's scalp. In some embodiments, the headset may comprise an in-ear EEG device as described in U.S. application No. 62/615,108, titled “In-Ear EEG Device and Brain-Computer Interfaces” and filed Jan. 9, 2018, which is incorporated herein by reference. The signals may be collected by signal collection unit 134, which may connect to BCI platform 110 housed within the headset. 
The BCI platform 110 can create and/or use one or more classifiers as described above. For example, the BCI platform 110 within a headset 140 can train and retrain a classifier using EEG data from one or more sessions from a single user engaged with interface application 130 or headset 140. BCI platform 110 can use the classifier to classify mental states of the user using further EEG signals. BCI platform 110 may be operable as described above ¶ [0124]. Also see ¶ [0104], [0105], [0125], [0133], [0192] and [0199]);
operating one or more classifiers associated with different types of response or mental states of human participation (Myrden: classifying the features into a mental state ¶ [0024]. In some embodiments, the step of classifying the features into the mental state comprises applying a shrinkage linear discrimination analysis to the frequency spectra data of selected features for classification, and determining the mental state based on the frequency ranges having higher spectral power ¶ [0030]. Also see ¶ [0056], [0066]. Classification device 120 can build or train a classification model using this data, for example, EEG data from a single user. Classification device 120 can use the classifier to classify mental states of the user and cause a result to be sent to an entity 150 or interface application 130 ¶ [0104]. Also see ¶ [0124]-[0125]);
wherein said passive brain-computer interface is configured to detect implicit human mental engagement without requiring explicit conscious action or awareness from the human participating in said real-life context (Myrden: the mental state of the patient is monitored via passive BCI monitoring in parallel with active BCI monitoring ¶ [0015]. Also see ¶ [0047]. Passive brain-computer interfaces may provide a way to complement and stabilize these traditional systems. Embodiments described herein can provide a passive brain-computer interface that uses electroencephalography to monitor changes in mental state on a single-trial basis ¶ [0097]. Also see ¶ [0136], [0176]). Examiner further specifies that Sajda also teaches: the disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response to objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle ¶ [0020]. Also see ¶ [0029], [0040]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Myrden's system would have allowed Sajda to facilitate a passive brain-computer interface; continuously monitored by the passive brain-computer interface operating at least one respective classifier; operating one or more classifiers associated with different types of response or mental states of human participation; wherein said passive brain-computer interface is configured to detect implicit human mental engagement without requiring explicit conscious action or awareness from the human participating in said real-life context. The motivation to combine is apparent in the Sajda reference, because there is a need to improve brain-computer interface data collection and utilization.
However, neither Sajda nor Myrden explicitly facilitates wherein said real-life context is a non-predefined or authentic non-virtual context or environment occurring in reality or practice; referring to what aspect or aspects of said real-life context said human participation is momentarily implicitly involved with; said real-life context; to impart human knowledge, intelligence, subjective interpretations and human advice about tasks, processes, devices, and information perceived from said real-life context.
Alcaide discloses, wherein said real-life context is a non-predefined or authentic non-virtual context or environment occurring in reality or practice; referring to what aspect or aspects of said real-life context said human participation is momentarily implicitly involved with; said real-life context (For BCI technology to be better suited for patients, useful to the general public, and employed in the control of real-world tasks, the information transfer rate has to be improved to meet a natural interactive pace, the error rate has to be reduced, and the complexity of the interaction interface has to be minimized, compared to current implementations. Additionally, BCI applications demand a high cognitive load from the users, thus the user interface has to be improved to move away from quiet laboratory environments into the real world ¶ [0027]. The integrated video based eye-tracker 102 and display 106 can be configured to view virtual reality space in the form of a user interface presented on the display 106. In some embodiments, the integrated video based eye-tracker 102 and display 106 can be configured such that the display 106 is on a semi-transparent eye-glass area, allowing the user to view augmented reality space. That is, the user can view the real-world through the semi-transparent eye-glass area that is also the integrated display 106 presenting the user with a user interface that he/she can interact with ¶ [0044]. Also see ¶ [0101], [0105], [0111]);
to impart human knowledge, intelligence, subjective interpretations and human advice about tasks, processes, devices, and information perceived from said real-life context (For BCI technology to be better suited for patients, useful to the general public, and employed in the control of real-world tasks, the information transfer rate has to be improved to meet a natural interactive pace, the error rate has to be reduced, and the complexity of the interaction interface has to be minimized, compared to current implementations. Additionally, BCI applications demand a high cognitive load from the users, thus the user interface has to be improved to move away from quiet laboratory environments into the real world. In order to configure BCI devices and applications to be easier and more intuitive, there exists a need for improved devices and techniques in the implementation of brain machine interfaces that operate with high-speed and high accuracy to enable user mediated action selection through a natural intuitive process ¶ [0027]. Also see ¶ [0044], [0101], [0105], [0111]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Alcaide's system would have allowed Sajda and Myrden to facilitate wherein said real-life context is a non-predefined or authentic non-virtual context or environment occurring in reality or practice; referring to what aspect or aspects of said real-life context said human participation is momentarily implicitly involved with; said real-life context; to impart human knowledge, intelligence, subjective interpretations and human advice about tasks, processes, devices, and information perceived from said real-life context. The motivation to combine is apparent in the Sajda and Myrden references, because there is a need to improve BCI systems to address the need for brain-computer interfaces that operate with high speed and accuracy.
Regarding claim 2, the combination of Sajda, Myrden and Alcaide discloses, wherein at least one of said analysis and inferencing by said machine learning algorithm and aspects of said real-life context are controlled by said information processing device by associating said identified operational data and human mental brain activity data (Sajda: In other example embodiments, a system for detecting reinforcement signals of a user in one or more objects, events, or actions within an environment through at least one sensor is disclosed. The system includes a machine learning module operatively connected to the at least one sensor and configured to process sensory information for the user from the at least one sensor in response to the environment, and a controller operatively connected to the machine learning module. The machine learning module includes a processing circuit, a hybrid human brain computer interface (hBCI) module, and a reinforcement learning module ¶ [0008]. neural, physiological, or behavioral signatures to inform deep reinforcement learning based AI systems to enhance user comfort and trust in automation [Abstract]. The disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle. As utilized herein, an “environment” can include a real-world environment, a virtual reality environment, and/or an augmented reality environment ¶ [0020]. Also see ¶ [0031], [0033], [0034], [0040]).
Regarding claim 3, the combination of Sajda, Myrden and Alcaide discloses, wherein at least one of said analysis and inferencing by said machine learning algorithm and aspects of said real-life context are adapted by said information processing device based on at least one of human mental engagement with respect to respective operations and interactions in said real-life context and engagement of said information processing device with respective operations and interactions in said real-life context (Sajda: neural, physiological, or behavioral signatures to inform deep reinforcement learning based AI systems to enhance user comfort and trust in automation [Abstract]. The disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle. As utilized herein, an “environment” can include a real-world environment, a virtual reality environment, and/or an augmented reality environment ¶ [0020]. Also see ¶ [0008], [0031], [0033], [0034]).
Regarding claim 4, the combination of Sajda, Myrden and Alcaide discloses, wherein said real-life context comprises a cognitive probing event (Sajda: The present disclosure relates to systems and methods for providing a hybrid brain-computer-interface (hBCI) that can detect an individual's reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response to objects, events and/or actions by an AI agent in an environment by generating implicit reinforcement signals for improving an AI agent controlling actions in the relevant environment, such as an autonomous vehicle ¶ [0005]. Also see ¶ [0020]-[0021]).
Regarding claim 5, the combination of Sajda, Myrden and Alcaide discloses, wherein said human brain activity data comprises implicit human participation of at least one individual (Sajda: These sensory input signals can be integrated and decoded to construct a hybrid brain-computer interface (hBCI) whose output represents a passenger's subjective level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like in objects and/or events in the virtual environment. The disclosed system can be a hBCI by utilizing brain-based physiological signals measured from the passenger and/or user, such as EEG, pupillometry, and gaze detection, using sensory input devices. By integrating physiological signals that can infer brain state based on a fusion of modalities (e.g., other sensory input signals) other than direct measurement of brain activity, the disclosed system can be a hybrid BCI (hBCI) ¶ [0029], [0031]. The present disclosure relates to systems and methods for providing a hybrid brain-computer-interface (hBCI) that can detect an individual's reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response to objects, events and/or actions by an AI agent in an environment by generating implicit reinforcement signals for improving an AI agent controlling actions in the relevant environment, such as an autonomous vehicle ¶ [0005], [0020]).
Regarding claim 6, the combination of Sajda, Myrden and Alcaide discloses, wherein human mental engagement with an aspect of said real-life context is identified from human bio-signal data sensed by at least one bio-signal sensor, including said passive brain-computer interface (Sajda: the sensory input devices can include, but are not limited to, a tablet computing device (e.g., iPad) or any other mobile computing device including cameras, sensors, processors, wearable electronic devices which can include biosensors (e.g., smart watch, Apple Watch, Fitbit, etc.), heart rate monitors and/or sensors, and/or an EEG. These sensors can collect different types of sensory input information from the human user and transmit that information to the machine learning module of the hBCI system illustrated in FIG. 2 ¶ [0031]).
Regarding claims 7 and 17, the combination of Sajda, Myrden and Alcaide discloses, wherein operational data, human mental brain activity data and human mental engagement data are processed in real-time or quasi real-time, in particular data pertaining to a time-critical context (Sajda: collection and processing of brain activity in real-time ¶ [0020]-[0023]).
Regarding claim 8, the combination of Sajda, Myrden and Alcaide discloses, wherein said operational data comprises at least one of physical data and virtual data originating from said context, in particular data pertaining to technological states and technological state changes of at least one device operating in said context, in particular at least one device controlled by said information processing device, comprising any of device input states, device output states, device operational states, device game states, computer aided design states, computer simulated design states, computer peripheral device states, computer controlled machinery states and respective state changes (Sajda: Although the disclosed subject matter is discussed within the context of an autonomous vehicle virtual reality game in the exemplary embodiments of the present disclosure, the disclosed system can be applicable to any other environment (e.g., real, virtual, and/or augmented) in which the human user's sensory input is to be used to influence actions, changes, and/or learning in the environment ¶ [0005]. Also see ¶ [0020], [0031], [0033] and Fig. 8).
Regarding claims 9 and 18, the combination of Sajda, Myrden and Alcaide discloses, wherein said information processing device operates a reinforcement machine learning algorithm comprising a reward function, wherein said human mental brain activity data provide said reward function (Sajda: the systems and methods disclosed can use neural, physiological, or behavioral signatures to inform deep reinforcement learning based AI systems to enhance user comfort and trust in automation [Abstract], ¶ [0005], [0008], [0020]-[0030]).
Regarding claims 10 and 20, the combination of Sajda, Myrden and Alcaide discloses, wherein said machine learning algorithm comprises an artificial general intelligence data processing algorithm (Sajda: Furthermore, the systems and methods disclosed can use neural, behavioral, and/or physiological signals to inform deep reinforcement learning based AI systems to enhance user comfort and/or trust in automation ¶ [0005]-[0007]. Also see ¶ [0062]-[0065]).
Regarding claim 11, the combination of Sajda, Myrden and Alcaide discloses, identifying, by said information processing device, an aspect of said context, said aspect identified from sensing human mental engagement with said context (Sajda: By integrating physiological signals that can infer brain state based on a fusion of modalities (e.g., other sensory input signals) other than direct measurement of brain activity, the disclosed system can be a hybrid BCI (hBCI) ¶ [0029], [0032]. The human-machine interaction that communicates passenger preferences to the AI agent can be implicit and via the hBCI ¶ [0056], [0064]);
acquiring, by said information processing device, operational data pertaining to said identified aspect (Sajda: The system includes a machine learning module operatively connected to the at least one sensor and configured to process sensory information for the user from the at least one sensor in response to the environment, and a controller operatively connected to the machine learning module. The machine learning module includes a processing circuit, a hybrid human brain computer interface (hBCI) module, and a reinforcement learning module ¶ [0008]);
acquiring, by said information processing device, human mental brain activity data of implicit human participation with said context pertaining to said identified aspect (Sajda: In some embodiments, a deep reinforcement AI agent can receive, as inputs, physiological signals from the human user and driving performance information associated with the simulated vehicle in the virtual environment. In some embodiments, the hBCI can also include a head-mounted display (e.g., a commercially available Oculus Rift, HTC Vive, etc.) and a plurality of actuators. The deep reinforcement AI learner can transmit instructions to the actuators in the hBCI's head-mounted display to perform certain actions ¶ [0025].);
processing, by said machine learning algorithm, said acquired operational data and mental state data assessed from said acquired human mental brain activity data (Sajda: FIG. 2 is a block diagram illustrating a system level diagram of the disclosed hBCI system 200. The hBCI system 200 can include one or more sensory input devices 202 as shown in FIG. 1, a machine learning module 210 that can process input signals from sensory input device(s) 202 to reinforce driving behavior using a deep reinforcement network, and an environment module 204 that can generate a virtual environment in which the AI agent can drive a simulated autonomous vehicle using the determined reinforced driving behavior ¶ [0033]), and
controlling, by said information processing device, at least one of analysis and inferencing by said machine learning algorithm and aspects of said context based on said data processing (Sajda: The system includes a machine learning module operatively connected to the at least one sensor and configured to process sensory information for the user from the at least one sensor in response to the environment, and a controller operatively connected to the machine learning module ¶ [0008]. Also see ¶ [0047]).
Regarding claim 12, the combination of Sajda, Myrden and Alcaide discloses, wherein said information processing device operates at least one further algorithm for at least one of sensing, performing, acquiring, processing, and pre-processing of at least one of brain activity data, operational data, human mental engagement, aspect identification, mental state data, human bio-signal data and control of said machine learning algorithm and aspects of said context (Sajda: The disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle ¶ [0020]. The disclosed system can physiologically correlate such labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of the three-dimensional virtual environment. For example, neural and ocular signals reflecting subjective assessment of objects in a three-dimensional virtual environment can be used to inform a graph-based learning model of that virtual environment, resulting in an hBCI system that can customize navigation and information delivery specific to the passenger's interests. The physiological signals that were naturally evoked by virtual objects in this task can be classified by an hBCI system which can include a hierarchy of linear classifiers, as illustrated in FIG. 3. The hBCI system's linear classifiers along with Computer Vision (CV) features of the virtual objects can be used as inputs to a CV system (e.g., TAG module) ¶ [0040]).
Regarding claim 16, the combination of Sajda, Myrden and Alcaide discloses, A system comprising: a processor; memory; a context sensor; and a scalp electroencephalogram; wherein the processor runs a machine learning algorithm for at least one of human knowledge, human intelligence, human subjective interpretations and human advise about tasks, processes, devices, and information perceived by a human participating in a real-life context from the memory comprising processing operational data comprising at least one of sensed, perceived, obtained or otherwise identified physical data, and using virtual data and environmental information originating from said real-life context from the context sensor and human mental brain activity data relating to implicit human participation with said real-life context and provided by the scalp electroencephalogram (Sajda: The disclosed systems and methods provide a hybrid brain-computer-interface (hBCI) that can detect an individual's implicit reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in response objects, events, and/or actions in an environment by generating implicit reinforcement signals for improving an AI agent controlling, by way of example only, a simulated autonomous vehicle. As utilized herein, an “environment” can include a real-world environment, a virtual reality environment, and/or an augmented reality environment. Additionally, the disclosed subject matter provides systems and methods for using a brain-artificial intelligence (AI) interface to use neurophysiological signals to help deep reinforcement learning based AI systems adapt to human expectations as well as increase task performance. The disclosed subject matter can provide access to the cognitive state of humans, such as through real-time electroencephalogram (EEG) based neuroimaging, and can be used to understand implicit non-verbal cues in real-time. 
Furthermore, the systems and methods disclosed can use neurophysiological signals to inform deep reinforcement learning based AI systems to enhance user comfort and/or trust in automation ¶ [0020]. The subject can act immediately upon some of these assessments and/or decisions, but several of these assessments and/or decisions based on the encountered virtual objects can become mental notes or fleeting impressions (e.g., the subject's implicit labeling of the virtual environment). The disclosed system can physiologically correlate such labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of the three-dimensional virtual environment. For example, neural and ocular signals reflecting subjective assessment of objects in a three-dimensional virtual environment can be used to inform a graph-based learning model of that virtual environment, resulting in an hBCI system that can customize navigation and information delivery specific to the passenger's interests. The physiological signals that were naturally evoked by virtual objects in this task can be classified by an hBCI system which can include a hierarchy of linear classifiers, as illustrated in FIG. 3. The hBCI system's linear classifiers along with Computer Vision (CV) features of the virtual objects can be used as inputs to a CV system (e.g., TAG module) ¶ [0040]).
continuously monitored by a passive brain-computer interface operating at least one respective classifier (Myrden: Passive brain-computer interfaces may provide a way to complement and stabilize these traditional systems. Embodiments described herein can provide a passive brain-computer interface that uses electroencephalography to monitor changes in mental state on a single-trial basis. An example experiment recorded cortical activity from 15 locations while 11 able-bodied adults completed a series of challenging mental tasks. Using a feature clustering process to account for redundancy in EEG signal features, embodiments classified self-reported changes in fatigue, frustration, and attention levels with 74.8±9.1%, 71.6±5.6%, and 84.8±7.4% accuracy, respectively. Based on the most frequently-selected features across all participants, embodiments can have frontal and central electrodes for fatigue detection, posterior alpha band and frontal beta band activity for frustration detection, and posterior alpha band activity for attention detection. In some embodiments, these results can be integrated with an active brain-computer interface ¶ [0097]. FIG. 4 is a view of an example interface application 130. In some embodiments, interface application 130 includes a classification device 120. In some embodiments, interface application 130 is connected to a headset associated with or housing a BCI platform 110 and classification device 120. The headset may include multiple electrodes 52 to collect EEG data when connected to a user's scalp. In some embodiments, the headset may comprise an in-ear EEG device as described in U.S. application No. 62/615,108, titled “In-Ear EEG Device and Brain-Computer Interfaces” and filed Jan. 9, 2018, which is incorporated herein by reference. The signals may be collected by signal collection unit 134, which may connect to BCI platform 110 housed within the headset.
The BCI platform 110 can create and/or use one or more classifiers as described above. For example, the BCI platform 110 within a headset 140 can train and retrain a classifier using EEG data from one or more sessions from a single user engaged with interface application 130 or headset 140. BCI platform 110 can use the classifier to classify mental states of the user using further EEG signals. BCI platform 110 may be operable as described above ¶ [0124]. Also see ¶ [0104], [0105], [0125], [0133], [0192] and [0199]).
Regarding claim 19, the combination of Sajda, Myrden and Alcaide discloses, wherein the reinforcement machine learning algorithm is a deep reinforcement machine learning algorithm comprising a reward function, wherein said human mental brain activity data provide said reward function (Sajda: the systems and methods disclosed can use neural, physiological, or behavioral signatures to inform deep reinforcement learning based AI systems to enhance user comfort and trust in automation [Abstract], ¶ [0005], [0008], [0020]-[0030]).
Conclusion
The examiner requests, in response to this Office action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line no(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI whose telephone number is (571)270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached at (571)270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
2/19/2026
/MOHAMMAD S ROSTAMI/Primary Examiner, Art Unit 2154