DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed November 18, 2025 has been entered. Claims 1-35 remain pending in the application.
Response to Arguments
Applicant’s arguments, see Remarks pages 17-22, filed November 18, 2025, with respect to claim 23 and likewise claims 19-21, have been fully considered and are persuasive. The section 101 rejections of claims 19-21 and 23 have been withdrawn. The wearable head-mounted device provides a specific device that is more than the generic sensors recited, for example, in claim 1.
Applicant's arguments filed November 18, 2025 with respect to the section 101 rejections of claims 1-18, 22 and 24-35 have been fully considered but are not persuasive. The claimed EEG sensors are merely generic data gathering and are not specific in the manner of claims 19-21 and 23. Claim 1 then further merely recites data output. There is no indication that the combination of elements improves the functioning of a computer or output device, or improves any technology or technical field beyond merely operating within the technical field of the claimed invention. Therefore, the claims remain rejected as being directed to non-statutory subject matter. The machine learning model is implemented on a generic computer and does not improve the functioning of that computer; rather, it uses a machine learning or statistical model to interpret data and output data. Thus, the arguments are not persuasive.
Applicant's arguments filed November 18, 2025 with respect to the section 102 rejections have been fully considered but are not persuasive. Applicant argues that Brunner fails to disclose “creating data visual guide elements that are displayed to a user within a computer interface; and”. The visual guide elements are interpreted as arrows or other markers that help annotate the figures, as discussed in pages 11-12 of the present application. FIGS. 10-12 and associated paragraphs of Brunner disclose the annotation of figures, including markers, for visual analysis. See also [0169], “Metadata can be used, for example, to annotate and properly store experimental data in separated subsets, combined separated data streams into one dataset for each subject, to analyze the data according to different factors, to stratify data and the like. Table IV shows other type of important metadata needed to uniquely identify a dataset.” The annotation of the data is used as visual guidance for the analysis. Thus, the arguments are not persuasive.
With respect to the arguments regarding claims 15, 16 and 23, the arguments are not persuasive. Applicant argues that Brunner fails to disclose “using a machine learning model or statistical model trained on annotated EEG data to receive and process EEG signals and to perform operations of:”. First, it is established above that Brunner discloses annotating EEG data. Brunner further discloses machine learning models trained on annotated EEG data and used to interpret the data for output. See [0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs: the data is input into a machine learning model for training and output. As discussed above, see paragraphs [0160 – 0169] discussing the metadata, the annotations of the data, and how this data is used as a dataset in the machine learning models. Additionally, the claims do not limit the machine learning model to receiving only EEG data as input, as argued on page 15 of the Remarks. Thus, the arguments are not persuasive.
With respect to the arguments regarding claim 24, the arguments are not persuasive for substantially the same reasons above. Brunner discloses the machine learning model using EEG signals as input. Brunner further uses known characteristics of the EEG signals for training. Thus, the arguments are not persuasive.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18, 22 and 24-35 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1 of the subject matter eligibility test (see MPEP 2106.03).
Claim 1 is directed to “a system” which describes one of the four statutory categories of patentable subject matter, i.e. a machine or manufacture.
Claim 15 is directed to “A non-transitory computer-readable medium storing one or more instructions that, when executed by one or more processors, cause the one or more processors to perform operations” which describes one of the four statutory categories of patentable subject matter, i.e. a machine or manufacture.
Claim 16 is directed to “A non-transitory computer-readable medium storing one or more instructions that, when executed by one or more processors, cause the one or more processors to perform operations” which describes one of the four statutory categories of patentable subject matter, i.e. a machine or manufacture.
Claim 18 is directed to “a method” which describes one of the four statutory categories of patentable subject matter, i.e. a process.
Claim 24 is directed to “a method” which describes one of the four statutory categories of patentable subject matter, i.e. a process.
Each of Claims 1-35 has been analyzed to determine whether it is directed to any judicial exceptions.
Step 2A of the subject matter eligibility test (see MPEP 2106.04).
Prong One:
Claims 1, 15-16, 18 and 24 recite (“sets forth” or “describes”) the abstract idea of “a mental process” (MPEP 2106.04(a)(2).III.), substantially as follows: “configured to ingest and process electrical signals from the EEG detector device.”; “accessing one or more databases for quality comparison; assessing the quality of exam data; and extracting or identifying patient stratification or other population analysis.”; “extracting or identifying features from the exam data; extracting or identifying patterns from the exam data; and extracting or identifying patient stratification or other population analysis.”; “extracting or identifying patterns from the exam data; creating a report summarizing the results of exam data analysis;”; “access: a database configured to store quality comparison information; a database configured to store safety signal identification information; a database configured to store efficacy signal identification information; and a database configured to store patient stratification and population analysis information; assess the quality of exam data; extracting or identifying patterns from the exam data; creating a report summarizing results of exam data analysis;” In claims 1, 15-16, 18 and 24, the above recited steps can be practically performed in the human mind, with the aid of a pen and paper or with a generic computer, in a computer environment, or merely using the generic computer as a tool to perform the steps. If a person were to visually examine, i.e., perform an observation, the EEG data, either in a printout or an electronic format, he/she would be able to perform the data analysis via pen and paper. He/she would further be able to, via visual examination, assess the quality of the data and perform a patient stratification or other population analysis. 
He/she would further be able to visually examine and extract or identify features/patterns from the exam data, create a report summarizing the results of exam data analysis, and annotate the data. There is nothing recited in the claims to suggest an undue level of complexity in how the EEG data is analyzed. Therefore, a person would be able to perform the analysis mentally or with a generic computer.
Prong Two: Claims 1, 15-16, 18 and 24 do not include additional elements that integrate the mental process into a practical application.
This judicial exception is not integrated into a practical application. In particular, the claims recite (1) “an EEG detector device comprising an array of sensors”
(2) “creating data visual guide elements that are displayed to a user within a computer interface; creating data annotations that are stored along with the underlying data in a non-transitory computer-readable medium; and configuring the head-mounted device to provide feedback to one or more users in the form of light, sound, or vibration.”
(3) “one or more processors and one or more computer-readable storage medium having instructions stored thereon”; “an analysis pipeline implemented by the one or more processor”; “database”; “a machine learning engine”
The element in (1) represents mere data gathering or pre-solution activity that is necessary for use of the recited judicial exception and is recited at a high level of generality using conventionally used tools (see Step 2B below for further details).
The steps in (2) represent mere notification outputting by a processor as post-solution activity and are recited at a high level of generality.
The elements in (3) merely recite generic computer components used as tools to implement the abstract idea.
As a whole, the additional elements merely serve to gather and feed information to the abstract idea and to output a notification based on the abstract idea, while generically implementing it on conventionally used tools. There is no practical application because the abstract idea is not applied, relied on, or used in a meaningful way. No improvement to the technology is evident, and the analyzed EEG information is not outputted in any way such that a practical benefit is realized. Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application.
Step 2B of the subject matter eligibility test (see MPEP 2106.05).
Claims 1, 15-16, 18 and 24 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claims recite the additional elements of (1) “an EEG detector device comprising an array of sensors”
(2) “creating data visual guide elements that are displayed to a user within a computer interface; creating data annotations that are stored along with the underlying data in a non-transitory computer-readable medium; and configuring the head-mounted device to provide feedback to one or more users in the form of light, sound, or vibration.”
(3) “one or more processors and one or more computer-readable storage medium having instructions stored thereon”; “an analysis pipeline implemented by the one or more processor”; “database”; “a machine learning engine”
These elements represent mere data gathering, data outputting, or pre/post/extra-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality.
The data is obtained from EEG sensors. These additional limitations merely represent insignificant pre-solution activities, as the recited sensors are well-understood, routine, and conventional in the field of EEG, as evidenced by Brunner (US 2017/0249434 A1) (“Brunner”). Brunner discloses the sensors as mapped below in the art rejections. Mere insignificant conventional extra-solution activity cannot provide an inventive concept.
The recited processors and computer-readable storage medium are generic computer elements (see Brunner, paras. [0043], [0050], “Further, it should be appreciated that certain aspects of the system may include a specially-programmed computer, but in some instances, may be a general purpose computer including standard hardware and operating systems, upon which various components described herein may be implemented.”, describing generic computers).
Therefore, none of Claims 1, 15-16, 18 and 24 amounts to significantly more than the abstract idea itself.
Accordingly, Claims 1, 15-16, 18 and 24 are not patent eligible and are rejected under 35 U.S.C. 101 as being directed to abstract ideas implemented on a generic computer in view of the Supreme Court decision in Alice Corporation Pty. Ltd. v. CLS Bank International, et al. and the 2019 PEG.
Dependent Claims
The following dependent claims merely further define the abstract idea and are, therefore, directed to an abstract idea for similar reasons:
The recitations of Claims 2-14, 17, 22 and 25-35 further limit the abstract idea and further define the mental process discussed above; further describe the extra-solution activities and, therefore, do not amount to significantly more than the judicial exception or integrate the abstract idea into a practical application for similar reasons; further define the sensors used for insignificant extra-solution activity (data collection), which sensors are well-understood, routine, and conventional, as evidenced by Brunner (US 2017/0249434 A1) (“Brunner”); merely recite data transmission to the output device discussed above as extra-solution activity (data output); or merely recite data storage and architecture, which are considered generic computer components.
Taken alone and in combination, the additional elements do not integrate the judicial exception into a practical application at least because the abstract idea is not applied, relied on, or used in a meaningful way. They also do not add anything significantly more than the abstract idea. Their collective functions merely provide computer/electronic implementation and processing, and no additional elements beyond those of the abstract idea. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or output device, or improves any technology or technical field beyond merely operating within the technical field of the claimed invention. Therefore, the claims are rejected as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-35 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Brunner (US 2017/0249434 A1) (“Brunner”).
Regarding claim 1, Brunner discloses An electroencephalography (EEG) processing system comprising (Abstract and entire document):
an EEG detector device comprising an array of sensors ([0035], [0118] discussing EEG sensors);
one or more processors; a computer-readable memory coupled to the one or more processors; and an analysis pipeline implemented by the one or more processor configured to ingest and process electrical signals from the EEG detector device ([0021], “a computer system comprising one or more processors and a memory” and [0020] – [0031] discussing receiving data and processing data see also FIG. 1 and associated paragraphs),
the one or more processors being configured to: access: a database configured to store quality comparison information; a database configured to store safety signal identification information; a database configured to store efficacy signal identification information; and a database configured to store patient stratification and population analysis information (FIG. 1 and [0047], “external databases 2” and [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.” And [0157]);
assess the quality of exam data (FIG. 1 and [0047], “external databases 2” and [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.” And [0157]);
extracting or identifying patterns from the exam data ([0053], [0085], [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.”);
creating a report summarizing results of exam data analysis ([0053], [0085], [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.”);
creating data visual guide elements that are displayed to a user within a computer interface ([0053], [0085], [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.” See also FIGS. 10-12 and associated paragraphs showing a display with visual guide elements (interpreted as arrows or other markers to help annotate the figures, as discussed in pages 11-12 of the present application)); and
creating data annotations that are stored along with underlying data in a non-transitory computer-readable medium ([0053], [0085], [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.”).
Regarding claim 2, Brunner discloses The EEG processing system of claim 1, further comprising one or more computer storage media configured to store one or more from the group comprising: the database configured to store quality comparison; the database configured to store safety signal identification; the database configured to store efficacy signal identification; and the database configured to store patient stratification and population analysis (FIG. 1 and [0047], “external databases 2” and [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.” And [0157]).
Regarding claim 3, Brunner discloses The EEG processing system of claim 1, wherein said one or more processors are further configured to perform operations comprising: extracting or identifying safety signals from the exam data; extracting or identifying efficacy signals from the exam data; and extracting or identifying patient stratification or other population analysis (FIG. 1 and [0047], “external databases 2” and [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.” And [0157]).
Regarding claim 4, Brunner discloses The EEG processing system of claim 1, wherein, responsive to input from a user, the one or more processors are further configured to perform operations comprising of one or more from the group comprising: capturing user-generated data annotations; modifying the data visual guide elements ([0053], [0085], [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.”).
Regarding claim 5, Brunner discloses The EEG processing system of claim 1, wherein the system implements one or more operations from the group comprising: notifying multiple human experts that interpretation is required; enabling said human experts to accept said request for interpretation; determining agreement among interpretations generated by human experts; identifying additional human experts in cases where said interpretation by initial experts are not in agreement; and identifying a consensus interpretation comprised of the interpretations of one or more human experts ([0053], [0085], [0123], ““Qualification”, “stratification” or “annotation” may refer to the addition of metadata that enables use of subject, contextual or environmental data as part of the analysis or that can be utilized to partition the dataset into smaller, more homogeneous subsets.”).
Regarding claim 6, Brunner discloses The EEG processing system of claim 1, wherein: the one or more processors are further configured to perform training of a machine learning or statistical model on data stored in a non-transitory computer-readable medium; the computer-readable memory is further configured to store model architecture, parameter values and any other variables used to implement said machine learning or statistical model ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output).
Regarding claim 7, Brunner discloses The EEG processing system of claim 6, wherein the one or more processors are further configured to implement the machine learning or statistical model on data collected by the array of sensors to generate an interpretation describing one or more of the group comprising: a safety signal; an efficacy signal identification; and a population-level stratification ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification.).
Regarding claim 8, Brunner discloses The EEG processing system of claim 1, wherein the one or more processors are further configured to create a report summarizing results of exam data analysis ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See [0047], “answer output module 18”, “The final optimal answer is available to the user, report generator, or storage through an Answer Output Module 18.”).
Regarding claim 9, Brunner discloses The EEG processing system of claim 1, wherein the one or more processors are further configured to report one or more metrics from the group comprising: a confidence or consensus estimate associated with annotated features; and attribution scores associated with annotated features ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification.).
Regarding claim 10, Brunner discloses The EEG processing system of claim 1, wherein the system implements one or more operations from the group comprising: random interpretation requests are sent to one or more additional human expert; performance of a human expert is evaluated against interpretations generated by said one or more additional human experts; and a quality score or performance metric is generated ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 11, Brunner discloses The EEG processing system of claim 9, wherein, responsive to the confidence or consensus estimate associated with the predictions of the machine learning or statistical model, the one or more processors are further configured to present a human user with data collected by the one or more sensors according to one or more requirements in the group comprising: a predetermined confidence or consensus threshold; a dynamic proportional population-level threshold; the presence of a predetermined class of features; the presence of a predetermined signature; and the assignment of the patient to a predetermined population-level strata ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 12, Brunner discloses The EEG processing system of claim 9, wherein, responsive to the confidence associated with the predictions of the machine learning or statistical model, the one or more processors are further configured to present a human user with visual guide elements configured to identify data from the sensor array that meet criteria for: a predetermined confidence or consensus threshold; a dynamic proportional population-level threshold; the presence of a predetermined class of features; the presence of a predetermined signature; and the assignment of the patient to a predetermined population-level strata ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 13, Brunner discloses The EEG processing system of claim 1, wherein, responsive to a collection of additional data from the one or more examination devices or sensors, the one or more processors are further configured to perform operations comprising of one or more from the group comprising: creating the data visual guide elements that are displayed to the user; and creating the data annotations that are stored along with the underlying data in a non- transitory computer-readable medium ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 14, Brunner discloses The EEG processing system of claim 1, wherein, responsive to input from a user, the one or more processors are further configured to perform operations comprising of one or more from the group comprising: capturing user-generated data annotations; modifying the data visual guide elements; and creating the data annotations that are stored along with the underlying data in a non- transitory computer-readable medium ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claims 15-16, Brunner discloses (the same rejections as applied to claims 1-3 apply to claims 15-16). And further, “using a machine learning model or statistical model trained on annotated EEG data to receive and process EEG signals to perform operations of: assessing the quality of exam data; extracting or identifying patient stratification or other population analysis; and generating an interpretation describing one or more of the group comprising a safety signal, an efficacy signal identification, and a population-level stratification.” ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output.)
Regarding claim 17, Brunner discloses (the same rejections as applied to claim 5 apply to claim 17).
Regarding claim 18, Brunner discloses (the same rejections as applied to claim 5 apply to claim 18) and also that the device is head-mounted ([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”).
Regarding claim 19, Brunner discloses The EEG processing system according to claim 1, wherein the EEG detector device is a wearable head-mounted device comprising: a plurality of sensors arranged at different locations, with each sensor configured to capture electrical signals from a portion of a body of an examinee ([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”); and
a data acquisition apparatus configured to process electrical signals from the sensors and wirelessly transmit said electrical signals to a receiver device, the receiver device being configured with one or more processors to receive and process data transmitted by the acquisition device ([0020 – 0021] discussing processing data, see also [0034] for wireless transmission and see [0157] for further data acquisition and processing).
Regarding claim 20, Brunner discloses The EEG processing system according to claim 19, wherein the wearable head-mounted device is configured to provide feedback to the subject in the form of light, sound, and/or vibration ([0210] discussing alarms or feedback to the user).
Regarding claim 21, Brunner discloses The EEG processing system according to claim 19, wherein the head-mounted device is further configured with one or more sensors from the group comprising: an oximeter; a temperature sensor; a gyroscope; a microphone or sound-level meter; an accelerometer; and a heart rate monitor ([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”).
Regarding claim 22, Brunner discloses The system according to claim 1, wherein the EEG detector device includes an accelerometer, gyroscope or other sensor capable of detecting position or movement, wherein the data collected by said sensor is supplied to a statistical machine learning model to improve the accuracy of interpretation ([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”).
Regarding claim 23, Brunner discloses An electroencephalography (EEG) detection system comprising(Abstract and entire document):
a wearable head-mounted device, comprising([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”):
a plurality of sensors arranged at different locations on a subject, with each sensor configured to capture EEG signals from the subject([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”); and
a data acquisition device configured to process electrical signals from the sensors and transmit said EEG signals to a receiver configured with one or more processors to receive and process data transmitted by the acquisition device([0020 – 0021] discussing processing data, see also [0034] for wireless transmission and see [0157] for further data acquisition and processing); and
a machine learning engine trained on annotated EEG data configured to receive and process the EEG signals and perform at least one of the group of operations comprising: automatically identify patterns within the received EEG signals; automatically annotate at least a portion of an EEG waveform; control a visual indicator to signal an examiner of the subject of a particular condition of the subject; indicate a quality of the EEG signals; and control a feedback generator to provide feedback to the subject in the form of at least one of the group comprising light, sound, and vibration([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0210] discussing alarms or feedback to the user).
Regarding claim 24, Brunner discloses A method comprising acts of(Abstract and entire document):
obtaining electroencephalography (EEG) signals associated with a group of subjects known to demonstrate specific EEG activity([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”);
training a statistical or machine learning model to perform an objective function in relation to a subject, including training the machine learning model on a plurality of signals including at least the electroencephalography (EEG) signals associated with the group of subjects known to demonstrate the specific EEG activity([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output.);
obtaining electroencephalography (EEG) signals associated with the subject ([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”); and
providing the EEG signals associated with the subject to the statistical or machine learning model to obtain an output ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output.).
Regarding claim 25, Brunner discloses The method according to claim 24, wherein the act of training the statistical or machine learning model further comprises an act of training the statistical or machine learning model to receive one or more user inputs ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0210] discussing alarms or feedback to the user).
Regarding claim 26, Brunner discloses The method according to claim 24, further comprising an act of receiving the one or more user inputs via a computer interface, and wherein the user inputs comprise one or more of: user-provided identification information of EEG signals including: annotations; visual guide elements; data relating to features of interest within EEG signals; corrections of features identified by the statistical or machine learning model ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 27, Brunner discloses The method according to claim 26, wherein the user inputs relate to EEG signals associated with the subject([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 28, Brunner discloses The method according to claim 24, wherein the output includes information to assist an expert user to deliver a clinically-relevant interpretation of the EEG signals associated with the subject([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 29, Brunner discloses The method according to claim 28, wherein the output information includes at least one of a group of information comprising: identification of a diagnostic, prognostic or risk stratification biomarkers; identification of a clinical condition of the subject; an indication of a detection of an adverse effect of an intervention; identification of an efficacy of the intervention; and identification whether the subject is more likely to respond to the intervention ([0009] discussing treatment responses, see also [0039 – 0040]).
Regarding claim 30, Brunner discloses The method according to claim 29, wherein the output is obtained by a system either in real time or asynchronously to receiving and processing the EEG signals associated with the subject by the statistical or machine learning model ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing real-time processing).
Regarding claim 31, Brunner discloses The method according to claim 24, wherein the act of obtaining EEG signals associated with the subject further comprises an act of collecting, with a wearable head-mounted device, EEG signals from a plurality of sensors arranged at different locations on the subject ([0158], “Gadgets that are carried by the subject can be attached to the clothing, skin, head, and other body parts, injected, ingested or tattooed. Data can be obtained using sensors built into the gadget (such as, but not restricted to, accelerometers and gyroscopes that are included in many wearable devices), sensors that can be added to the wearable device (such as but not restricted to EKG or cardiac monitor, cortisol and glucose skin sensors),”).
Regarding claim 32, Brunner discloses The method according to claim 28, wherein the output information is used to perform one or more operations comprising: determining an automated interpretation; determining a confidence interval or consensus score associated with one or more elements of said interpretation; delivering a fully automated report of interpretation elements where said automated interpretation was associated with a high confidence or consensus estimate; and relaying the primary EEG data to a human expert for interpretation of elements where the automated interpretation was low-confidence ([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Regarding claim 33, Brunner discloses The method according to claim 24, wherein the EEG signals associated with the group of subjects known to demonstrate specific EEG activity includes EEG signals of subjects known to demonstrate epileptiform EEG activity([0009] discussing known conditions and see also [0020 – 0032] and [0116]).
Regarding claim 34, Brunner discloses The method according to claim 24, wherein the EEG signals associated with the group of subjects known to demonstrate specific EEG activity includes EEG signals of subjects known to demonstrate normal EEG activity([0009] discussing known conditions and see also [0020 – 0032] and [0116]).
Regarding claim 35, Brunner discloses The method according to claim 28, wherein the output information includes at least one of a group of information generated by the statistical or machine learning model comprising: annotations within the EEG signals associated with the subject; visual guide elements relating to the EEG signals associated with the subject; data relating to features of interest within the EEG signals associated with the subject; and identification of features within the EEG signals associated with the subject([0053], [0068], [0090 – 0091], and FIG. 1 and associated paragraphs, the data is input into a machine learning model for training and output. See also [0157] discussing stratification. See further [0053] and [0178 – 0180] discussing expert annotation).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH A TOMBERS whose telephone number is (571)272-6851. The examiner can normally be reached on M-TH 7:00-16:00, F 7:00-11:00 (Eastern).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Chen can be reached on 571-272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.A.T./Examiner, Art Unit 3791
/TSE W CHEN/ Supervisory Patent Examiner, Art Unit 3791