DETAILED ACTION
Claims 1, 3-11, 14-18, 20-27, and 29-30 have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The invention, as recited in Claims 1, 3-11, 14-18, 20-27, and 29-30, is directed to “mental steps” and “mathematical steps” (i.e., the “mental processes” and “mathematical concepts” groupings of abstract ideas) without significantly more.
The claims recite:
• electroencephalogram (EEG) data measured from sensors (i.e., mathematical steps)
• environmental data describing a stimulus the user is exposed to concurrently with the measurement of the EEG data (i.e., mathematical steps)
• environmental data describing what the user is seeing concurrently with the measurement of the EEG data (i.e., mathematical or mental steps)
• at least one machine learning model (i.e., mathematical steps)
• training (i.e., mathematical steps)
• training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants (i.e., mathematical steps)
• determine an inference related to the neural activity (i.e., mental steps)
• the inference is the user’s selection of an item from a menu of options (i.e., mental steps)
Claim 1
Step 1 inquiry: Does this claim fall within a statutory category?
The preamble of the claim recites “1. A computer-implemented method for decoding neural activity, comprising…” The claim is therefore directed to a “method” (i.e., a “process”), which is a statutory category of invention. The answer to the inquiry is: “YES”.
Step 2A (Prong One) inquiry:
Are there limitations in Claim 1 that recite abstract ideas?
YES. The following limitations in Claim 1 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG, namely the “mental processes” and “mathematical concepts” groupings (referred to herein as “mental steps” and “mathematical steps”):
• electroencephalogram (EEG) data measured from sensors (i.e., mathematical steps)
• environmental data describing a stimulus the user is exposed to concurrently with the measurement of the EEG data (i.e., mathematical steps)
• environmental data describing what the user is seeing concurrently with the measurement of the EEG data (i.e., mathematical or mental steps)
• at least one machine learning model (i.e., mathematical steps)
• training (i.e., mathematical steps)
• training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants (i.e., mathematical steps)
• determine an inference related to the neural activity (i.e., mental steps)
• the inference is the user’s selection of an item from a menu of options (i.e., mental steps)
Step 2A (Prong Two) inquiry:
Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
Applicant’s claims contain the following “additional elements”:
(1) A "receiving" of "electroencephalogram (EEG) data"
(2) A "receiving" of "environmental data"
(3) An "inputting" of "the EEG data and the environmental data"
(4) An "altering" of "an operation of a computer program"
A "receiving" of "electroencephalogram (EEG) data" is a broadly recited limitation described at a high level of generality. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
This "receiving" of "electroencephalogram (EEG) data" limitation does not integrate the abstract idea into a practical application and represents “insignificant extra-solution activity”. (See M.P.E.P. § 2106.05(I)(A).)
A "receiving" of "environmental data" is likewise a broadly recited limitation described at a high level of generality. For the reasons given above under M.P.E.P. §§ 2106.05(d)(I)(2) and 2106.05(d)(II), merely using a conventional computer to receive data is well-understood, routine, and conventional activity, and it adds nothing significantly more to the judicial exception.
This "receiving" of "environmental data" limitation does not integrate the abstract idea into a practical application and represents “insignificant extra-solution activity”. (See M.P.E.P. § 2106.05(I)(A).)
An "inputting" of "the EEG data and the environmental data" is likewise a broadly recited limitation described at a high level of generality. For the reasons given above under M.P.E.P. §§ 2106.05(d)(I)(2) and 2106.05(d)(II), merely using a conventional computer to receive and input data is well-understood, routine, and conventional activity, and it adds nothing significantly more to the judicial exception.
This "inputting" of "the EEG data and the environmental data" limitation does not integrate the abstract idea into a practical application and represents “insignificant extra-solution activity”. (See M.P.E.P. § 2106.05(I)(A).)
An "altering" of "an operation of a computer program" is likewise a broadly recited limitation described at a high level of generality. For the reasons given above under M.P.E.P. § 2106.05(d)(I)(2), merely using a conventional computer to "alter an operation of a computer program" is well-understood, routine, and conventional activity, and it adds nothing significantly more to the judicial exception.
This "altering" of "an operation of a computer program" limitation does not integrate the abstract idea into a practical application and represents “insignificant extra-solution activity”. (See M.P.E.P. § 2106.05(I)(A).)
The answer to the inquiry is “NO”: no additional elements integrate the claimed abstract idea into a practical application.
Step 2B inquiry:
Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?
Applicant’s claims contain the following “additional elements”:
(1) A "receiving" of "electroencephalogram (EEG) data"
(2) A "receiving" of "environmental data"
(3) An "inputting" of "the EEG data and the environmental data"
(4) An "altering" of "an operation of a computer program"
A "receiving" of "electroencephalogram (EEG) data" is a broadly recited limitation described at a high level of generality. As discussed above under M.P.E.P. §§ 2106.05(d)(I)(2) and 2106.05(d)(II), merely using a conventional computer to receive data is well-understood, routine, and conventional activity, and it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).)
A "receiving" of "environmental data" is a broadly recited limitation described at a high level of generality. As discussed above under M.P.E.P. §§ 2106.05(d)(I)(2) and 2106.05(d)(II), merely using a conventional computer to receive data is well-understood, routine, and conventional activity, and it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).)
An "inputting" of "the EEG data and the environmental data" is a broadly recited limitation described at a high level of generality. As discussed above under M.P.E.P. §§ 2106.05(d)(I)(2) and 2106.05(d)(II), merely using a conventional computer to receive and input data is well-understood, routine, and conventional activity, and it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).)
An "altering" of "an operation of a computer program" is a broadly recited limitation described at a high level of generality. As discussed above under M.P.E.P. § 2106.05(d)(I)(2), merely using a conventional computer to "alter an operation of a computer program" is well-understood, routine, and conventional activity, and it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).)
Therefore, the answer to the inquiry is “NO”: no additional elements, alone or in combination, provide an inventive concept amounting to significantly more than the claimed abstract ideas.
Claim 1 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 2
Claim 2 recites:
2. The method of claim 1, wherein the environmental data describes what the user is seeing concurrently with the measurement of the EEG data.
Applicant’s Claim 2 merely recites pure mathematical measurement data. It does not integrate the abstract idea into a practical application, nor does it add anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 2 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 3
Claim 3 recites:
3. The method of claim 1, wherein the environmental data describes what the user is hearing concurrently with the measurement of the EEG data.
Applicant’s Claim 3 merely recites pure mathematical sound measurement data. It does not integrate the abstract idea into a practical application, nor does it add anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 3 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 4
Claim 4 recites:
4. The method of claim 1, wherein the environmental data describes natural language the user is exposed to concurrently with the measurement of the EEG data.
Applicant’s Claim 4 merely recites “environmental data describes natural language the user is exposed to…”, which is a mental step. It does not integrate the abstract idea into a practical application, nor does it add anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 4 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 5
Claim 5 recites:
5. The method of claim 1, further comprising:
(e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data,
wherein the inputting (c) comprises inputting the behavior data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional concurrently collected behavior data collected from the data collection participants.
Applicant’s Claim 5 merely recites the receipt of data. It does not integrate the abstract idea into a practical application, nor does it add anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 5 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 6
Claim 6 recites:
6. The method of claim 1, further comprising:
(f) receiving registration data describing information about the user that the user self-reported during registration of an account for the user,
wherein the inputting (c) comprises inputting the registration data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional registration data concurrently collected from the data collection participants.
Applicant’s Claim 6 merely recites the receipt of data. It does not integrate the abstract idea into a practical application, nor does it add anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 6 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 7
Claim 7 recites:
7. The method of claim 1, wherein the environmental data comprises visual data describing what the user is seeing concurrently with the measurement of the EEG data, and the at least one machine learning model comprises:
a neural activity encoder configured to receive the EEG data and generate a first output data based on the EEG data;
a visual encoder configured to receive the visual data and generate a second output data based on the environmental data; and
a multi-modal decoder configured to receive the first and second output data and generate the inference based on the first and second output data.
Applicant’s Claim 7 merely recites the receipt of data and mathematical operations performed using generic machine learning techniques. Applicant’s Specification recites the following:
[0051] Like diagram 100 in figure 1, brain 104 perceives world 102 and EEG information is captured as neural activity 106. Neural activity 106 is input into neural activity encoder 208. Neural activity encoder 208 accepts time series of EEG data captured. Neural activity encoder 208 encodes the EEG data X into f1(X). This is similar to visual encoder 204 and audio encoder 205. Neural activity encoder 208 is at least a portion of the machine learning algorithm. In an embodiment, neural activity encoder 208 may be at least a portion of a deep learning neural network, such as a convolutional neural networks (CNN) or recurrent neural networks (RNNs), such as long short-term memory (LSTM) and gated recurrent units (GRUs).
***
[0056] Visual encoder 204 is at least a portion of the machine learning algorithm. In an embodiment, visual encoder 204 may be at least a portion of a deep learning neural network. In one example, visual encoder 204 may be a transformer neural network. A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data. In other examples, visual encoder 204 may [be] a transformer, CNN, or RNN.
***
[0065] Multi-modal decoder 210 is at least a portion of the machine learning algorithm. In an embodiment, neural activity encoder 204 may be at least a portion of a deep learning neural network configured to conduct a multimodal determination. One example algorithm is a multi-input transformer, CNN, or RNN.
It does not integrate the abstract idea into a practical application, nor does it add anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 7 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
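For illustrative context only, the encoder/decoder data flow described in the quoted Specification paragraphs — a neural activity encoder producing f1(X), a modality-specific visual encoder, and a multi-modal decoder combining their outputs into an inference such as a menu selection — can be sketched as the following toy example. The function names, toy feature computations, and sample data below are hypothetical stand-ins, not Applicant’s claimed CNN/RNN/transformer implementation:

```python
# Hypothetical sketch of the multi-encoder data flow in Specification
# paragraphs [0051], [0056], and [0065]. The simple arithmetic "encoders"
# stand in for the deep learning components named in the Specification.

def neural_activity_encoder(eeg_series):
    # f1(X): reduce a time series of EEG samples to a feature tuple
    # (toy features: mean value and peak-to-peak range).
    return (sum(eeg_series) / len(eeg_series),
            max(eeg_series) - min(eeg_series))

def visual_encoder(visual_data):
    # Encode what the user is seeing (toy: count of visible elements).
    return (float(len(visual_data)),)

def multi_modal_decoder(first_output, second_output, menu):
    # Combine the first and second output data and map the result to an
    # inference: the user's selection of an item from a menu of options.
    score = sum(first_output) + sum(second_output)
    return menu[int(score) % len(menu)]

eeg = [0.1, 0.4, 0.35, 0.2]            # EEG data measured from sensors
scene = ["menu", "cursor", "icon"]     # concurrently collected visual data
menu = ["open", "close", "save"]       # menu of options
inference = multi_modal_decoder(neural_activity_encoder(eeg),
                                visual_encoder(scene), menu)
print(inference)
```

The only point of the sketch is the claimed data flow: two modality-specific encodings are generated independently and combined by a single decoder to produce the inference.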
Claim 8
Claim 8 recites:
8. The method of claim 7, wherein the environmental data comprises audio data describing what the user is hearing concurrently, and the at least one machine learning model further comprises:
an audio encoder configured to receive the audio data and generate a third output data based on the environmental data, wherein the multi-modal decoder is further configured to receive the third output data and generate the inference based on the third output data.
Applicant’s Claim 8 merely recites the receipt of data and mathematical operations performed using generic machine learning techniques. Applicant’s Specification recites the following:
[0057] In addition to visual stimuli, other stimuli may be input into the machine learning algorithm illustrated in figure 2. In particular, audio stimuli is input as illustrated by YA. Similar to visual encoder 204, audio encoder 205 is at least a portion of the machine learning algorithm. In an embodiment, audio encoder 205 may be at least a portion of a deep learning neural network, such as a transformer, CNN, or RNN.
It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 8 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 9
Claim 9 recites:
9. The method of claim 8, further comprising:
(g) receiving registration data describing information about the user that the user self-reported during registration of an account for the user, and the at least one machine learning model further comprises:
a subject metadata encoder configured to receive the registration data and to generate a fourth output data based on the registration data, wherein the multi-modal decoder is further configured to receive the fourth output data and generate the inference based on the fourth output data.
Applicant’s Claim 9 merely teaches the receipt of data and mathematical operations on data using generic machine learning techniques. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 9 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 10
Claim 10 recites:
10. The method of claim 7, further comprising:
(e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data, and the at least one machine learning model further comprises:
a behavior encoder configured to receive the behavior data and generate a fifth output data based on the behavior data, wherein the multi-modal decoder is further configured to receive the fifth output data and generate the inference based on the fifth output data.
Applicant’s Claim 10 merely teaches the receipt of data and mathematical operations on data using generic machine learning techniques.
[0076] As with visual encoder 204 and audio encoder 205, subject metadata encoder 306 and behavior encoder 308 may be at least a portion of a machine learning model.
It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 10 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 11
Claim 11 recites:
11. The method of claim 10, wherein the neural activity encoder, visual encoder, audio encoder, subject metadata encoder, behavior encoder, and multi-modal decoder each comprise at least a portion of a deep learning network, further comprising:
(h) backpropagating the training data set through the neural activity encoder, visual encoder, audio encoder, subject metadata encoder, behavior encoder, and multi- modal decoder together to train the at least one machine learning model.
Applicant’s Claim 11 merely teaches mathematical backpropagation (i.e., training). It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 11 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 12
Claim 12 recites:
12. The method of claim 1, wherein the inference is the user’s selection of an item from a menu of options.
Applicant’s Claim 12 merely teaches a user’s selection, which is a mental step. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 12 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 13
Claim 13 recites:
13. The method of claim 1, wherein the inference is authentication of the user’s identity.
Applicant’s Claim 13 merely teaches an authentication of identity, which is a mental step. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 13 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 14
Claim 14 recites:
14. The method of claim 1, wherein the sensors are affixed to a headset presented to the user.
Applicant’s Claim 14 merely teaches a well-understood placement (on the skin of the head) of conventional voltage sensors.
[0090] Figures 7A-D each illustrate a head mounted device with EEG sensors which allows performance of a neural decoding procedure. Figure 7A illustrates a mixed reality device 700 including a front display portion 708 and a strap 706. Front display portion 708 and strap 706 can include EEG sensors 702 and 704.
***
[0093] EEG sensors 704 include electrodes sitting on the user’s head, in particular the user’s forehead, side, and back of the head. The electrodes can be configured to extend through the hair. For example, EEG sensors 704 may be comb electrodes. They may have depth sufficient to go through hair like a comb and touch the scalp. EEG sensors 704 sit on the forehead or the temples where the user typically lacks much hair. The electrodes measure voltage changes on the skin. EEG sensors 704 each may include a small chip that has some electronics, such as an analog-to-digital converter, that connects to the bus of the system.
[0094] The voltages measured on the head by EEG sensors 704 emanate from electrical dipoles resulting from the brain’s electrical activity. EEG sensors 704 are configured to detect signals from the brain of the user. They may also detect other signals from other biosignal sources like the muscles in the user’s face. For example, users have large jaw muscles, which are activated when a subject makes facial gestures, speaks, or chews. The activation of these muscles changes the electrical field, which can be measured by the electrodes on the subject’s head. In another example, the user has eye muscles, and the eye itself can be modeled as an electrical dipole. As the eye rotates, this electrical dipole changes the electrical field measured by the EEG sensors 704.
[0095] In addition to mixed reality device 700 in figure 7A, EEG sensors may be integrated into the arms of smart or augmented reality glasses 750 in figure 7B. In contrast to mixed reality device 700, augmented reality glasses 750 are designed to fit as much capability as possible into a form factor and weight that are typical of conventional glasses and sunglasses. A computer with components described below with respect to figure 21 may be located inside the arms of these glasses. EEG sensors may be located within an arm 756 on each side of augmented reality glasses 750, or may be located with a strap that runs behind or around the user’s head and attaches to the respective arms of augmented reality glasses 750.
[0096] In addition to head mounted devices, EEG sensors may be integrated into a band or strap that goes on the forehead, back of the head, crown of the head, or completely surrounding the head as illustrated by head strap 760 in figure 7C.
[0097] In addition to the above, EEG sensors may be integrated into headphones. EEG sensors may be integrated into the soft cup and upper band of over-the-ear headphones 770 in figure 7D, or located within or around the ear by integration with earbud headphones.
It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 14 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 15
Claim 15 recites:
15. The method of claim 1, wherein the stimulus comprises a menu including a plurality of options to control the computer program and wherein the inference is a selection by the user of an item from the menu of options.
Applicant’s Claim 15 merely teaches mental steps (i.e., the user’s selection of an option in a menu) practiced on a computer. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 15 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 16
Claim 16 recites:
16. The method of claim 15, wherein each of the plurality of options varies in presentation according to a specific temporal profile.
Applicant’s Claim 16 merely teaches mental steps (i.e., the user’s selection of an option in a menu) practiced on a computer. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 16 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 17
Claim 17 recites:
17. The method of claim 16, wherein the specific temporal profile is different for each of the plurality of options.
Applicant’s Claim 17 merely teaches a collection of data (i.e., the temporal profile). It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 17 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 18
Step 1 inquiry: Does this claim fall within a statutory category?
The preamble of the claim recites “18. A non-transitory, tangible computer-readable device having instructions stored thereon that, when executed by at least one computing device, causes the at least one computing device to perform operations decoding neural activity, the operations comprising…” Therefore, it is a “non-transitory computer-readable device” (or “product of manufacture”), which is a statutory category of invention. Therefore, the answer to the inquiry is: “YES”.
Step 2A (Prong One) inquiry:
Are there limitations in Claim 18 that recite abstract ideas?
YES. The following limitations in Claim 18 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:
• electroencephalogram (EEG) data measured from sensors (i.e., mathematical steps)
• environmental data describing stimulus the user is exposed to concurrently with the measurement of the EEG data (i.e., mathematical steps)
• environmental data describes what the user is seeing concurrently with the measurement of the EEG data (i.e., mathematical and mental steps)
• at least one machine learning model (i.e., mathematical steps)
• training (i.e., mathematical steps)
• training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants (i.e., mathematical steps)
• determine an inference related to the neural activity (i.e., mental steps)
• the inference is the user’s selection of an item from a menu of options (i.e., mental steps)
Step 2A (Prong Two) inquiry:
Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
Applicant’s claims contain the following “additional elements”:
(1) A "receiving" of "electroencephalogram (EEG) data"
(2) A "receiving" of "environmental data"
(3) An "inputting" of "the EEG data and the environmental data"
(4) An "altering" of "an operation of a computer program"
A "receiving" of "electroencephalogram (EEG) data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "receiving" of "electroencephalogram (EEG) data" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
A "receiving" of "environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "receiving" of "environmental data" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
An "inputting" of "the EEG data and the environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "inputting" of "the EEG data and the environmental data" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
An "altering" of "an operation of a computer program" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Merely using a conventional computer to "alter an operation of a computer program" is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "altering" of "an operation of a computer program" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
The answer to the inquiry is “NO”, no additional elements integrate the claimed abstract idea into a practical application.
Step 2B inquiry:
Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?
Applicant’s claims contain the following “additional elements”:
(1) A "receiving" of "electroencephalogram (EEG) data"
(2) A "receiving" of "environmental data"
(3) An "inputting" of "the EEG data and the environmental data"
(4) An "altering" of "an operation of a computer program"
A "receiving" of "electroencephalogram (EEG) data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);…
Merely using a conventional computer to receive data is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
A "receiving" of "environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
An "inputting" of "the EEG data and the environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
An "altering" of "an operation of a computer program" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Merely using a conventional computer to "alter an operation of a computer program" is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
Therefore, the answer to the inquiry is “NO”, no additional elements provide an inventive concept that is significantly more than the claimed abstract ideas.
Claim 18 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 19
Claim 19 recites:
19. The device of claim 18, wherein the environmental data describes what the user is seeing concurrently with the measurement of the EEG data.
Applicant’s Claim 19 merely recites pure mathematical measurement data. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 19 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 20
Claim 20 recites:
20. The device of claim 18, wherein the environmental data describes what the user is hearing concurrently with the measurement of the EEG data.
Applicant’s Claim 20 merely recites pure mathematical sound measurement data. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 20 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 21
Claim 21 recites:
21. The device of claim 18, the operations further comprising:
(e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data,
wherein the inputting (c) comprises inputting the behavior data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional concurrently collected behavior data collected from the data collection participants.
Applicant’s Claim 21 merely recites the receipt of data. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 21 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 22
Claim 22 recites:
22. The device of claim 18, the operations further comprising:
(f) receiving registration data describing information about the user that the user self-reported during registration of an account for the user,
wherein the inputting (c) comprises inputting the registration data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional registration data concurrently collected from the data collection participants.
Applicant’s Claim 22 merely recites the receipt of data. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 22 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 23
Claim 23 recites:
23. The device of claim 18, wherein the environmental data comprises visual data describing what the user is seeing concurrently with the measurement of the EEG data, and the at least one machine learning model comprises:
a neural activity encoder configured to receive the EEG data and generate a first output data based on the EEG data;
a visual encoder configured to receive the visual data and generate a second output data based on the environmental data; and
a multi-modal decoder configured to receive the first and second output data and generate the inference based on the first and second output data.
Applicant’s Claim 23 merely recites pure mathematical calculations. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 23 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 24
Claim 24 recites:
24. The device of claim 23, wherein the environmental data comprises audio data describing what the user is hearing concurrently, and the at least one machine learning model further comprises:
an audio encoder configured to receive the audio data and generate a third output data based on the environmental data, wherein the multi-modal decoder is further configured to receive the third output data and generate the inference based on the third output data.
Applicant’s Claim 24 merely recites the receipt of data and the mathematical operation of calculating data using generic machine learning techniques. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 24 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 25
Claim 25 recites:
25. The device of claim 24, the operations further comprising:
(g) receiving registration data describing information about the user that the user self-reported during registration of an account for the user, and the at least one machine learning model further comprises:
a subject metadata encoder configured to receive the registration data and to generate a fourth output data based on the registration data
the multi-modal decoder is further configured to receive the fourth output data and generate the inference based on the fourth output data.
Applicant’s Claim 25 merely recites the receipt of data and the mathematical operation of calculating data using generic machine learning techniques. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 25 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 26
Claim 26 recites:
26. The device of claim 25, the operations further comprising:
(e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data, and the at least one machine learning model further comprises:
a behavior encoder configured to receive the behavior data and generate a fifth output data based on the behavior data, wherein the multi-modal decoder is further configured to receive the fifth output data and generate the inference based on the fifth output data.
Applicant’s Claim 26 merely recites the receipt of data and the mathematical operation of calculating data using generic machine learning techniques. Applicant’s specification confirms the generic nature of these encoders, reciting in paragraph [0076]:
[0076] As with visual encoder 204 and audio encoder 205, subject metadata encoder 306 and behavior encoder 308 may be at least a portion of a machine learning model.
Claim 26 does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 26 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 27
Claim 27 recites:
27. The device of claim 25, wherein the neural activity encoder, visual encoder, audio encoder, subject metadata encoder, behavior encoder, and multi-modal decoder each comprise at least a portion of a deep learning network, the operations further comprising:
(h) backpropagating the training data set through the neural activity encoder, visual encoder, audio encoder, subject metadata encoder, behavior encoder, and multi-modal decoder together to train the at least one machine learning model.
Applicant’s Claim 27 merely recites mathematical backpropagation (i.e., training). It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 27 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 28
Claim 28 recites:
28. The device of claim 18, wherein the inference is the user’s selection of an item from a menu of options.
Applicant’s Claim 28 merely recites a user’s selection, which is a mental step. It does not integrate the abstract idea into a practical application, nor does it amount to significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II)).
Claim 28 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 29
Step 1 inquiry: Does this claim fall within a statutory category?
The preamble of the claim recites “29. A headset for decoding neural activity, comprising…” Therefore, it is a “machine” (or “manufacture”), which is a statutory category of invention. Therefore, the answer to the inquiry is: “YES”.
Step 2A (Prong One) inquiry:
Are there limitations in Claim 29 that recite abstract ideas?
YES. The following limitations in Claim 29 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:
• at least one machine learning model (i.e., mathematical steps)
• training (i.e., mathematical steps)
• training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants (i.e., mathematical steps)
• determine an inference related to the neural activity (i.e., mental steps)
• the inference is the user’s selection of an item from a menu of options (i.e., mental steps)
Step 2A (Prong Two) inquiry:
Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
Applicant’s claims contain the following “additional elements”:
(1) A "sensor"
(2) A "processor"
(3) A "memory"
(4) A "receiving" of "environmental data"
(5) An "inputting" of "the EEG data and the environmental data"
(6) A "controlling" of "a computer program"
A “sensor” is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to sense or receive data is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
This “sensor” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
A “processor” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
This “processor” limitation does not integrate the additional element into a practical application and amounts to no more than mere instructions to apply the judicial exception on a generic computer. (See, M.P.E.P. § 2106.05(f)).
A “memory” is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
***
iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93;
This “memory” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
A "receiving" of "environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
This "receiving" of "environmental data" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
An "inputting" of "the EEG data and the environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
This "inputting" of "the EEG data and the environmental data" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
A "controlling" of "a computer program" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Merely using a conventional computer to "control a computer program" is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
This "controlling" of "a computer program" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
The answer to the inquiry is “NO”, no additional elements integrate the claimed abstract idea into a practical application.
Step 2B inquiry:
Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?
Applicant’s claims contain the following “additional elements”:
(1) A "sensor"
(2) A "processor"
(3) A "memory"
(4) A "receiving" of "environmental data"
(5) An "inputting" of "the EEG data and the environmental data"
(6) A "controlling" of "a computer program"
A “sensor” is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to sense or receive data is well-understood, routine, and conventional activity. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
A “processor” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
A “memory” is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
***
iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93;
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
A "receiving" of "environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using the conventional computer to receive data is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
An "inputting" of "the EEG data and the environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using the conventional computer to receive data is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
A "controlling" of "a computer program" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Merely using the conventional computer to "control a computer program" is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
Therefore, the answer to the inquiry is “NO”, no additional elements provide an inventive concept that is significantly more than the claimed abstract ideas.
Claim 29 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 30
Step 1 inquiry: Does this claim fall within a statutory category?
The preamble of the claim recites “30. A computer-implemented method for decoding neural activity, comprising…” Therefore, it is a “method” (or “process”), which is a statutory category of invention. Therefore, the answer to the inquiry is: “YES”.
Step 2A (Prong One) inquiry:
Are there limitations in Claim 30 that recite abstract ideas?
YES. The following limitations in Claim 30 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:
• neural data and the environmental data (i.e., mathematical steps)
• at least one machine learning model (i.e., mathematical steps)
• training (i.e., mathematical steps)
• training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants (i.e., mathematical steps)
• determine an inference related to the neural activity (i.e., mental steps)
• the inference is the user’s selection of an item from a menu of options (i.e., mental steps)
Step 2A (Prong Two) inquiry:
Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
Applicant’s claims contain the following “additional elements”:
(1) A "receiving" of "neural data"
(2) A "receiving" of "environmental data"
(3) An "altering" of "an operation of a computer program"
A "receiving" of "neural data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using the conventional computer to receive data is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "receiving" of "neural data" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
A "receiving" of "environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using the conventional computer to receive data is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "receiving" of "environmental data" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
An "altering" of "an operation of a computer program" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Merely using the conventional computer to "alter an operation of a computer program" is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "altering" of "an operation of a computer program" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
The answer to the inquiry is “NO”, no additional elements integrate the claimed abstract idea into a practical application.
Step 2B inquiry:
Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?
Applicant’s claims contain the following “additional elements”:
(1) A "receiving" of "neural data"
(2) A "receiving" of "environmental data"
(3) An "altering" of "an operation of a computer program"
A "receiving" of "neural data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using the conventional computer to receive data is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
A "receiving" of "environmental data" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using the conventional computer to receive data is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
An "altering" of "an operation of a computer program" is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Merely using the conventional computer to "alter an operation of a computer program" is well known, understood, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
Therefore, the answer to the inquiry is “NO”, no additional elements provide an inventive concept that is significantly more than the claimed abstract ideas.
Claim 30 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim Rejections - 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-11, 14, 18, 20-27, and 29-30 are rejected under 35 U.S.C. § 103 as being unpatentable over Koctúrová, et al., A Novel Approach to EEG Speech Activity Detection with Visual Stimuli and Mobile BCI, Applied Sciences 2021, 11, 674, 12 JAN 2021, pp. 1-12, in view of Zhang, et al., A Calibration-Free Hybrid BCI Speller System Based on High-Frequency SSVEP and sEMG, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 31, pp. 3492-3500, 04 SEP 2023, in their entireties. Specifically:
Claim 1
Claim 1's ''(a) receiving electroencephalogram (EEG) data measured from sensors attached to or near a user's head;'' is taught by Koctúrová, et al., page 4, Figure 1, where it shows sensor placement on the head.
Claim 1's ''(b) receiving environmental data describing stimulus the user is exposed to concurrently with the measurement of the EEG data'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals recorded using the graphical user interface OpenBCI, and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post processing. To solve is was created a simple script which, after starting the audio recording, automatically stored the Unix time of record start. This was used to synchronize an audio with EEG in post-processing.
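For illustration only (forming no part of the record or of the cited reference's disclosure), the synchronization scheme the quoted passage describes — storing the Unix time at which the audio recording starts, and aligning it against the per-sample Unix times stored in the EEG text file — can be sketched as follows; the sampling rate and timestamp values are hypothetical:

```python
# Illustrative sketch (hypothetical values) of aligning an EEG stream whose
# samples carry Unix timestamps with an audio recording whose Unix start
# time was stored by a small launcher script, as the quoted passage describes.
audio_start_ms = 1_600_000_000_000                      # Unix time (ms) at audio start
eeg_sample_ms = [1_599_999_999_992 + i * 4 for i in range(10)]  # 250 Hz -> 4 ms/sample

# Keep only EEG samples recorded at or after the audio start, expressing each
# retained sample's time as an offset into the audio recording.
aligned = [(i, t - audio_start_ms)
           for i, t in enumerate(eeg_sample_ms)
           if t >= audio_start_ms]

first_index, first_offset_ms = aligned[0]
```

Under these hypothetical values, the first two EEG samples precede the audio start and are dropped, and the remaining samples are expressed relative to the audio timeline.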
Claim 1's ''wherein the environmental data describes what the user is seeing concurrently with the measurement of the EEG data'' is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A where it shows a visual menu of letters and a SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious for one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 1's ''(c) inputting the EEG data and the environmental data into at least one machine learning model to determine an inference related to the neural activity, the at least one machine learning model trained using a training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants; and'' is taught by Koctúrová, et al., page 5, first two full paragraphs, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals recorded using the graphical user interface OpenBCI, and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post processing. To solve is was created a simple script which, after starting the audio recording, automatically stored the Unix time of record start. This was used to synchronize an audio with EEG in post-processing.
For the experiment, 9 basic features was assembled based on which the models in the shallow Feed-Forward Artificial Neural Network was created. The important part of the signal processing was the conversion of the signal into minimum-phase signal. The assigned features were calculated for the raw EEG signal as well as for the minimum-phase signal. In summary, for the total of 16 recorded EEG channels in the experiment a calculation was completed for each channel identifying 9 features from the raw signal and 9 features for the minimum-phase signal. All input features composed an input vector of size 288.
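For clarity (an illustrative aside only, not part of the cited disclosure), the arithmetic in the quoted passage — 16 channels, each contributing 9 features from the raw signal and 9 from the minimum-phase signal — yields the stated 288-element input vector:

```python
# Illustrative arithmetic for the input vector size described in the quote:
# 16 EEG channels x (9 raw-signal features + 9 minimum-phase features) = 288.
n_channels = 16
n_raw_features = 9
n_min_phase_features = 9

input_vector_size = n_channels * (n_raw_features + n_min_phase_features)
```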
Further, the training/inference is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
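For illustration only (forming no part of the record; all dimensions, data, and hyperparameters are hypothetical), the training scheme the quoted passage describes — a shallow feed-forward network whose randomly initialized weights produce erroneous output at first and are then optimized by error backpropagation against manually labeled classes — can be sketched as:

```python
import numpy as np

# Illustrative sketch of backpropagation training for a shallow feed-forward
# network, as the quoted passage describes. The "preprocessed EEG" features
# and binary labels below are synthetic stand-ins, not data from the reference.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                          # toy feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]    # toy binary labels

# Random initialization, which initially yields erroneous output.
W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    # Forward pass: inputs transformed by the weights into a class prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Error backpropagation (squared-error gradient through the sigmoids).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Weight updates that reduce the output error.
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
accuracy = float(((out > 0.5) == (y > 0.5)).mean())
```

After training, the output error on this linearly separable toy task drops substantially from its randomly initialized starting point.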
Claim 1's ''the inference is the user’s selection of an item from a menu of options” is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A where it shows a visual menu of letters and a SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious for one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 1's ''(d) based on the inference, altering an operation of a computer program.'' is taught by Koctúrová, et al., Page 9, last full paragraph, where it recites:
In this part we selected data sets from combination of 3 subjects which were divided into 80% training set and 20% validation set. The created model was tested (i.e., “altering an operation of a computer program”) on the data set from the 4th subject.
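For illustration only (forming no part of the record; the subject labels and counts are hypothetical), the evaluation protocol the quoted passage describes — pooling data from three subjects, dividing the pool 80%/20% into training and validation sets, and testing on a held-out fourth subject — can be sketched as:

```python
import random

# Illustrative sketch of the quoted evaluation protocol: data from three
# subjects is pooled and split 80% / 20% into training and validation sets,
# while the fourth subject's data is held out entirely as the test set.
samples = [(f"s{subj}", i) for subj in (1, 2, 3, 4) for i in range(100)]

test_set = [s for s in samples if s[0] == "s4"]   # 4th subject, held out
pool = [s for s in samples if s[0] != "s4"]       # subjects 1-3, pooled

random.seed(0)
random.shuffle(pool)
cut = int(0.8 * len(pool))
train_set, val_set = pool[:cut], pool[cut:]
```

No sample from the held-out subject appears in either the training or validation set, so the test measures cross-subject generalization.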
Claim 3
Claim 3's ''The method of claim 1, wherein the environmental data describes what the user is hearing concurrently with the measurement of the EEG data.'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals recorded using the graphical user interface OpenBCI, and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post processing. To solve is was created a simple script which, after starting the audio recording, automatically stored the Unix time of record start. This was used to synchronize an audio with EEG in post-processing.
Claim 4
Claim 4's ''The method of claim 1, wherein the environmental data describes natural language the user is exposed to concurrently with the measurement of the EEG data.'' is taught by Koctúrová, et al., Page 3, first full paragraph, where it recites:
As mentioned, we expected increased activity of the two main areas of the brain during EEG signal acquisition, which are the speech area and the visual area. During EEG recording the subjects were presented with both images showing specific colours and the text forms of individual colours. By presenting the subjects with these images, their visual and imaginary parts of the brain became activated. The important fact to realize is that the brain areas which control what we see and what we can imagine are the same. When communicating, we often involve these visual areas in addition to speech areas. The relationship between imagination and vision is very close and hence these two actions elicit similar signals. The main difference between visual and mental images lies in the communication of the cerebral pathway, which, starting from the eye, leads to the primary visual cortex [11,12]. The study in Reference [13] assumes that image recognition and image naming involves the activity of occipitotemporal and prefrontal areas.
Claim 5
Claim 5's ''The method of claim 1, further comprising: (e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data,'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together (i.e., “behavior data”… the user was listening and producing EEG responses). The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 5's ''wherein the inputting (c) comprises inputting the behavior data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional concurrently collected behavior data collected from the data collection participants.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
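The training procedure in the quoted passage — random weight initialization producing erroneous initial output, followed by error backpropagation that adjusts the weights to reduce the output error — can be sketched with a small feed-forward network. This is a minimal sketch on toy data, assuming a two-layer sigmoid network; it is not the reference's actual model or features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 20 feature vectors with binary speech (1) / non-speech (0)
# labels; the reference's real inputs are preprocessed EEG features.
X = rng.normal(size=(20, 8))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

# Randomly initialized weights, so the first predictions are erroneous,
# as the quoted passage notes.
W1 = rng.normal(scale=0.5, size=(8, 6))
W2 = rng.normal(scale=0.5, size=(6, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    hidden = sigmoid(X @ W1)
    return hidden, sigmoid(hidden @ W2)

mse_before = float(((forward(X)[1] - y) ** 2).mean())

# Error backpropagation: push the output error back through the layers
# and step the weights downhill to reduce it.
for _ in range(500):
    hidden, out = forward(X)
    delta_out = (out - y) * out * (1 - out)                 # output-layer error signal
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ delta_out)
    W1 -= 0.5 * (X.T @ delta_hidden)

mse_after = float(((forward(X)[1] - y) ** 2).mean())
```

After training, the mean squared output error is lower than at random initialization, which is the behavior the quoted passage describes.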
Claim 6
Claim 6's ''The method of claim 1, further comprising: (f) receiving registration data describing information about the user that the user self-reported during registration of an account for the user,'' is taught by Koctúrová, et al., page 4, second full paragraph, where it recites:
The EEG signals were recorded on four healthy, right-handed native Slovak speaking subjects. They signed an informed consent form in which they were acquainted with the purpose of the experiment and with the use and management of the personal data. The research was carried out in accordance with the Code of Ethics for employees of the Technical University in Košice.
Claim 6's ''wherein the inputting (c) comprises inputting the registration data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional registration data concurrently collected from the data collection participants.'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio (i.e., Slovak speech) and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 7
Claim 7's ''a neural activity encoder configured to receive the EEG data and generate a first output data based on the EEG data;'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Further, the labels are taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 7's ''a visual encoder configured to receive the visual data and generate a second output data based on the environmental data; and'' is taught by Koctúrová, et al., Page 3, first full paragraph, where it recites:
As mentioned, we expected increased activity of the two main areas of the brain during EEG signal acquisition, which are the speech area and the visual area. During EEG recording the subjects were presented with both images showing specific colours and the text forms of individual colours. By presenting the subjects with these images, their visual and imaginary parts of the brain became activated. The important fact to realize is that the brain areas which control what we see and what we can imagine are the same. When communicating, we often involve these visual areas in addition to speech areas. The relationship between imagination and vision is very close and hence these two actions elicit similar signals. The main difference between visual and mental images lies in the communication of the cerebral pathway, which, starting from the eye, leads to the primary visual cortex [11,12]. The study in Reference [13] assumes that image recognition and image naming involves the activity of occipitotemporal and prefrontal areas.
Claim 7's ''a multi-modal decoder configured to receive the first and second output data and generate the inference based on the first and second output data.'' is taught by Koctúrová, et al., Page 4, third full paragraph, where it recites:
During the experiment the subjects followed an experimental protocol. Their task was to sit motionlessly in a comfortable position while focusing on the screen in front of them and processing the word pronunciation. During the EEG signal recording the audio signal was also gathered, which was used to create the speech labels.
Further, the inference generation is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 8
Claim 8's ''an audio encoder configured to receive the audio data and generate a third output data based on the environmental data, wherein the multi-modal decoder is further configured to receive the third output data and generate the inference based on the third output data.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 9
Claim 9's ''a subject metadata encoder configured to receive the registration data and'' is taught by Koctúrová, et al., page 4, second full paragraph, where it recites:
The EEG signals were recorded on four healthy, right-handed native Slovak speaking subjects. They signed an informed consent form in which they were acquainted with the purpose of the experiment and with the use and management of the personal data. The research was carried out in accordance with the Code of Ethics for employees of the Technical University in Košice.
Further, the encoding is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio (i.e., Slovak speech) and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 9's ''to generate a fourth output data based on the registration data, wherein the multi-modal decoder is further configured to receive the fourth output data and generate the inference based on the fourth output data.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 10
Claim 10's ''(e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data, and the at least one machine learning model further comprises:'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together (i.e., “behavior data”… the user was listening and producing EEG responses). The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 10's ''a behavior encoder configured to receive the behavior data and generate a fifth output data based on the behavior data, wherein the multi-modal decoder is further configured to receive the fifth output data and generate the inference based on the fifth output data.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 11
Claim 11's ''(h) backpropagating the training data set through the neural activity encoder, visual encoder, audio encoder, subject metadata encoder, behavior encoder, and multi- modal decoder together to train the at least one machine learning model.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 14
Claim 14's ''The method of claim 1, wherein the sensors are affixed to a headset presented to the user.'' is taught by Koctúrová, et al., page 2, first partial paragraph, where it recites:
In the work, the 4-channel Muse EEG headset and a 128 channel EGI device were used.
Claim 18
Claim 18's ''(a) receiving electroencephalogram (EEG) data measured from sensors attached to or near a user’s head;'' is taught by Koctúrová, et al., page 4, Figure 1, where it shows sensor placement on the head.
Claim 18's ''(b) receiving environmental data describing stimulus the user is exposed to concurrently with the measurement of the EEG data;'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 18's ''wherein the environmental data describes what the user is seeing concurrently with the measurement of the EEG data'' is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A, which show a visual menu of letters and an SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious to one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 18's ''(c) inputting the EEG data and the environmental data into at least one machine learning model to determine an inference related to the neural activity, the at least one machine learning model trained using a training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants; and'' is taught by Koctúrová, et al., page 5, first two full paragraphs, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
For the experiment, 9 basic features were assembled, based on which the models in the shallow Feed-Forward Artificial Neural Network were created. An important part of the signal processing was the conversion of the signal into a minimum-phase signal. The assigned features were calculated for the raw EEG signal as well as for the minimum-phase signal. In summary, for the total of 16 recorded EEG channels in the experiment, a calculation was completed for each channel, identifying 9 features from the raw signal and 9 features for the minimum-phase signal. All input features composed an input vector of size 288.
Further, the training/inference is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
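The input-vector assembly quoted above for Claim 18's limitation (c) — 9 features per channel, computed for both the raw and the minimum-phase signal across 16 channels, yielding a 288-element vector — can be sketched as follows. The specific features here are placeholders (the reference's exact feature definitions are not reproduced), so this is a hypothetical illustration of the bookkeeping only:

```python
import numpy as np

N_CHANNELS = 16
N_FEATURES = 9  # per channel, per signal variant (raw or minimum-phase)

def extract_features(signal):
    """Hypothetical 9-feature extractor for one channel; the reference's
    actual feature set is not reproduced here."""
    return np.array([
        signal.mean(), signal.std(), signal.min(), signal.max(),
        np.median(signal), signal.var(), np.abs(signal).mean(),
        np.abs(np.diff(signal)).mean(), (signal ** 2).sum(),
    ])

def build_input_vector(raw_channels, min_phase_channels):
    """Concatenate 9 raw-signal and 9 minimum-phase features for each of
    the 16 channels into one flat input vector."""
    parts = []
    for raw, mp in zip(raw_channels, min_phase_channels):
        parts.append(extract_features(raw))
        parts.append(extract_features(mp))
    return np.concatenate(parts)

rng = np.random.default_rng(1)
raw = rng.normal(size=(N_CHANNELS, 256))        # toy raw EEG, 256 samples/channel
mp = rng.normal(size=(N_CHANNELS, 256))         # toy minimum-phase counterpart
vec = build_input_vector(raw, mp)
# 16 channels × (9 + 9) features = 288 elements
```

The 288-dimensional vector is then what feeds the feed-forward network described in the training passage.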
Claim 18's ''the inference is the user’s selection of an item from a menu of options'' is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A, which show a visual menu of letters and an SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious to one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 18's ''(d) based on the inference, altering an operation of a computer program.'' is taught by Koctúrová, et al., Page 9, last full paragraph, where it recites:
In this part we selected data sets from a combination of 3 subjects which were divided into an 80% training set and a 20% validation set. The created model was tested (i.e., “altering an operation of a computer program”) on the data set from the 4th subject.
Claim 20
Claim 20's ''The device of claim 18, wherein the environmental data describes what the user is hearing concurrently with the measurement of the EEG data.'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 21
Claim 21's ''(e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data,'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together (i.e., “behavior data”… the user was listening and producing EEG responses). The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 21's ''wherein the inputting (c) comprises inputting the behavior data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional concurrently collected behavior data collected from the data collection participants.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 22
Claim 22's ''(f) receiving registration data describing information about the user that the user self-reported during registration of an account for the user,'' is taught by Koctúrová, et al., page 4, second full paragraph, where it recites:
The EEG signals were recorded on four healthy, right-handed native Slovak speaking subjects. They signed an informed consent form in which they were acquainted with the purpose of the experiment and with the use and management of the personal data. The research was carried out in accordance with the Code of Ethics for employees of the Technical University in Košice.
Claim 22's ''wherein the inputting (c) comprises inputting the registration data into the at least one machine learning model to determine the inference related to the neural activity and wherein the training data set used to train the at least one machine learning model comprises additional registration data concurrently collected from the data collection participants.'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio (i.e., Slovak speech) and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 23
Claim 23's ''a neural activity encoder configured to receive the EEG data and generate a first output data based on the EEG data;'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Further, the labels are taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column which contains the information about the Unix time of recording each sample. The main challenge was in the synchronisation of the recorded EEG and audio signals in the post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 23's ''a visual encoder configured to receive the visual data and generate a second output data based on the environmental data; and'' is taught by Koctúrová, et al., Page 3, first full paragraph, where it recites:
As mentioned, we expected increased activity of the two main areas of the brain during EEG signal acquisition, which are the speech area and the visual area. During EEG recording the subjects were presented with both images showing specific colours and the text forms of individual colours. By presenting the subjects with these images, their visual and imaginary parts of the brain became activated. The important fact to realize is that the brain areas which control what we see and what we can imagine are the same. When communicating, we often involve these visual areas in addition to speech areas. The relationship between imagination and vision is very close and hence these two actions elicit similar signals. The main difference between visual and mental images lies in the communication of the cerebral pathway, which, starting from the eye, leads to the primary visual cortex [11,12]. The study in Reference [13] assumes that image recognition and image naming involves the activity of occipitotemporal and prefrontal areas.
Claim 23's ''a multi-modal decoder configured to receive the first and second output data and generate the inference based on the first and second output data.'' is taught by Koctúrová, et al., Page 4, third full paragraph, where it recites:
During the experiment the subjects followed an experimental protocol. Their task was to sit motionlessly in a comfortable position while focusing on the screen in front of them and processing the word pronunciation. During the EEG signal recording the audio signal was also gathered, which was used to create the speech labels.
Further, the inference generation is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized which results in the erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 24
Claim 24's ''an audio encoder configured to receive the audio data and generate a third output data based on the environmental data, wherein the multi-modal decoder is further configured to receive the third output data and generate the inference based on the third output data.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as a feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized, which results in erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 25
Claim 25's ''(g) receiving registration data describing information about the user that the user self-reported during registration of an account for the user, and the at least one machine learning model further comprises:'' is taught by Koctúrová, et al., page 4, second full paragraph, where it recites:
The EEG signals were recorded from four healthy, right-handed, native Slovak-speaking subjects. They signed an informed consent form in which they were acquainted with the purpose of the experiment and with the use and management of the personal data. The research was carried out in accordance with the Code of Ethics for employees of the Technical University in Košice.
Claim 25's ''a subject metadata encoder configured to receive the registration data and to generate a fourth output data based on the registration data'' is taught by Koctúrová, et al., page 4, second full paragraph, where it recites:
The EEG signals were recorded from four healthy, right-handed, native Slovak-speaking subjects. They signed an informed consent form in which they were acquainted with the purpose of the experiment and with the use and management of the personal data. The research was carried out in accordance with the Code of Ethics for employees of the Technical University in Košice.
Further, the encoding is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio (i.e., Slovak speech) and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column with the Unix time of recording each sample. The main challenge was the synchronisation of the recorded EEG and audio signals in post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 25's ''the multi-modal decoder is further configured to receive the fourth output data and generate the inference based on the fourth output data.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as a feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized, which results in erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 26
Claim 26's ''(e) receiving behavior data describing what the user is doing concurrently with the measurement of the EEG data, and the at least one machine learning model further comprises:'' is taught by Koctúrová, et al., page 5, first full paragraph, where it recites:
4. Methods
The audio and EEG signals were recorded together (i.e., “behavior data”… the user was listening and producing EEG responses). The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column with the Unix time of recording each sample. The main challenge was the synchronisation of the recorded EEG and audio signals in post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
Claim 26's ''a behavior encoder configured to receive the behavior data and generate a fifth output data based on the behavior data, wherein the multi-modal decoder is further configured to receive the fifth output data and generate the inference based on the fifth output data.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as a feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized, which results in erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 27
Claim 27's ''The device of claim 25, wherein the neural activity encoder, visual encoder, audio encoder, subject metadata encoder, behavior encoder, and multi-modal decoder each comprise at least a portion of a deep learning network, the operations further comprising:'' is taught by Koctúrová, et al., page 11, fourth full paragraph, where it recites:
Our results for the cross-subject speech detection were better than initially thought. Based on our results, we plan future work focused on the creation of an optimal model for speech detection by involving a larger number of subjects and using deep machine learning algorithms. Such a speech detection model could then be used to create a speech recognition application suitable for mobile EEG devices.
Claim 27's ''(h) backpropagating the training data set through the neural activity encoder, visual encoder, audio encoder, subject metadata encoder, behavior encoder, and multi- modal decoder together to train the at least one machine learning model.'' is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as a feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized, which results in erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 29
Claim 29's ''a sensor in contact with a user’s head to collect electroencephalogram (EEG) data;'' is taught by Koctúrová, et al., page 4, Figure 1, where it shows sensor placement on the head.
Claim 29's ''a processor; and'' is taught by Koctúrová, et al., page 3, third full paragraph, where it recites:
3.1. Materials
Although considerable research has been done in modeling human speech detection and recognition, it is difficult to use this information to create a speech recognition model that works on a mobile EEG device. Working with mobile devices is often more difficult due to the inferior signal acquisition conditions. However, mobile EEG devices bring many benefits to end-users, such as lower price, easier connection, easier placement on the head, or the use of dry electrodes, to name a few. In our research the database of EEG signals was recorded using the OpenBCI Ultracortex Mark III EEG headset. A Bluetooth signal was used both to control the mobile EEG head cap via a computer and to transfer the recorded EEG signal. A sampling frequency of 125 Hz was used to capture the brain signal via the EEG headset.
Claim 29's ''a memory with instructions thereon to cause the processor to (i) receive environmental data describing stimulus the user is visually exposed to concurrently with the collection of the EEG data'' is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A, where it shows a visual menu of letters and a SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious to one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG-selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 29's ''(ii) input the EEG data and the environmental data into at least one machine learning model to determine an inference related to neural activity, the at least one machine learning model trained using a training data set of additional EEG data and additional concurrently collected environmental data collected from data collection participants'' is taught by Koctúrová, et al., page 5, first two full paragraphs, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column with the Unix time of recording each sample. The main challenge was the synchronisation of the recorded EEG and audio signals in post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
For the experiment, 9 basic features were assembled, based on which the models in the shallow Feed-Forward Artificial Neural Network were created. An important part of the signal processing was the conversion of the signal into a minimum-phase signal. The assigned features were calculated for the raw EEG signal as well as for the minimum-phase signal. In summary, for the total of 16 recorded EEG channels in the experiment, a calculation was completed for each channel, identifying 9 features from the raw signal and 9 features from the minimum-phase signal. All input features composed an input vector of size 288.
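The feature layout quoted above (16 channels, 9 features from each channel's raw signal plus 9 from its minimum-phase version, giving a 288-element input vector) can be sketched as follows. The channel count and vector size are from the reference; the specific 9 features and the minimum-phase conversion shown here are placeholders, since the reference does not enumerate them in the quoted passage.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS, N_SAMPLES = 16, 500     # 16 recorded EEG channels, as in the text
eeg = rng.standard_normal((N_CHANNELS, N_SAMPLES))

def nine_features(x):
    """Nine illustrative per-channel features; the reference does not list
    its 9 features here, so these are placeholders."""
    return np.array([
        x.mean(), x.std(), x.min(), x.max(), np.median(x),
        np.abs(x).mean(),                     # mean absolute amplitude
        np.sum(x ** 2),                       # signal energy
        np.sum(np.diff(x) ** 2),              # first-difference energy
        np.count_nonzero(np.diff(np.sign(x))) / len(x),  # zero-crossing rate
    ])

def to_minimum_phase(x):
    # Stand-in for the minimum-phase conversion described in the reference;
    # a real implementation would reshape the signal's phase spectrum.
    return x[::-1]

# 16 channels x 9 raw-signal features + 16 channels x 9 minimum-phase
# features = an input vector of size 288.
features = np.concatenate(
    [nine_features(ch) for ch in eeg]
    + [nine_features(to_minimum_phase(ch)) for ch in eeg]
)
```

The bookkeeping is the point of the sketch: 16 × 9 + 16 × 9 = 288, matching the input vector size fed to the shallow feed-forward network.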
Further, the training/inference is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as a feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized, which results in erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 29's ''the inference is the user’s selection of an item from a menu of options'' is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A, where it shows a visual menu of letters and a SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious to one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG-selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 29's ''(iii) control a computer program using the inference.'' is taught by Koctúrová, et al., Page 9, last full paragraph, where it recites:
In this part we selected data sets from a combination of 3 subjects, which were divided into an 80% training set and a 20% validation set. The created model was tested (i.e., “control a computer program using the inference”) on the data set from the 4th subject.
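The evaluation scheme quoted above (pool the data from 3 subjects, split it 80%/20% into training and validation sets, then test on the held-out 4th subject) can be sketched as follows. The per-subject sample counts are illustrative assumptions; only the subject counts, the 288-element feature size, and the 80/20 split come from the reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-subject data: 100 feature vectors of size 288 per subject.
subjects = {s: rng.standard_normal((100, 288)) for s in (1, 2, 3, 4)}

held_out = 4                       # the model is tested on the 4th subject
pooled = np.concatenate([subjects[s] for s in subjects if s != held_out])

# Shuffle the pooled 3-subject data, then split 80% train / 20% validation.
order = rng.permutation(len(pooled))
split = int(0.8 * len(pooled))
train_set = pooled[order[:split]]
val_set = pooled[order[split:]]
test_set = subjects[held_out]      # cross-subject (leave-one-subject-out) test
```

Holding out an entire subject, rather than a random slice of all subjects' data, is what makes this a cross-subject test of the trained model.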
Claim 30
Claim 30's ''(a) receiving neural data measured from sensors;'' is taught by Koctúrová, et al., page 4, Figure 1, where it shows sensor placement on the head.
Claim 30's ''(b) receiving environmental data describing stimulus the user is visually exposed to concurrently with the measurement of the neural data;'' is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A where it shows a visual menu of letters and a SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious to one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG-selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 30's ''(c) inputting the neural data and the environmental data into at least one machine learning model to determine an inference related to the neural activity, the at least one machine learning model trained using a training data set of additional neural data…and additional concurrently collected environmental data collected from data collection participants; and'' is taught by Koctúrová, et al., page 5, first two full paragraphs, where it recites:
4. Methods
The audio and EEG signals were recorded together. The audio recording was used to create speech and non-speech labels. EEG signals were recorded using the graphical user interface OpenBCI and stored in a text file format, where the individual channels are written in columns. The file format also contains an additional column with the Unix time of recording each sample. The main challenge was the synchronisation of the recorded EEG and audio signals in post-processing. To solve this, a simple script was created which, after starting the audio recording, automatically stored the Unix time of the record start. This was used to synchronize the audio with the EEG in post-processing.
For the experiment, 9 basic features were assembled, based on which the models in the shallow Feed-Forward Artificial Neural Network were created. An important part of the signal processing was the conversion of the signal into a minimum-phase signal. The assigned features were calculated for the raw EEG signal as well as for the minimum-phase signal. In summary, for the total of 16 recorded EEG channels in the experiment, a calculation was completed for each channel, identifying 9 features from the raw signal and 9 features from the minimum-phase signal. All input features composed an input vector of size 288.
Further, the training/inference is taught by Koctúrová, et al., page 8, first full paragraph, where it recites:
4.5. Training Algorithm
We assume that machine learning models such as a feed-forward neural network may identify patterns in the preprocessed EEG data to predict the desired speech and non-speech classes. The preprocessed data fed into the feed-forward neural network are transformed by a number of weights which form the output prediction for each class. At the beginning of the model training phase, all model weights are randomly initialized, which results in erroneous output. Later in this phase, the manually labeled classes, which are paired with the input data, are fed to the feed-forward neural network model. The error backpropagation algorithm is used to optimize the model weights in order to reduce the output error.
Claim 30's ''the inference is the user’s selection of an item from a menu of options'' is not expressly taught by Koctúrová, et al. It is, however, taught by Zhang, et al., Fig. 1 and Fig. 2A, where it shows a visual menu of letters and a SSVEP (steady-state visual evoked potential) EEG data path.
Rationale -- It would have been obvious to one of ordinary skill in the art, at the time of the effective filing date, to combine the visually/sEMG-selected menu system of Zhang, et al. with the EEG system of Koctúrová, et al. because it would enable the user to make selections using the eyes and would add flexibility to the system.
Claim 30's ''(d) based on the inference, altering an operation of a computer program.'' is taught by Koctúrová, et al., Page 9, last full paragraph, where it recites:
In this part we selected data sets from a combination of 3 subjects, which were divided into an 80% training set and a 20% validation set. The created model was tested (i.e., “altering an operation of a computer program”) on the data set from the 4th subject.
Response to Arguments
Applicant's arguments filed 18 FEB 2026 have been fully considered but they are not persuasive. Specifically, Applicant argues:
Argument 1
As discussed in the Reply dated January 14, 2026, any alleged abstract idea recited by the Office Action is integrated into the practical application. The specification recites a new technology that provides an improvement to conventional approaches to control a computer program (see Specification, paragraph [0113]), analogous to Core Wireless Licensing S.A.R.L. v. LG Elecs., Inc., 880 F.3d 1356. In particular, the new technology interprets EEG information to dissect the user's intention and consequently creates inferences that enable the control of computer functions, allowing a user device to navigate within the computer program based on a neural activity such as thinking of a word. Id. In this way, the new technology controls the computer program to activate different actions (e.g., to select an item from a menu of options) based on the EEG signals. See Specification, paragraphs [0134] and [0138].
In response to this argument in the most recent Office Action, the Examiner alleges that "Regarding any practical application, Applicant has not shown where in the claims they are taught." (Office Action, pp. 101-102.) However, the claims clearly show this practical application. They recite that "inputting the EEG data, the environmental data, and a label of the EEG data into at least one machine learning model to determine an inference related to the neural activity wherein the inference is the user's selection of an item from a menu of options; and based on the inference, altering an operation of a computer program to cause the computer program to select the item from the menu of options." Similar to how in Core Wireless Licensing the claims allowed for navigation through a menu on a small screen, the claims here allow for navigation through a menu using EEG data as input. The practical application here is very similar to what the Federal Circuit held to be patent eligible in Core Wireless. For that reason, the claims are patent eligible under Step 2A prong 2 because any alleged abstract idea recited by the Office Action is integrated into the practical application.
Thus, for at least the foregoing reasons, the instant claims are patent-eligible, and the rejection under 35 U.S.C. § 101 should be reconsidered and withdrawn.
In the USPTO Memorandum dated 02 APR 2018, the Office stated the following:
In Core Wireless Licensing S.A.R.L., v. LG Electronics, Inc., 880 F.3d 1356 (Fed. Cir. 2018), the claimed invention involves a graphical user interface (GUI) for mobile devices that displays an application summary of each application on the main menu while those applications are in an unlaunched state. The claims to computing devices were held patent eligible because the court concluded that they are directed to an improved user interface for electronic devices, not to the abstract idea of an index. In particular, the claims contain precise language delimiting the type of data to be displayed and how to display it, thus improving upon conventional user interfaces to increase the efficiency of using mobile devices. Finding the claims eligible, the court compared the improved user interface in the patent claims to the improved systems claimed in Enfish, Thales, Visual Memory, and Finjan.
Applicant's claims are not analogous to the claims in Core Wireless and do not contain the kinds of limitations that caused the claims in that case to be eligible. Unlike the claims in Core Wireless, which contained precise language delimiting the type of data to be displayed and how to display it, thereby improving upon conventional user interfaces, the instant claims recite no specific improvement to a user interface or to the functioning of a computer; they merely apply generic machine learning techniques to EEG data to arrive at a selection, which does not integrate the recited abstract idea into a practical application.
Applicant's argument is unpersuasive.
The rejections stand.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiries concerning this communication or earlier communications from the examiner should be directed to Wilbert L. Starks, Jr., who may be reached Monday through Friday, between 8:00 a.m. and 5:00 p.m. EST, via telephone at (571) 272-3691 or email: Wilbert.Starks@uspto.gov.
If you need to send an Official facsimile transmission, please send it to (571) 273-8300.
If attempts to reach the examiner are unsuccessful the Examiner’s Supervisor (SPE), Kakali Chaki, may be reached at (571) 272-3719.
Hand-delivered responses should be delivered to the Receptionist at the Customer Service Window, Randolph Building, 401 Dulany Street, Alexandria, VA 22313, located on the first floor of the south side of the Randolph Building.
Finally, information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Moreover, status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) toll-free at 1-866-217-9197.
/WILBERT L STARKS/
Primary Examiner, Art Unit 2122
WLS
31 MAR 2026