DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the claims filed on 13 November 2024. Claims 1-6 were preliminarily amended. Claims 1-6 are currently pending and have been examined.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitation "a target person" in lines 8 and 10. It is not clear if this is the same or a different target person already recited earlier in the claim. Appropriate correction is required. Claims 2-6 inherit this deficiency.
Claim 1 recites the limitation "a mood score" in line 14. It is not clear if this is the same or a different mood score already recited in the claim. Appropriate correction is required. Claims 2-6 inherit this deficiency.
Claim 1 recites the limitation "a voice" in line 21. It is not clear if this is the same or a different voice already recited in the claim. Appropriate correction is required. Claims 2-6 inherit this deficiency.
Claim 2 recites the limitation "an electroencephalogram of a person" in line 7. It is not clear if this is the same or a different electroencephalogram of a person already recited in the claim. Appropriate correction is required.
Claim 2 recites the limitation "a voice" in lines 7, 18, 22, 42, and 47-48. It is not clear if this is the same or a different voice already recited in the claim. Appropriate correction is required.
Claim 2 recites the limitation "a mood score" in line 14. It is not clear if this is the same or a different mood score already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 2 recites the limitation "a category" in lines 12, 43, and 48. It is not clear if this is the same or a different category already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 2 recites the limitation "an encephalogram feature" in lines 15-16 and 43. It is not clear if this is the same or a different encephalogram feature already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 2 recites the limitation "an average of a plurality of electroencephalograms" in lines 20-21 and 24-25. It is not clear if this is the same or a different average of a plurality of electroencephalograms already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 3 recites the limitation "a voice" in lines 7, 10, 15, 20, and 28. It is not clear if this is the same or a different voice already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 4 recites the limitation "a voice" in lines 5-6, 7, 14, and 17. It is not clear if this is the same or a different voice already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 5 recites the limitation "a person" in line 4. It is not clear if this is the same or a different person already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 5 recites the limitation "a sentence" in line 4. It is not clear if this is the same or a different sentence already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim 5 recites the limitation "a voice" in lines 4, 10, and 18-19. It is not clear if this is the same or a different voice already recited in the claim (including independent claim 1). Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Is the claim to a process, machine, manufacture, or composition of matter?
Claims 1-6 fall within one or more statutory categories; specifically, they fall within the category of a manufacture.
Step 2A Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Claims 1-6 recite an abstract idea. Representative claim 1 recites:
a target person electroencephalogram acquisition step of acquiring a target person electroencephalogram which is an electroencephalogram of a target person when listening to a voice uttering a sentence;
an electroencephalogram encoding step of generating an electroencephalogram feature from an electroencephalogram of a person when listening to a voice uttering a sentence, the electroencephalogram encoding step generating a target person electroencephalogram feature as the electroencephalogram feature from the target person electroencephalogram; and
an estimation step of estimating a target person mood score which is a mood score indicating a level of depressed mood of the target person by inputting the target person electroencephalogram feature to an estimation model,
the estimation model receiving at least the electroencephalogram feature as an input and estimating a mood score indicating a level of depressed mood of the person.
Therefore, the claim as a whole is directed to “estimating patient depression,” which is an abstract idea because it is a method of organizing human activity. “Estimating patient depression” is considered to be a method of organizing human activity because it is an example of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The broadest reasonable interpretation of the claims includes the interaction between a healthcare provider and a patient, in the form of a healthcare provider interpreting the results of an EEG.
Alternatively, the claims are considered to be directed to a mental process because they include concepts capable of being performed in the human mind (including an observation, evaluation, judgment, or opinion) with the aid of pen and paper.
Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application?
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the following additional element(s):
the estimation model being generated by performing machine learning using a plurality of training data sets, each of the plurality of training data sets being formed by associating a subject mood score which is the mood score indicating a level of depressed mood of a learning subject with at least a subject electroencephalogram feature which is the electroencephalogram feature generated from an electroencephalogram of the learning subject when listening to a voice uttering a sentence,
performing the machine learning including a training step of training the estimation model so that the mood score estimated by the estimation model when the subject electroencephalogram feature is received as an input matches the subject mood score for each of the plurality of training data sets.
The additional elements individually or in combination do not integrate the exception into a practical application. These additional elements, the training and use of a machine learning model, amount to merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claim 1 is directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Claim 1 does not include additional elements, considered individually or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element(s), individually and in combination, amount to merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, claim 1 is ineligible.
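By way of a non-limiting illustration of the generic character of the recited machine learning (a hypothetical sketch by the examiner; all names and the choice of least-squares regression are invented and are not the applicant's disclosed implementation), the training arrangement recited in claim 1, training an estimation model so that the mood score estimated from a subject electroencephalogram feature matches the labeled subject mood score, reduces to ordinary supervised regression:

    import numpy as np

    # Hypothetical sketch: a generic "estimation model" trained so that the
    # mood score it estimates from a subject electroencephalogram feature
    # matches the labeled subject mood score (least-squares regression).

    def train_estimation_model(subject_features, subject_mood_scores):
        # Fit weights w so that [features, 1] @ w approximates the mood scores.
        X = np.hstack([subject_features, np.ones((subject_features.shape[0], 1))])
        w, *_ = np.linalg.lstsq(X, subject_mood_scores, rcond=None)
        return w

    def estimate_mood_score(w, target_person_feature):
        # Estimate a target person mood score from a target person EEG feature.
        return float(np.append(target_person_feature, 1.0) @ w)

    rng = np.random.default_rng(0)
    training_features = rng.normal(size=(20, 8))       # 20 learning subjects
    training_mood_scores = rng.uniform(0.0, 10.0, 20)  # labeled mood levels
    model = train_estimation_model(training_features, training_mood_scores)
    print(estimate_mood_score(model, rng.normal(size=8)))

Any off-the-shelf regressor could be substituted for the sketch above; nothing in the recited training step requires more than this generic pattern.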
Dependent claim 2 recites the method of claim 1, wherein:
in the electroencephalogram encoding step, generating an electroencephalogram feature corresponding to a category into which the sentence is classified, as the electroencephalogram feature, based on an electroencephalogram of a person when listening to a voice uttering a sentence classified into any of at least three categories of negative, neutral, and positive, and information indicating into which of the at least three categories the sentence is classified,
the estimation model estimating the mood score when an electroencephalogram feature corresponding to a category into which the sentence is classified is input as the electroencephalogram feature,
the subject electroencephalogram feature including a subject first electroencephalogram feature which is an electroencephalogram feature corresponding to the category of negative generated from an average of a plurality of electroencephalograms each of which is an electroencephalogram of the learning subject when listening to a voice uttering a sentence classified into the category of negative,
a subject second electroencephalogram feature which is an electroencephalogram feature corresponding to the category of neutral generated from an average of a plurality of electroencephalograms each of which is an electroencephalogram of the learning subject when listening to a voice uttering a sentence classified into the category of neutral, and
a subject third electroencephalogram feature which is an electroencephalogram feature corresponding to the category of positive generated from an average of a plurality of electroencephalograms each of which is an electroencephalogram of the learning subject when listening to a voice uttering a sentence classified into the category of positive,
performing the machine learning including, for each of the plurality of training data sets,
a first training step of training the estimation model so that the mood score estimated by the estimation model when the subject first electroencephalogram feature is input as the electroencephalogram feature corresponding to the category of negative matches the subject mood score,
a second training step of training the estimation model so that the mood score estimated by the estimation model when the subject second electroencephalogram feature is input as the electroencephalogram feature corresponding to the category of neutral matches the subject mood score, and
a third training step of training the estimation model so that the mood score estimated by the estimation model when the subject third electroencephalogram feature is input as the electroencephalogram feature corresponding to the category of positive matches the subject mood score,
the program causing the computer to further perform a classification information acquisition step of acquiring classification information indicating into which of the at least three categories a sentence the target person is listening to as a voice is classified, and
in the estimation step, estimating the target person mood score by inputting to the estimation model, as an electroencephalogram feature corresponding to a category indicated by the classification information,
the target person electroencephalogram feature corresponding to a category indicated by the classification information generated, in the electroencephalogram encoding step,
based on the target person electroencephalogram of the target person when listening to a voice uttering a sentence classified into a category indicated by the classification information and the classification information.
The additional elements present in this claim merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). These types of additional elements are not enough to integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. Accordingly, claim 2 is ineligible.
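Again for illustration only (a hypothetical sketch with invented names and an invented summary feature; not the applicant's implementation), the category-wise feature generation recited in claim 2, averaging a plurality of electroencephalograms per category of negative, neutral, and positive and generating a feature from each average, reduces to routine per-group averaging:

    import numpy as np

    CATEGORIES = ("negative", "neutral", "positive")

    def category_electroencephalogram_features(epochs, labels):
        # epochs: array of shape (n_sentences, n_samples), one EEG epoch per
        # sentence heard; labels: the category each sentence is classified into.
        features = {}
        for category in CATEGORIES:
            rows = [i for i, lab in enumerate(labels) if lab == category]
            average = epochs[rows].mean(axis=0)  # average of a plurality of EEGs
            # Invented summary feature generated from the category average.
            features[category] = np.array(
                [average.mean(), average.std(), np.abs(average).max()])
        return features

    rng = np.random.default_rng(1)
    epochs = rng.normal(size=(9, 256))
    labels = ["negative", "neutral", "positive"] * 3
    print(category_electroencephalogram_features(epochs, labels))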
Dependent claim 3 recites the method of claim 1, wherein:
in the electroencephalogram encoding step, generating, as the electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of a predetermined component in an electroencephalogram response to a word of a person based on an electroencephalogram of the person when listening to a voice uttering a sentence and a start point of each word included in the sentence that the person is listening to as a voice,
the subject electroencephalogram feature being at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response to the word of the learning subject generated based on an electroencephalogram of the learning subject when listening to a voice uttering a sentence and a start point of each word included in the sentence that the learning subject is listening to as a voice,
the program causing the computer to further perform an onset information acquisition step of acquiring onset information indicating a start point of each word included in a sentence that the target person is listening to as a voice, and
in the estimation step, estimating the target person mood score by inputting to the estimation model, as the target person electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response to the word of the target person generated, in the electroencephalogram encoding step, based on the target person electroencephalogram and a start point of each word included in a sentence that the target person is listening to as a voice indicated by the onset information.
The additional elements present in this claim merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). These types of additional elements are not enough to integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. Accordingly, claim 3 is ineligible.
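For illustration only (hypothetical function names and window choices; not the applicant's implementation), the feature recited in claim 3, a peak latency and an average amplitude before and after a peak of a predetermined component in the word-onset-locked electroencephalogram response, can be sketched as conventional evoked-response averaging:

    import numpy as np

    def word_response_feature(eeg, fs, word_onsets_s, window_s=(0.2, 0.6)):
        # Epoch the EEG at each word start point, average the epochs, locate
        # the peak of the (hypothetical) predetermined component inside
        # window_s, and return its latency plus the mean amplitude in an
        # invented 50 ms span before and after the peak.
        lo, hi = int(window_s[0] * fs), int(window_s[1] * fs)
        epochs = [eeg[int(t * fs):int(t * fs) + hi]
                  for t in word_onsets_s if int(t * fs) + hi <= len(eeg)]
        average = np.mean(epochs, axis=0)
        peak = lo + int(np.argmax(np.abs(average[lo:hi])))
        half = int(0.05 * fs)
        amplitude = float(average[max(0, peak - half):peak + half].mean())
        return peak / fs, amplitude

    rng = np.random.default_rng(2)
    fs = 250.0
    eeg = rng.normal(size=int(30 * fs))   # 30 s of single-channel EEG
    onsets = [0.5, 2.0, 4.1, 7.3, 9.8]    # start point of each word (s)
    print(word_response_feature(eeg, fs, onsets))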
Dependent claim 4 recites the method of claim 1, wherein:
in the electroencephalogram encoding step, generating, as the electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of a predetermined component in an electroencephalogram response following a voice envelope of a person based on an electroencephalogram of the person when listening to a voice uttering a sentence and the voice envelope of the voice that the person is listening to,
the subject electroencephalogram feature being at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response following the voice envelope of the learning subject generated based on an electroencephalogram of the learning subject when listening to a voice uttering a sentence and a voice envelope of the voice that the learning subject was listening to,
the program causing the computer to further perform an envelope information acquisition step of acquiring envelope information indicating a voice envelope of a voice the target person is listening to, and
in the estimation step, estimating the target person mood score by inputting to the estimation model, as the target person electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response following the voice envelope of the target person generated, in the electroencephalogram encoding step, based on the target person electroencephalogram and a voice envelope of a voice the target person is listening to indicated by the envelope information.
The additional elements present in this claim merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). These types of additional elements are not enough to integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. Accordingly, claim 4 is ineligible.
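For illustration only (a hypothetical sketch; the lag range and averaging span are invented), an envelope-following electroencephalogram response of the kind recited in claim 4 is conventionally obtained by cross-correlating the voice envelope with the EEG (cf. Schiff [0042], cited below) and reading off a peak latency and an average amplitude around the peak:

    import numpy as np

    def envelope_response_feature(eeg, envelope, fs, max_lag_s=0.5):
        # Cross-correlate the voice envelope with the EEG over positive lags
        # and return the peak latency and the mean amplitude 50 ms around it.
        sig = (eeg - eeg.mean()) / (eeg.std() + 1e-12)
        env = (envelope - envelope.mean()) / (envelope.std() + 1e-12)
        max_lag = int(max_lag_s * fs)
        xcorr = np.array([np.dot(sig[lag:], env[:len(env) - lag]) / len(env)
                          for lag in range(max_lag)])
        peak = int(np.argmax(np.abs(xcorr)))
        half = int(0.05 * fs)
        amplitude = float(xcorr[max(0, peak - half):peak + half].mean())
        return peak / fs, amplitude

    rng = np.random.default_rng(3)
    fs = 250.0
    envelope = np.abs(rng.normal(size=int(10 * fs)))  # toy voice envelope, 10 s
    eeg = np.roll(envelope, 25) + rng.normal(size=envelope.size)  # lagged response
    print(envelope_response_feature(eeg, envelope, fs))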
Dependent claim 5 recites the method of claim 1, wherein:
the estimation model further receives, as an input, a subjective score indicating subjective evaluation felt by a person for a sentence after listening to a voice uttering the sentence, in addition to the electroencephalogram feature, and
estimates the mood score based on the electroencephalogram feature and the subjective score that are input,
each of the plurality of training data sets is formed by associating the subject mood score with the subject electroencephalogram feature and a subject subjective score which is the subjective score indicating subjective evaluation felt by the learning subject for the sentence after listening to a voice uttering the sentence,
performing the machine learning includes a training step of training the estimation model so that the mood score estimated by the estimation model when the subject electroencephalogram feature and the subject subjective score are input matches the subject mood score for each of the plurality of training data sets,
the program causing a computer to further perform a target person subjective score acquisition step of acquiring a target person subjective score which is the subjective score indicating subjective evaluation that the target person felt for the sentence after listening to a voice uttering the sentence, and
in the estimation step, estimating the target person mood score by inputting to the estimation model, the target person electroencephalogram feature and the target person subjective score.
The additional elements present in this claim merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). These types of additional elements are not enough to integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. Accordingly, claim 5 is ineligible.
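For illustration only (hypothetical; reusing the regression sketch from the claim 1 discussion above), the additional input recited in claim 5 amounts to appending the subjective score to the electroencephalogram feature before training and estimation:

    import numpy as np

    def with_subjective_score(eeg_feature, subjective_score):
        # Form the model input from the EEG feature plus the subjective score
        # the person reported for the sentence after listening to it.
        return np.append(eeg_feature, subjective_score)

    # e.g. train_estimation_model(
    #          np.stack([with_subjective_score(f, s)
    #                    for f, s in zip(subject_features, subject_scores)]),
    #          subject_mood_scores)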
Dependent claim 6 recites the method of claim 1, comprising:
an output step of outputting, to the target person, information corresponding to the target person mood score estimated in the estimation step.
This merely further limits the abstract idea of claim 1 discussed above and does not provide further additional elements. Therefore, claim 6 is considered to be ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Garten et al. (U.S. 2019/0387998), hereinafter “Garten,” in view of Yang et al. (U.S. 2017/0238858), hereinafter “Yang.”
Regarding Claim 1, Garten discloses a non-transitory computer-readable medium storing a mood estimation program, the program configured to cause a computer to perform:
a target person electroencephalogram acquisition step of acquiring a target person electroencephalogram (See Garten [0086] the one or more sensors include bio-signal sensors, such as electroencephalogram (EEG) sensors. [1007] EEG features include the amplitude of the peak power and frequency emitted by the user in the alpha range.) which is an electroencephalogram of a target person when listening to a voice uttering a sentence (See Garten [0084] By associating bio-signal data, or emotions determined therefrom, with music, the system may establish a database of music associated with emotions. See also Fig. 5. [0267] Some music databases may not use EEG or other bio-signal data but nevertheless have associated a mood or feeling with a particular music item, such as a song.);
an electroencephalogram encoding step of generating an electroencephalogram feature from an electroencephalogram of a person when listening to a voice uttering a sentence, the electroencephalogram encoding step generating a target person electroencephalogram feature as the electroencephalogram feature from the target person electroencephalogram (See Garten [0339] collect EEG features of a person while listening to a sound, such as music. An algorithm pipeline ID is chosen to pre-process the EEG and extract features.); and
an estimation step of estimating a target person mood score … (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions).) by inputting the target person electroencephalogram feature to an estimation model (See Garten [0084] By associating bio-signal data, or emotions determined therefrom, with music, the system may establish a database of music associated with emotions. See also Fig. 5.),
the estimation model receiving at least the electroencephalogram feature as an input (See Garten [0084] By associating bio-signal data, or emotions determined therefrom, with music, the system may establish a database of music associated with emotions. See also Fig. 5.) and estimating a mood score… (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions). Fig. 1 and [0279] the system uses an emotion scale that includes depressed. [0891] when a person's brainwaves are classified into a specific type of emotion and level of emotion.),
the estimation model being generated by performing machine learning using a plurality of training data sets (See Garten [1072] training the combined features of both biological and sonic parameters simultaneously. [1119] Training examples can be obtained across hundreds or thousands of users. The model can be general to a population, to a sub-group (i.e. genre) or to an individual.),
each of the plurality of training data sets being formed by associating a subject mood score which is the mood score indicating a level of depressed mood of a learning subject (See Garten [1072] training the combined features of both biological and sonic parameters simultaneously. [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions). Fig. 1 and [0279] the system uses an emotion scale that includes depressed.) with at least a subject electroencephalogram feature which is the electroencephalogram feature generated from an electroencephalogram of the learning subject when listening to a voice uttering a sentence (See Garten [1072] training the combined features of both biological and sonic parameters simultaneously. [0084] By associating bio-signal data, or emotions determined therefrom, with music, the system may establish a database of music associated with emotions. See also Fig. 5. [0267] Some music databases may not use EEG or other bio-signal data but nevertheless have associated a mood or feeling with a particular music item, such as a song.),
performing the machine learning including a training step of training the estimation model (See Garten [1072] training the combined features of both biological and sonic parameters simultaneously. [1119] Training examples can be obtained across hundreds or thousands of users. The model can be general to a population, to a sub-group (i.e. genre) or to an individual.) so that the mood score estimated by the estimation model when the subject electroencephalogram feature is received as an input matches the subject mood score for each of the plurality of training data sets (See Garten [0084] By associating bio-signal data, or emotions determined therefrom, with music, the system may establish a database of music associated with emotions. See also Fig. 5. [0267] Some music databases may not use EEG or other bio-signal data but nevertheless have associated a mood or feeling with a particular music item, such as a song.).
Garten does not disclose:
the mood score indicating a level of depressed mood of the person.
Yang teaches:
the mood score indicating a level of depressed mood of the person (See Yang [0006] the system uses collected EEG to further assess whether the subject suffers from depression and to assess the depression level.).
The system of Yang is applicable to the disclosure of Garten as they both share characteristics and capabilities, namely, they are directed to using EEG to measure emotional state. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Garten to include depression levels as taught by Yang. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Garten in order to overcome the weaknesses and defects of the existing depression assessment technology (see Yang [0006]).
Regarding claim 2, Garten in view of Yang discloses the medium of claim 1 as discussed above. Garten further discloses a medium, wherein:
in the electroencephalogram encoding step, generating an electroencephalogram feature corresponding to a category into which the sentence is classified, as the electroencephalogram feature, based on an electroencephalogram of a person when listening to a voice uttering a sentence classified into any of at least three categories of negative, neutral, and positive, and information indicating into which of the at least three categories the sentence is classified (See Garten [0279] emotion categorizing including negative and positive. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
the estimation model estimating the mood score when an electroencephalogram feature corresponding to a category into which the sentence is classified is input as the electroencephalogram feature (See Garten [0279] emotion categorizing including negative and positive. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
the subject electroencephalogram feature including a subject first electroencephalogram feature which is an electroencephalogram feature corresponding to the category of negative generated from an average of a plurality of electroencephalograms each of which is an electroencephalogram of the learning subject when listening to a voice uttering a sentence classified into the category of negative (See Garten [0279] emotion categorizing including negative and positive. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
a subject second electroencephalogram feature which is an electroencephalogram feature corresponding to the category of neutral generated from an average of a plurality of electroencephalograms each of which is an electroencephalogram of the learning subject when listening to a voice uttering a sentence classified into the category of neutral (See Garten [0279] emotion categorizing including negative and positive. This is understood to also include neutral. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.), and
a subject third electroencephalogram feature which is an electroencephalogram feature corresponding to the category of positive generated from an average of a plurality of electroencephalograms each of which is an electroencephalogram of the learning subject when listening to a voice uttering a sentence classified into the category of positive (See Garten [0279] emotion categorizing including negative and positive. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
performing the machine learning including, for each of the plurality of training data sets (See Garten [1072] training the combined features of both biological and sonic parameters simultaneously. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
a first training step of training the estimation model so that the mood score estimated by the estimation model when the subject first electroencephalogram feature is input as the electroencephalogram feature corresponding to the category of negative matches the subject mood score (See Garten [0279] emotion categorizing including negative and positive. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
a second training step of training the estimation model so that the mood score estimated by the estimation model when the subject second electroencephalogram feature is input as the electroencephalogram feature corresponding to the category of neutral matches the subject mood score (See Garten [0279] emotion categorizing including negative and positive. This is understood to also include neutral. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.), and
a third training step of training the estimation model so that the mood score estimated by the estimation model when the subject third electroencephalogram feature is input as the electroencephalogram feature corresponding to the category of positive matches the subject mood score (See Garten [0279] emotion categorizing including negative and positive. [0339] incoming audio that the user hears is classified per unit time as well using an audio analyzer that extracts features of the sound. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
the program causing the computer to further perform a classification information acquisition step of acquiring classification information indicating into which of the at least three categories a sentence the target person is listening to as a voice is classified (See Garten [1072] training the combined features of both biological and sonic parameters simultaneously. [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.), and
in the estimation step, estimating the target person mood score by inputting to the estimation model, as an electroencephalogram feature corresponding to a category indicated by the classification information (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions). [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.),
the target person electroencephalogram feature corresponding to a category indicated by the classification information generated, in the electroencephalogram encoding step (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions). [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.), based on the target person electroencephalogram of the target person when listening to a voice uttering a sentence classified into a category indicated by the classification information and the classification information (See Garten [1069] Passages of music with known emotional tone can be used as labelled training data for supervised machine learning.).
Regarding claim 5, Garten in view of Yang discloses the medium of claim 1 as discussed above. Garten further discloses a medium, wherein:
the estimation model further receives, as an input, a subjective score indicating subjective evaluation felt by a person for a sentence after listening to a voice uttering the sentence, in addition to the electroencephalogram feature (See Garten [0268] This disclosure also may add EEG data of the user as additional training data to songs that have been labelled by the user as evoking a particular emotion, through the user self-reporting the emotion either through the above questions, or by tagging a song manually. See also [0285].), and
estimates the mood score based on the electroencephalogram feature (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions). Fig. 1 and [0279] the system uses an emotion scale that includes depressed.) and the subjective score that are input (See Garten [0268] This disclosure also may add EEG data of the user as additional training data to songs that have been labelled by the user as evoking a particular emotion, through the user self-reporting the emotion either through the above questions, or by tagging a song manually. See also [0285].),
each of the plurality of training data sets is formed by associating the subject mood score with the subject electroencephalogram feature and a subject subjective score which is the subjective score indicating subjective evaluation felt by the learning subject for the sentence after listening to a voice uttering the sentence (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0268] add EEG data of the user as additional training data to songs that have been labelled by the user as evoking a particular emotion, through the user self-reporting the emotion.),
performing the machine learning includes a training step of training the estimation model so that the mood score estimated by the estimation model when the subject electroencephalogram feature and the subject subjective score are input matches the subject mood score for each of the plurality of training data sets (See Garten [0268] add EEG data of the user as additional training data to songs that have been labelled by the user as evoking a particular emotion, through the user self-reporting the emotion.),
the program causing a computer to further perform a target person subjective score acquisition step of acquiring a target person subjective score which is the subjective score indicating subjective evaluation that the target person felt for the sentence after listening to a voice uttering the sentence (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song.), and
in the estimation step, estimating the target person mood score by inputting to the estimation model, the target person electroencephalogram feature (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0268] add EEG data of the user as additional training data to songs that have been labelled by the user as evoking a particular emotion, through the user self-reporting the emotion.) and the target person subjective score (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song.).
Regarding claim 6, Garten in view of Yang discloses the medium of claim 1 as discussed above. Garten further discloses a medium, comprising:
an output step of outputting, to the target person, information corresponding to the target person mood score estimated in the estimation step (See Garten [0211] A rules engine output, for example, a music recommendation, may be made on the basis of a user's emotional response, determined as described herein.).
Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Garten et al. (U.S. 2019/0387998), hereinafter “Garten,” in view of Yang et al. (U.S. 2017/0238858), hereinafter “Yang,” and further in view of Schiff et al. (U.S. 2020/0012346), hereinafter “Schiff.”
Regarding claim 3, Garten in view of Yang discloses the medium of claim 1 as discussed above. Garten further discloses a medium, wherein:
in the electroencephalogram encoding step, generating, as the electroencephalogram feature … based on an electroencephalogram of the person when listening to a voice uttering a sentence and a start point of each word included in the sentence that the person is listening to as a voice (See Garten [0276] measure EEG at predetermined times and time stamp from song emotional responses. [1007] EEG features include the amplitude of the peak power and frequency emitted by the user in the alpha range.),
the subject electroencephalogram feature … based on an electroencephalogram of the learning subject when listening to a voice uttering a sentence and a start point of each word included in the sentence that the learning subject is listening to as a voice (See Garten [0276] measure EEG at predetermined times and time stamp from song emotional responses. [1007] EEG features include the amplitude of the peak power and frequency emitted by the user in the alpha range.),
the program causing the computer to further perform an onset information acquisition step of acquiring onset information indicating a start point of each word included in a sentence that the target person is listening to as a voice (See Garten [0276] measure EEG at predetermined times and time stamp from song emotional responses.), and
in the estimation step, estimating the target person mood score by inputting to the estimation model, as the target person electroencephalogram feature (See Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions).), … based on the target person electroencephalogram and a start point of each word included in a sentence that the target person is listening to as a voice indicated by the onset information (See Garten [0276] measure EEG at predetermined times and time stamp from song emotional responses. [1007] EEG features include the amplitude of the peak power and frequency emitted by the user in the alpha range.).
Garten does not disclose:
in the electroencephalogram encoding step, generating, as the electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of a predetermined component in an electroencephalogram response to a word of a person,
the subject electroencephalogram feature being at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response to the word of the learning subject generated,
at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response to the word of the target person generated, in the electroencephalogram encoding step.
Schiff teaches:
in the electroencephalogram encoding step, generating, as the electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of a predetermined component in an electroencephalogram response to a word of a person (See Schiff [0073] system uses EEG to measure emotional response to stimuli. [0032] EEG signal features of the resulting sensory evoked response such as latencies of peaks, peak amplitudes, polarities, and spatial distribution may be measured. Schiff [0042] cross-correlation segment between the natural speech envelope and the EEG neural response across the time points and averaging across the segments.),
the subject electroencephalogram feature being at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response to the word of the learning subject generated (See Schiff [0073] system uses EEG to measure emotional response to stimuli. [0032] EEG signal features of the resulting sensory evoked response such as latencies of peaks, peak amplitudes, polarities, and spatial distribution may be measured. Schiff [0042] cross-correlation segment between the natural speech envelope and the EEG neural response across the time points and averaging across the segments.),
at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response to the word of the target person generated, in the electroencephalogram encoding step (See Schiff [0073] system uses EEG to measure emotional response to stimuli. [0032] EEG signal features of the resulting sensory evoked response such as latencies of peaks, peak amplitudes, polarities, and spatial distribution may be measured. Schiff [0042] cross-correlation segment between the natural speech envelope and the EEG neural response across the time points and averaging across the segments.).
The system of Schiff is applicable to the disclosure of Garten in view of Yang as they both share characteristics and capabilities, namely, they are directed to using electroencephalogram data to estimate emotion. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Garten to include latency, amplitude, and sound envelope data as taught by Schiff. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Garten in order to directly measure the neural response of subjects to specific attributes in audio-visual stimuli (see Schiff [0002]).
Regarding claim 4, Garten in view of Yang discloses the medium of claim 1 as discussed above. Garten further discloses a medium, wherein:
in the estimation step, estimating the target person mood score by inputting to the estimation model, as the target person electroencephalogram feature (See Garten [0084] By associating bio-signal data, or emotions determined therefrom, with music, the system may establish a database of music associated with emotions. See also Fig. 5. Garten [0276] system may determine the user's emotional response once, after a predetermined time has passed while playing a song. One or more of the detected emotional responses of the user may then be associated with the song. [0100] The system may provide for: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions). Fig. 1 and [0279] the system uses an emotion scale that includes depressed.).
Garten does not disclose:
in the electroencephalogram encoding step, generating, as the electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of a predetermined component in an electroencephalogram response following a voice envelope of a person based on an electroencephalogram of the person when listening to a voice uttering a sentence and the voice envelope of the voice that the person is listening to,
the subject electroencephalogram feature being at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response following the voice envelope of the learning subject generated based on an electroencephalogram of the learning subject when listening to a voice uttering a sentence and a voice envelope of the voice that the learning subject was listening to,
the program causing the computer to further perform an envelope information acquisition step of acquiring envelope information indicating a voice envelope of a voice the target person is listening to, and
at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response following the voice envelope of the target person generated, in the electroencephalogram encoding step, based on the target person electroencephalogram and a voice envelope of a voice the target person is listening to indicated by the envelope information.
Schiff teaches:
in the electroencephalogram encoding step, generating, as the electroencephalogram feature, at least one of a peak latency and an average amplitude before and after a peak of a predetermined component in an electroencephalogram response following a voice envelope of a person based on an electroencephalogram of the person when listening to a voice uttering a sentence and the voice envelope of the voice that the person is listening to (See Schiff [0073] system uses EEG to measure emotional response to stimuli. [0032] EEG signal features of the resulting sensory evoked response such as latencies of peaks, peak amplitudes, polarities, and spatial distribution may be measured. Schiff [0042] cross-correlation segment between the natural speech envelope and the EEG neural response across the time points and averaging across the segments.),
the subject electroencephalogram feature being at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response following the voice envelope of the learning subject generated based on an electroencephalogram of the learning subject when listening to a voice uttering a sentence and a voice envelope of the voice that the learning subject was listening to (See Schiff [0073] system uses EEG to measure emotional response to stimuli. [0032] EEG signal features of the resulting sensory evoked response such as latencies of peaks, peak amplitudes, polarities, and spatial distribution may be measured. Schiff [0042] cross-correlation segment between the natural speech envelope and the EEG neural response across the time points and averaging across the segments.),
the program causing the computer to further perform an envelope information acquisition step of acquiring envelope information indicating a voice envelope of a voice the target person is listening to (See Schiff [0073] system uses EEG to measure emotional response to stimuli. [0032] EEG signal features of the resulting sensory evoked response such as latencies of peaks, peak amplitudes, polarities, and spatial distribution may be measured. Schiff [0042] cross-correlation segment between the natural speech envelope and the EEG neural response across the time points and averaging across the segments.), and
at least one of a peak latency and an average amplitude before and after a peak of the predetermined component in an electroencephalogram response following the voice envelope of the target person generated, in the electroencephalogram encoding step, based on the target person electroencephalogram and a voice envelope of a voice the target person is listening to indicated by the envelope information (See Schiff [0073] system uses EEG to measure emotional response to stimuli. [0032] EEG signal features of the resulting sensory evoked response such as latencies of peaks, peak amplitudes, polarities, and spatial distribution may be measured. Schiff [0042] cross-correlation segment between the natural speech envelope and the EEG neural response across the time points and averaging across the segments.).
The system of Schiff is applicable to the disclosure of Garten in view of Yang as they both share characteristics and capabilities, namely, they are directed to using electroencephalogram data to estimate emotion. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Garten to include latency, amplitude, and sound envelope data as taught by Schiff. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Garten in order to directly measure the neural response of subjects to specific attributes in audio-visual stimuli (see Schiff [0002]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Parra et al. (U.S. 2021/0022637) teaches a system and method for predicting efficacy of stimulus by measuring physiological response to stimuli.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN L HANKS whose telephone number is (571)270-5080. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.L.H./Examiner, Art Unit 3684
/Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684