Prosecution Insights
Last updated: April 19, 2026
Application No. 18/744,677

COMMUNICATION DEVICE, COMMUNICATION METHOD, AND COMPUTER-READABLE MEDIUM

Non-Final OA (§101, §103)
Filed: Jun 16, 2024
Examiner: PHAM, QUANG
Art Unit: 2685
Tech Center: 2600 — Communications
Assignee: Yokogawa Electric Corporation
OA Round: 3 (Non-Final)

Grant Probability: 54% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (380 granted / 699 resolved; -7.6% vs TC avg)
Interview Lift: +57.3% (strong; allow rate on resolved cases with vs without an interview)
Avg Prosecution: 3y 0m (typical timeline)
Currently Pending: 46
Total Applications: 745 (across all art units)
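
The panel figures above are simple ratios. Below is a minimal sketch of the arithmetic, assuming the dashboard's counts; the with/without-interview rates are hypothetical values chosen only to be consistent with the reported +57.3-point lift, since the panel does not publish the underlying split.

```python
# Sketch of the examiner-panel arithmetic. The 380/699 counts come from the
# panel above; the with/without-interview rates are hypothetical values
# chosen only to be consistent with the reported +57.3-point lift.
granted, resolved = 380, 699

allow_rate = granted / resolved                  # 0.5436... -> reported as 54%
tc_average = allow_rate + 0.076                  # implied Tech Center average (~62%)
print(f"Career allow rate: {allow_rate:.1%}")    # 54.4%
print(f"vs TC average:     {allow_rate - tc_average:+.1%}")  # -7.6%

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Allow-rate lift from conducting an examiner interview, in points."""
    return (rate_with - rate_without) * 100

print(f"Interview lift: {interview_lift(0.99, 0.417):+.1f} points")  # +57.3
```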

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 75.5% (+35.5% vs TC avg)
§102: 7.1% (-32.9% vs TC avg)
§112: 9.9% (-30.1% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 699 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

In the present application, filed on or after March 16, 2013, claims 1-20 have been considered and examined under the first-inventor-to-file provisions of the AIA.

Response to Applicant's Arguments/Remarks

Applicant's arguments, see Remarks, filed 03/27/2026, with respect to the rejection(s) of claims 1-6 and 12-20 have been fully considered, with the following results. On pages 10-19 of Applicant's remarks, Applicant argues that the combination of Read and Aimone does not teach or suggest the limitations of "a first reaction of a communication device based on brainwave information [that] is a simulated expression of a living person [that is] an utterance, an action, and/or a facial expression" because Aimone does not disclose, teach, or suggest the required claim recitation of "a first reaction of a communication device based on brainwave information [that] is a simulated expression of a living person [that is] an utterance, an action, and/or a facial expression."

In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In the instant case, the Examiner respectfully disagrees with Applicant because, as discussed in the Final rejection mailed on 12/29/2025, the rejection relied upon Read as already disclosing a control unit (Read: FIG. 1, the headband dual-core processor) which controls the communication device according to a first reaction based on brainwave information (Read: [0117], [0129]-[0130], [0133]-[0136], [0150]-[0152], [0155], [0164], [0167]-[0168], and FIG. 1: The one or more biological signals can correspond to a particular mental state of the subject. For example, in a first mental state, the subject may exhibit a first set of biological signals with a first set of characteristics, whereas in a second mental state, the subject may exhibit a second set of biological signals with a second set of characteristics. The characteristics associated with the biological signals may comprise, for example, a wavelength, a frequency, an amplitude, a phase, a center frequency, a phase difference, a variance, a co-variance, or any other physical property associated with the one or more biological signals), except for the limitations wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression.

However, Aimone discloses a wearable device associated with a user comprising a bio-signal sensor including a brainwave sensor (Aimone: [0024], [0028], [0041]-[0044], [0053]-[0066], and FIG. 1, the bio-signal sensors: The bio-signal sensor receives bio-signal data from the user, the bio-signal sensor comprising a brainwave sensor. The computing device having or in communication with a processor configured to, as part of the interactive VR environment, present content on the display where the content has a VR event, desired user states, and desired effects; receive user manual inputs from the input device which have effects in the interactive VR environment including during the VR event) to determine a first reaction (Aimone: [0113], [0169], and FIG. 10-11: The device measures her brainwaves and changes the characteristics of the virtual pet accordingly to provide visual feedback. For example, the pet changes colour from green to red when Danielle is upset; it changes back from red to green when she enters a relaxed state. Alternatively, the pet can change its own behaviour: irritated, relaxed, angry, etc. The pet can be used as a mindfulness/meditation aide—Danielle tries to get her pet to change colour to a certain state to match whatever mindfulness goals she is aiming towards), wherein the first reaction is a simulated expression of a living person as a virtual person (Aimone: [0167], [0170], and FIG. 10-11: The EEG readings from other members of the group affects the appearance of their avatars, e.g. if the person is relaxed, their avatars can glow blue; if they are tense, their avatars would glow red. The individual participants can look at their own avatars to determine how they are proceeding with the meditation session; their own avatars will change colour like those in the rest of the group. Emotions can also be displayed on the avatars; i.e. the words “ANGRY” or “ANXIOUS” can appear on their faces), and wherein the simulated expression is at least one of an utterance, an action (Aimone: [0113], [0169], and FIG. 10-11: The device measures her brainwaves and changes the characteristics of the virtual pet accordingly to provide visual feedback. For example, the pet changes colour from green to red when Danielle is upset; it changes back from red to green when she enters a relaxed state. Alternatively, the pet can change its own behaviour: irritated, relaxed, angry, etc. The pet can be used as a mindfulness/meditation aide—Danielle tries to get her pet to change colour to a certain state to match whatever mindfulness goals she is aiming towards), and/or a facial expression (Aimone: [0112], [0167], [0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a character's appearance in the VR environment as an example of feedback.).

Therefore, in view of the teachings of Read and Aimone, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the biological-signal processing system of Read to include wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression, as suggested by Aimone. The motivation for this is to provide feedback to a user based on the user's condition. As a result, Applicant's arguments are not deemed persuasive, and the previous rejections pertaining to the previous set of claims are sustained. Therefore, in view of the claim amendments and upon further consideration, a new ground of rejection necessitated by the amendments is made in view of the following reference combinations.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 03/26/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the Examiner.
Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. For example, claim 19 recites the method steps of "a communication method in which a reaction changes according to a state of an evaluation target person, the communication method comprising: acquiring, by an information acquisition unit, brainwave information of the evaluation target person; determining, by a reaction determination unit, a first reaction of a communication device, based on the brainwave information; and controlling, by a control unit, the communication device according to the first reaction determined in the determining the reaction, wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression."

The limitations of acquiring, by an information acquisition unit, brainwave information of the evaluation target person; determining, by a reaction determination unit, a first reaction of a communication device, based on the brainwave information; and controlling, by a control unit, the communication device according to the first reaction determined in the determining the reaction, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting "by an information acquisition unit," "by a reaction determination unit," and "by a control unit," nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the "by an information acquisition unit," "by a reaction determination unit," and "by a control unit" language, "acquiring, by an information acquisition unit, brainwave information of the evaluation target person; determining, by a reaction determination unit, a first reaction of a communication device, based on the brainwave information; and controlling, by a control unit, the communication device" in the context of this claim encompasses a user manually receiving brainwave information of a target person and determining whether to control a communication device. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites only one additional element: using an information acquisition unit, a reaction determination unit, and a control unit as a processor to perform the acquiring, the determining, and the controlling steps. The processor in these steps is recited at a high level of generality (i.e., as a generic processor performing a generic computer function of the acquiring, the determining, and the controlling steps) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform all of the acquiring, the determining, and the controlling steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claims 1 and 20 are rejected under 35 U.S.C. 101 based on the same analysis as claim 19 because the claimed invention is directed to an abstract idea without significantly more. For example, claim 1 recites the communication device, and claim 20 recites the computer, for performing the above method steps.

Claim Rejections – 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Read et al. (Read – US 2022/0225920 A1) in view of Aimone et al. (Aimone – US 2016/0077547 A1).
As to claim 1, Read discloses a communication device in which a reaction changes according to a state of an evaluation target person, the communication device comprising: an information acquisition unit (Read: [0115]-[0118] and FIG. 1, the inputs) which acquires brainwave information of the evaluation target person (Read: Abstract, [0114]-[0115], [0117], [0127], [0150]-[0152], [0155], and FIG. 1, the inputs: The sensing module can be configured to detect, measure, record, quantify, and/or read one or more biological signals of a subject. The one or more biological signals can comprise, for example, brain waves or brain signals. The one or more biological signals can comprise an electrical signal and/or an oscillatory signal. The one or more biological signals can be represented as one or more EEG waves or waveforms (also referred to herein as brain waves or brain signals). The one or more biological signals can include an electroencephalogram (EEG) signal, an electromyogram (EMG) signal, an electrocorticogram (ECoG) signal, field potentials within a motor cortex or other regions of the brain, or combinations thereof); a reaction determination unit (Read: FIG. 1, the headband dual-core processor) which determines a first reaction of the communication device, based on the brainwave information (Read: [0117], [0129]-[0130], [0133]-[0136], [0150]-[0152], [0155], [0164], [0167]-[0168], and FIG. 1: The one or more biological signals can correspond to a particular mental state of the subject. For example, in a first mental state, the subject may exhibit a first set of biological signals with a first set of characteristics, whereas in a second mental state, the subject may exhibit a second set of biological signals with a second set of characteristics. The characteristics associated with the biological signals may comprise, for example, a wavelength, a frequency, an amplitude, a phase, a center frequency, a phase difference, a variance, a co-variance, or any other physical property associated with the one or more biological signals); and a control unit (Read: FIG. 1, the headband dual-core processor) which controls the communication device according to the first reaction determined by the reaction determination unit (Read: [0141]-[0142], [0151]-[0152], [0156], [0161]-[0162], and FIG. 1, the audio speaker output: The audio speaker volume output can be varied in proportion to a user's individualized maximum and minimum alpha signal levels. The user's maximum and minimum mean alpha biomarker levels can be set to values of, for example, 8 dB and 3 dB, respectively, as determined in a prior data calibration session. The maximum and minimum alpha biomarker levels can indicate maximal and minimal alertness levels, respectively. The user can practice alternately increasing and decreasing the audio speaker volume on 4 separate occasions (a, b, c and d) within a 1.5 hour session. To do so, the user simply focuses their attention on their forehead to increase alpha and as their attention relaxes audible feedback indicates relaxation). Read does not explicitly disclose wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression. However, it has been known in the art of monitoring the condition of a user to implement wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression, as suggested by Aimone, which discloses wherein the first reaction is a simulated expression of a living person as a virtual person (Aimone: [0167], [0170], and FIG. 10-11: The EEG readings from other members of the group affects the appearance of their avatars, e.g. if the person is relaxed, their avatars can glow blue; if they are tense, their avatars would glow red. The individual participants can look at their own avatars to determine how they are proceeding with the meditation session; their own avatars will change colour like those in the rest of the group. Emotions can also be displayed on the avatars; i.e. the words “ANGRY” or “ANXIOUS” can appear on their faces), and wherein the simulated expression is at least one of an utterance, an action (Aimone: [0113], [0169], and FIG. 10-11: The device measures her brainwaves and changes the characteristics of the virtual pet accordingly to provide visual feedback. For example, the pet changes colour from green to red when Danielle is upset; it changes back from red to green when she enters a relaxed state. Alternatively, the pet can change its own behaviour: irritated, relaxed, angry, etc. The pet can be used as a mindfulness/meditation aide—Danielle tries to get her pet to change colour to a certain state to match whatever mindfulness goals she is aiming towards), and/or a facial expression (Aimone: [0112], [0167], [0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a character's appearance in the VR environment as an example of feedback.). Therefore, in view of the teachings of Read and Aimone, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the biological-signal processing system of Read to include wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression, as suggested by Aimone. The motivation for this is to provide feedback to a user based on the user's condition.
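
For reference, the combination described in this rejection reduces to two small mappings, sketched below. This is illustrative only, not code from either reference, and every name is invented: an alpha biomarker scaled between per-user calibration bounds drives feedback intensity (Read's speaker-volume example), and an inferred mental state selects a simulated expression (Aimone's avatar and virtual-pet examples).

```python
# Illustrative sketch only -- not code from Read or Aimone. It mirrors the
# two mechanisms the rejection combines: a biomarker proportionally drives
# device feedback, and an inferred state drives a simulated expression.
ALPHA_MIN_DB, ALPHA_MAX_DB = 3.0, 8.0  # per-user calibration bounds (Read [0141]-[0142])

def volume_from_alpha(alpha_db: float) -> float:
    """Scale speaker volume 0..1 in proportion to the user's alpha level."""
    frac = (alpha_db - ALPHA_MIN_DB) / (ALPHA_MAX_DB - ALPHA_MIN_DB)
    return min(max(frac, 0.0), 1.0)

def avatar_reaction(state: str) -> dict:
    """Map an inferred mental state to an Aimone-style simulated expression."""
    table = {
        "relaxed": {"colour": "green", "behaviour": "calm", "label": None},
        "tense": {"colour": "red", "behaviour": "irritated", "label": "ANXIOUS"},
    }
    return table.get(state, {"colour": "grey", "behaviour": "idle", "label": None})

print(volume_from_alpha(5.5))    # 0.5 -- midway between the calibrated bounds
print(avatar_reaction("tense"))  # {'colour': 'red', 'behaviour': 'irritated', 'label': 'ANXIOUS'}
```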
As to claim 2, Read and Aimone disclose the limitations of claim 1, and further disclose the communication device according to claim 1, wherein the reaction determination unit determines a content of the simulated expression of the communication device (Aimone: [0112]-[0113], [0167], [0169]-[0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a character's appearance in the VR environment as an example of feedback.), based on the brainwave information (Aimone: [0024], [0028], [0057]-[0058], [0065]-[0066], [0169], and FIG. 10-11: The computing device 150 of the wearable device 105 is configured to create a VR environment on the stereoscopic display 110 and sound generator 140 for presentation to a user; receive bio-signal data of the user from the bio-signal sensors 120, at least one of the bio-signal sensors 120 comprising a brainwave sensor, and the received bio-signal data comprising at least brainwave data of the user; and determine brain state response elicited by the VR environment at least partly by determining a correspondence between the brainwave data and a predefined bio-signal measurement stored in a user profile, the predefined bio-signal measurement associated with predefined brain state response type), and the control unit controls the communication device according to the content of the simulated expression determined by the reaction determination unit (Aimone: [0112]-[0113], [0167], [0169]-[0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a character's appearance in the VR environment as an example of feedback.).

As to claim 16, Read and Aimone disclose the limitations of claim 1, and further disclose the communication device according to claim 1, wherein the reaction determination unit determines the first reaction (Read: [0141]-[0142], [0151]-[0152], [0156], [0161]-[0162], and FIG. 1, the audio speaker output: The audio speaker volume output can be varied in proportion to a user's individualized maximum and minimum alpha signal levels. The user's maximum and minimum mean alpha biomarker levels can be set to values of, for example, 8 dB and 3 dB, respectively, as determined in a prior data calibration session. The maximum and minimum alpha biomarker levels can indicate maximal and minimal alertness levels, respectively. The user can practice alternately increasing and decreasing the audio speaker volume on 4 separate occasions (a, b, c and d) within a 1.5 hour session. To do so, the user simply focuses their attention on their forehead to increase alpha and as their attention relaxes audible feedback indicates relaxation), based on a time at which the first reaction is determined (Aimone: [0024], [0028], [0057]-[0058], [0065]-[0066], [0169], and FIG. 10-11: The computing device 150 of the wearable device 105 is configured to create a VR environment on the stereoscopic display 110 and sound generator 140 for presentation to a user; receive bio-signal data of the user from the bio-signal sensors 120, at least one of the bio-signal sensors 120 comprising a brainwave sensor, and the received bio-signal data comprising at least brainwave data of the user; and determine brain state response elicited by the VR environment at least partly by determining a correspondence between the brainwave data and a predefined bio-signal measurement stored in a user profile, the predefined bio-signal measurement associated with predefined brain state response type), or an environment around the evaluation target person (Read: [0043], [0048], [0065], [0125]-[0126], [0140]-[0142], [0164]-[0167], and FIG. 1: the present disclosure provides a method for modulating brain states, comprising: (a) using (i) one or more sensors to detect at least one of a biological parameter of a subject and one or more biological signals of the subject and (ii) an additional sensor to detect one or more ambient conditions associated with a surrounding environment of the subject, wherein at least one of the one or more sensors is placed in contact with a portion of the subject's body; (b) processing the data obtained using the one or more sensors to compute one or more biomarkers for the subject; and (c) controlling an operation of one or more output devices, based on the one or more computed biomarkers and the data obtained using the additional sensor, to provide a stimulation to the subject to change a current state of the subject or to induce a desired state in the subject).

As to claim 17, Read and Aimone disclose the limitations of claim 2, and further disclose the communication device according to claim 2, wherein the reaction determination unit determines the first reaction (Read: [0141]-[0142], [0151]-[0152], [0156], [0161]-[0162], and FIG. 1, the audio speaker output: The audio speaker volume output can be varied in proportion to a user's individualized maximum and minimum alpha signal levels. The user's maximum and minimum mean alpha biomarker levels can be set to values of, for example, 8 dB and 3 dB, respectively, as determined in a prior data calibration session. The maximum and minimum alpha biomarker levels can indicate maximal and minimal alertness levels, respectively. The user can practice alternately increasing and decreasing the audio speaker volume on 4 separate occasions (a, b, c and d) within a 1.5 hour session. To do so, the user simply focuses their attention on their forehead to increase alpha and as their attention relaxes audible feedback indicates relaxation), based on a time at which the first reaction is determined (Aimone: [0024], [0028], [0057]-[0058], [0065]-[0066], [0169], and FIG. 10-11: The computing device 150 of the wearable device 105 is configured to create a VR environment on the stereoscopic display 110 and sound generator 140 for presentation to a user; receive bio-signal data of the user from the bio-signal sensors 120, at least one of the bio-signal sensors 120 comprising a brainwave sensor, and the received bio-signal data comprising at least brainwave data of the user; and determine brain state response elicited by the VR environment at least partly by determining a correspondence between the brainwave data and a predefined bio-signal measurement stored in a user profile, the predefined bio-signal measurement associated with predefined brain state response type), or an environment around the evaluation target person (Read: [0043], [0048], [0065], [0125]-[0126], [0140]-[0142], [0164]-[0167], and FIG. 1: the present disclosure provides a method for modulating brain states, comprising: (a) using (i) one or more sensors to detect at least one of a biological parameter of a subject and one or more biological signals of the subject and (ii) an additional sensor to detect one or more ambient conditions associated with a surrounding environment of the subject, wherein at least one of the one or more sensors is placed in contact with a portion of the subject's body; (b) processing the data obtained using the one or more sensors to compute one or more biomarkers for the subject; and (c) controlling an operation of one or more output devices, based on the one or more computed biomarkers and the data obtained using the additional sensor, to provide a stimulation to the subject to change a current state of the subject or to induce a desired state in the subject).

As to claim 18, Read and Aimone disclose the limitations of claim 1, and further disclose the communication device according to claim 1, wherein the control unit changes a living body which is presented to a reaction presentation unit according to the evaluation target person (Aimone: [0112]-[0113], [0167], [0169]-[0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a character's appearance in the VR environment as an example of feedback.).
As to claim 19, Read discloses a communication method in which a reaction changes according to a state of an evaluation target person, the communication method comprising: acquiring, by an information acquisition unit (Read: [0115]-[0118] and FIG. 1, the inputs), brainwave information of the evaluation target person (Read: Abstract, [0114]-[0115], [0117], [0127], [0150]-[0152], [0155], and FIG. 1, the inputs: The sensing module can be configured to detect, measure, record, quantify, and/or read one or more biological signals of a subject. The one or more biological signals can comprise, for example, brain waves or brain signals. The one or more biological signals can comprise an electrical signal and/or an oscillatory signal. The one or more biological signals can be represented as one or more EEG waves or waveforms (also referred to herein as brain waves or brain signals). The one or more biological signals can include an electroencephalogram (EEG) signal, an electromyogram (EMG) signal, an electrocorticogram (ECoG) signal, field potentials within a motor cortex or other regions of the brain, or combinations thereof); determining, by a reaction determination unit (Read: FIG. 1, the headband dual-core processor), a first reaction of a communication device, based on the brainwave information (Read: [0117], [0129]-[0130], [0133]-[0136], [0150]-[0152], [0155], [0164], [0167]-[0168], and FIG. 1: The one or more biological signals can correspond to a particular mental state of the subject. For example, in a first mental state, the subject may exhibit a first set of biological signals with a first set of characteristics, whereas in a second mental state, the subject may exhibit a second set of biological signals with a second set of characteristics. The characteristics associated with the biological signals may comprise, for example, a wavelength, a frequency, an amplitude, a phase, a center frequency, a phase difference, a variance, a co-variance, or any other physical property associated with the one or more biological signals); and controlling, by a control unit (Read: FIG. 1, the headband dual-core processor), the communication device according to the first reaction determined in the determining the reaction (Read: [0141]-[0142], [0151]-[0152], [0156], [0161]-[0162], and FIG. 1, the audio speaker output: The audio speaker volume output can be varied in proportion to a user's individualized maximum and minimum alpha signal levels. The user's maximum and minimum mean alpha biomarker levels can be set to values of, for example, 8 dB and 3 dB, respectively, as determined in a prior data calibration session. The maximum and minimum alpha biomarker levels can indicate maximal and minimal alertness levels, respectively. The user can practice alternately increasing and decreasing the audio speaker volume on 4 separate occasions (a, b, c and d) within a 1.5 hour session. To do so, the user simply focuses their attention on their forehead to increase alpha and as their attention relaxes audible feedback indicates relaxation). Read does not explicitly disclose wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression. However, it has been known in the art of monitoring the condition of a user to implement wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression, as suggested by Aimone, which discloses wherein the first reaction is a simulated expression of a living person as a virtual person (Aimone: [0167], [0170], and FIG. 10-11: The EEG readings from other members of the group affects the appearance of their avatars, e.g. if the person is relaxed, their avatars can glow blue; if they are tense, their avatars would glow red. The individual participants can look at their own avatars to determine how they are proceeding with the meditation session; their own avatars will change colour like those in the rest of the group. Emotions can also be displayed on the avatars; i.e. the words “ANGRY” or “ANXIOUS” can appear on their faces), and wherein the simulated expression is at least one of an utterance, an action (Aimone: [0113], [0169], and FIG. 10-11: The device measures her brainwaves and changes the characteristics of the virtual pet accordingly to provide visual feedback. For example, the pet changes colour from green to red when Danielle is upset; it changes back from red to green when she enters a relaxed state. Alternatively, the pet can change its own behaviour: irritated, relaxed, angry, etc. The pet can be used as a mindfulness/meditation aide—Danielle tries to get her pet to change colour to a certain state to match whatever mindfulness goals she is aiming towards), and/or a facial expression (Aimone: [0112], [0167], [0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a character's appearance in the VR environment as an example of feedback.). Therefore, in view of the teachings of Read and Aimone, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the biological-signal processing system of Read to include wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression, as suggested by Aimone. The motivation for this is to provide feedback to a user based on the user's condition.

As to claim 20, Read discloses a non-transitory computer-readable medium having recorded thereon a communication program that, when executed by a computer, causes the computer to perform operations comprising: acquiring brainwave information of an evaluation target person (Read: Abstract, [0114]-[0115], [0117], [0127], [0150]-[0152], [0155], and FIG. 1, the inputs: The sensing module can be configured to detect, measure, record, quantify, and/or read one or more biological signals of a subject. The one or more biological signals can comprise, for example, brain waves or brain signals. The one or more biological signals can comprise an electrical signal and/or an oscillatory signal. The one or more biological signals can be represented as one or more EEG waves or waveforms (also referred to herein as brain waves or brain signals). The one or more biological signals can include an electroencephalogram (EEG) signal, an electromyogram (EMG) signal, an electrocorticogram (ECoG) signal, field potentials within a motor cortex or other regions of the brain, or combinations thereof); determining a first reaction of a communication device, based on the brainwave information (Read: [0117], [0129]-[0130], [0133]-[0136], [0150]-[0152], [0155], [0164], [0167]-[0168], and FIG. 1: The one or more biological signals can correspond to a particular mental state of the subject. For example, in a first mental state, the subject may exhibit a first set of biological signals with a first set of characteristics, whereas in a second mental state, the subject may exhibit a second set of biological signals with a second set of characteristics. The characteristics associated with the biological signals may comprise, for example, a wavelength, a frequency, an amplitude, a phase, a center frequency, a phase difference, a variance, a co-variance, or any other physical property associated with the one or more biological signals); and controlling the communication device according to the first reaction determined in the determining the reaction (Read: [0141]-[0142], [0151]-[0152], [0156], [0161]-[0162], and FIG. 1, the audio speaker output: The audio speaker volume output can be varied in proportion to a user's individualized maximum and minimum alpha signal levels. The user's maximum and minimum mean alpha biomarker levels can be set to values of, for example, 8 dB and 3 dB, respectively, as determined in a prior data calibration session. The maximum and minimum alpha biomarker levels can indicate maximal and minimal alertness levels, respectively. The user can practice alternately increasing and decreasing the audio speaker volume on 4 separate occasions (a, b, c and d) within a 1.5 hour session. To do so, the user simply focuses their attention on their forehead to increase alpha and as their attention relaxes audible feedback indicates relaxation). Read does not explicitly disclose wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression. However, it has been known in the art of monitoring the condition of a user to implement wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression, as suggested by Aimone, which discloses wherein the first reaction is a simulated expression of a living person as a virtual person (Aimone: [0167], [0170], and FIG. 10-11: The EEG readings from other members of the group affects the appearance of their avatars, e.g. if the person is relaxed, their avatars can glow blue; if they are tense, their avatars would glow red. The individual participants can look at their own avatars to determine how they are proceeding with the meditation session; their own avatars will change colour like those in the rest of the group. Emotions can also be displayed on the avatars; i.e. the words “ANGRY” or “ANXIOUS” can appear on their faces), and wherein the simulated expression is at least one of an utterance, an action (Aimone: [0113], [0169], and FIG. 10-11: The device measures her brainwaves and changes the characteristics of the virtual pet accordingly to provide visual feedback. For example, the pet changes colour from green to red when Danielle is upset; it changes back from red to green when she enters a relaxed state. Alternatively, the pet can change its own behaviour: irritated, relaxed, angry, etc. The pet can be used as a mindfulness/meditation aide—Danielle tries to get her pet to change colour to a certain state to match whatever mindfulness goals she is aiming towards), and/or a facial expression (Aimone: [0112], [0167], [0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a character's appearance in the VR environment as an example of feedback.). Therefore, in view of the teachings of Read and Aimone, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the biological-signal processing system of Read to include wherein the first reaction is a simulated expression of a living person as a virtual person, and wherein the simulated expression is at least one of an utterance, an action, and/or a facial expression, as suggested by Aimone. The motivation for this is to provide feedback to a user based on the user's condition.

Claims 3-6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Read et al. (Read – US 2022/0225920 A1) in view of Aimone et al. (Aimone – US 2016/0077547 A1), and further in view of Hwang et al. (Hwang – US 2016/0210407 A1).
As to claim 3, Read and Aimone disclose the limitations of claim 1 except for the claimed limitations of the communication device according to claim 1, wherein the information acquisition unit further acquires biological information of the evaluation target person, and the reaction determination unit determines the first reaction, based on the brainwave information and the biological information. However, it has been known in the art of monitoring the condition of a user to implement the information acquisition unit further acquiring biological information of the evaluation target person, and the reaction determination unit determining the first reaction based on the brainwave information and the biological information, as suggested by Hwang, which discloses that the information acquisition unit further acquires biological information of the evaluation target person (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses), and the reaction determination unit determines the first reaction (Hwang: [0049]-[0050], [0055]-[0057], [0067]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 1-3: the device 100 may output at least one object as an audio and speech signal or vibration signal in order to apply an auditory or tactile stimulus to the user. Each object may be output as an audio and speech signal or vibration signal that can be recognized by the user. When an object corresponding to a user's desired task is output, a concentration level or excitation level derived from a brainwave signal may be increased. Thus, an ERP signal having a larger magnitude may be detected when an object corresponding to a user's desired task is output than when another object is output. The device 100 may select a user's desired task by selecting an object corresponding to a time point when the ERP signal has a relatively large magnitude compared to another magnitude), based on the brainwave information and the biological information (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: The sensor 110 may acquire bio-signals from a user. The bio-signals may include brainwaves, pulses, an electrocardiogram, etc. If the bio-signals are brainwaves, the sensor 110 may acquire at least one selected from electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), and electrokardiogramm (EKG) signals. The sensor 110 may obtain the bio-signals by contacting the user's body and may come in different forms such as a headset, earphones, and a bracelet…the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses). Therefore, in view of the teachings of Read, Aimone, and Hwang, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the biological-signal processing system of Read and Aimone to include the information acquisition unit further acquiring biological information of the evaluation target person, and the reaction determination unit determining the first reaction based on the brainwave information and the biological information, as suggested by Hwang. The motivation for this is to provide audio stimulation to a user based on the user's condition.

As to claim 4, Read, Aimone, and Hwang disclose the limitations of claim 2, and further disclose the communication device according to claim 2, wherein the information acquisition unit further acquires biological information of the evaluation target person (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses), and the reaction determination unit determines the first reaction (Hwang: [0049]-[0050], [0055]-[0057], [0067]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 1-3: the device 100 may output at least one object as an audio and speech signal or vibration signal in order to apply an auditory or tactile stimulus to the user. Each object may be output as an audio and speech signal or vibration signal that can be recognized by the user. When an object corresponding to a user's desired task is output, a concentration level or excitation level derived from a brainwave signal may be increased. Thus, an ERP signal having a larger magnitude may be detected when an object corresponding to a user's desired task is output than when another object is output. The device 100 may select a user's desired task by selecting an object corresponding to a time point when the ERP signal has a relatively large magnitude compared to another magnitude), based on the brainwave information and the biological information (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: The sensor 110 may acquire bio-signals from a user. The bio-signals may include brainwaves, pulses, an electrocardiogram, etc. If the bio-signals are brainwaves, the sensor 110 may acquire at least one selected from electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), and electrokardiogramm (EKG) signals. The sensor 110 may obtain the bio-signals by contacting the user's body and may come in different forms such as a headset, earphones, and a bracelet…the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses).

As to claim 5, Read, Aimone, and Hwang disclose the limitations of claim 3, and further disclose the communication device according to claim 3, wherein the information acquisition unit further acquires the biological information after the first reaction (Hwang: [0049]-[0050], [0055]-[0057], [0067]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 1-3: the device 100 may output at least one object as an audio and speech signal or vibration signal in order to apply an auditory or tactile stimulus to the user. Each object may be output as an audio and speech signal or vibration signal that can be recognized by the user. When an object corresponding to a user's desired task is output, a concentration level or excitation level derived from a brainwave signal may be increased. Thus, an ERP signal having a larger magnitude may be detected when an object corresponding to a user's desired task is output than when another object is output. The device 100 may select a user's desired task by selecting an object corresponding to a time point when the ERP signal has a relatively large magnitude compared to another magnitude), the reaction determination unit determines a second reaction of the communication device, based on the brainwave information, and the biological information after the first reaction (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: The sensor 110 may acquire bio-signals from a user. The bio-signals may include brainwaves, pulses, an electrocardiogram, etc. If the bio-signals are brainwaves, the sensor 110 may acquire at least one selected from electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), and electrokardiogramm (EKG) signals. The sensor 110 may obtain the bio-signals by contacting the user's body and may come in different forms such as a headset, earphones, and a bracelet…the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses), and the control unit controls the communication device according to the second reaction (Hwang: [0049]-[0050], [0057], [0067], [0075]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 2-3: the device 100 may output at least one object as an audio and speech signal or vibration signal in order to apply an auditory or tactile stimulus to the user. Each object may be output as an audio and speech signal or vibration signal that can be recognized by the user. When an object corresponding to a user's desired task is output, a concentration level or excitation level derived from a brainwave signal may be increased. Thus, an ERP signal having a larger magnitude may be detected when an object corresponding to a user's desired task is output than when another object is output. The device 100 may select a user's desired task by selecting an object corresponding to a time point when the ERP signal has a relatively large magnitude compared to another magnitude).

As to claim 6, Read, Aimone, and Hwang disclose the limitations of claim 5, and further disclose the communication device according to claim 5, wherein the information acquisition unit acquires a change in the brainwave information from before to after the first reaction of the communication device (Hwang: [0081]-[0084], and FIG. 2-4: The device 100 may measure brainwave signals at short time intervals of 1 to 5 seconds, and process content according to the brainwave signals, based on adjustment sensitivity set by the user or determined according to a predetermined algorithm. That is, like in 320, the device 100 may determine a user's status via measurement of brainwaves and process the music currently being reproduced to output sounds 380 and 390 (330, 340, and 350)), and the reaction determination unit generates state information indicating a state of the evaluation target person (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses), based on the change in the brainwave information, and the biological information (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: The sensor 110 may acquire bio-signals from a user. The bio-signals may include brainwaves, pulses, an electrocardiogram, etc. If the bio-signals are brainwaves, the sensor 110 may acquire at least one selected from electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), and electrokardiogramm (EKG) signals. The sensor 110 may obtain the bio-signals by contacting the user's body and may come in different forms such as a headset, earphones, and a bracelet…the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses), and determines the second reaction based on the generated state information (Hwang: [0049]-[0050], [0057], [0067], [0075]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 2-3: the device 100 may output at least one object as an audio and speech signal or vibration signal in order to apply an auditory or tactile stimulus to the user. Each object may be output as an audio and speech signal or vibration signal that can be recognized by the user. When an object corresponding to a user's desired task is output, a concentration level or excitation level derived from a brainwave signal may be increased. Thus, an ERP signal having a larger magnitude may be detected when an object corresponding to a user's desired task is output than when another object is output. The device 100 may select a user's desired task by selecting an object corresponding to a time point when the ERP signal has a relatively large magnitude compared to another magnitude).

As to claim 12, Read, Aimone, and Hwang disclose the limitations of claim 5, and further disclose the communication device according to claim 5, wherein when a state that is based on the brainwave information of the evaluation target person and that is a state after the first reaction is a predetermined state (Hwang: [0081]-[0084], and FIG. 2-4: The device 100 may measure brainwave signals at short time intervals of 1 to 5 seconds, and process content according to the brainwave signals, based on adjustment sensitivity set by the user or determined according to a predetermined algorithm. That is, like in 320, the device 100 may determine a user's status via measurement of brainwaves and process the music currently being reproduced to output sounds 380 and 390 (330, 340, and 350)), the reaction determination unit determines a predetermined reaction as the second reaction (Hwang: [0049]-[0050], [0057], [0067], [0075]-[0079], [0081]-[0084], [0133]-[0136], and FIG. 2-3: When the user learns to recognize the content processed by the device 100, he or she may unconsciously change his or her state gradually to a relaxed state 360. As the user's status changes to the relaxed state 360, the device 100 may process the music currently being reproduced to output the rich sounds 390).
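
The Hwang passages cited throughout claims 3-6 and 12 describe one concrete selection mechanism: stimuli are presented one at a time, and the device picks the stimulus whose post-stimulus window contains the largest event-related potential (ERP). A minimal sketch of that selection step, using synthetic data and invented names:

```python
import numpy as np

# Sketch of the ERP-based selection Hwang describes (synthetic data,
# invented names): pick the stimulus whose post-stimulus EEG window
# shows the largest event-related potential magnitude.
def select_desired_task(erp_windows: np.ndarray) -> int:
    """erp_windows: (n_stimuli, n_samples) EEG segments, one per stimulus."""
    magnitudes = np.abs(erp_windows).max(axis=1)  # peak |ERP| per stimulus
    return int(np.argmax(magnitudes))             # index of the inferred choice

rng = np.random.default_rng(0)
windows = rng.normal(0.0, 1.0, size=(4, 256))  # four candidate stimuli
windows[2] += 4.0                              # stimulus 2 evokes the largest response
print(select_desired_task(windows))            # -> 2
```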
As to claim 13, Read, Aimone, and Hwang disclose the limitations of claim 6 further comprising the communication device according to claim 6, further comprising: a state that infers the state of the evaluation target person after the first reaction, by performing machine learning on a relationship between the first reaction and the change in the brainwave information (Read: [0048], [0063], [0065]-[0066], [0081]-[0083], [0091], [0113], [0124]-[0126], [0161]-[0164], and FIG. 1: if a subject has difficulty falling asleep or experienced restless sleep, and the ambient sensors detect that there was ambient noise and ambient light above a certain threshold, or that the room temperature was too hot or cold, the ambient sensor data obtained using the ambient sensors can provide feedback to the subject or the processing module (described in greater detail below). In some cases, the feedback may comprise a notification to the subject to let the subject know that he or she was restless last night, and that such restlessness may be due to too much ambient light or too much noise in the room at a certain time. In some cases, the feedback may further comprise one or more suggestions to the subject (e.g., a suggestion for the subject to try using an eye mask), Aimone: [0112]-[0113], [0167], [0169]-[0170], and FIG. 10-11: The usage of the facial sensors (an example of bio-signal sensors of wearable device 1002, 1004) may allow for the mapping of a user's expression to the face of their avatar 1010, 1012 in a VR environment (e.g. to represent detected facial states with associated smiles, squints, winks, furrows, frowns, etc.). This can be augmented with brain signals from wearable device 1002, 1004 to do emotion estimation by device 1008, 1006. This estimate can further augment a characters appearance in the VR environment as an example of feedback, and Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: The sensor 110 may acquire bio-signals from a user. The bio-signals may include brainwaves, pulses, an electrocardiogram, etc. If the bio-signals are brainwaves, the sensor 110 may acquire at least one selected from electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), and electrokardiogramm (EKG) signals. The sensor 110 may obtain the bio-signals by contacting the user' body and may come in different forms such as a headset, earphones, and a bracelet…the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses), based on the brainwave information before the first reaction and the first reaction (Hwang: [0049]-[0050], [0057], [0067], [0075]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 2-3: the device 100 may output at least one object as an audio and speech signal or vibration signal in order to apply an auditory or tactile stimulus to the user. Each object may be output as an audio and speech signal or vibration signal that can be recognized by the user. When an object corresponding to a user's desired task is output, a concentration level or excitation level derived from a brainwave signal may be increased. Thus, an ERP signal having a larger magnitude may be detected when an object corresponding to a user's desired task is output than when another object is output. 
The combination of Read, Aimone, and Hwang does not explicitly disclose a state learning unit which generates a state inference model that infers the state of the evaluation target person after the first reaction, by performing machine learning on a relationship between the first reaction and the change in the brainwave information. However, it has been known in the art of monitoring the condition of a user to implement such a state learning unit, as suggested by Kwalwasser (Kwalwasser: Abstract, [0021]-[0022], [0025], [0028], [0030]-[0033], [0046]-[0049], and FIG. 1-3: The brain state model generator 160 trains one or more brain state models based on the extracted features from the brain activity signal and the user survey responses. In one embodiment, the brain state models are random forest regression models. In other embodiments, the brain state models utilize different machine learning techniques, e.g., neural networks, multinomial regressors, other decision trees, etc. The trained brain state models are configured to predict a value for the brain state (or the brain state value over time) based on an input brain activity signal. The brain state models may be stored in the data store 180. For a given user, the server 150 may select a brain state model that best fits the user's survey responses and brain activity data. The best-fit model provides the closest prediction of the user's brain state value based on the brain activity data. The selected brain state model may be stored in a user profile for that user, such that the server 150 may provide tailored content to each user). Therefore, in view of the teachings of Read, Aimone, Hwang, and Kwalwasser, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the biological-signal processing system of Read, Aimone, and Hwang to include a state learning unit which generates a state inference model that infers the state of the evaluation target person after the first reaction, by performing machine learning on a relationship between the first reaction and the change in the brainwave information, as suggested by Kwalwasser. The motivation for this is to provide stimulation to a user based on the condition of the user.
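Kwalwasser's model generation and best-fit selection amount to training several candidate regressors against survey-derived state values and keeping the one with the lowest prediction error. A hedged sketch using scikit-learn's RandomForestRegressor (Kwalwasser names random forest regression, but the feature layout, scoring, and helper functions below are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_brain_state_models(feature_sets, survey_states):
    """Train one candidate brain state model per featurization.

    feature_sets: list of (n_samples, n_features) arrays of EEG-derived
    features (hypothetical layout). survey_states: (n_samples,) array of
    survey-derived brain state values used as regression targets.
    """
    models = []
    for X in feature_sets:
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X, survey_states)
        models.append(model)
    return models

def select_best_fit(models, feature_sets, survey_states):
    """Keep the model whose predictions track the user's reported states
    most closely (scored on the same data here for brevity; held-out
    recordings would be used in practice)."""
    mse = [np.mean((m.predict(X) - survey_states) ** 2)
           for m, X in zip(models, feature_sets)]
    return models[int(np.argmin(mse))]
```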
As to claim 14, Read, Aimone, Hwang, and Kwalwasser disclose the limitations of claim 13, and further disclose the communication device according to claim 13, wherein the information acquisition unit acquires the brainwave information after the first reaction of the communication device (Hwang: [0081]-[0084], and FIG. 2-4: The device 100 may measure brainwave signals at short time intervals of 1 to 5 seconds, and process content according to the brainwave signals, based on adjustment sensitivity set by the user or determined according to a predetermined algorithm. That is, like in 320, the device 100 may determine a user's status via measurement of brainwaves and process the music currently being reproduced to output sounds 380 and 390 (330, 340, and 350)), and the reaction determination unit determines the second reaction of the communication device, based on the brainwave information after the first reaction, and the state of the evaluation target person inferred by the state inference model (Hwang: [0049]-[0050], [0057], [0067], [0075]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 2-3, quoted above).

As to claim 15, Read, Aimone, Hwang, and Kwalwasser disclose the limitations of claim 13, and further disclose the communication device according to claim 13, wherein when a difference between the state of the evaluation target person based on the brainwave information after the first reaction (Hwang: Abstract, [0026]-[0028], [0050]-[0055], [0060]-[0061], and FIG. 2-4: the device 100 may acquire bio-signals of a user via the sensor 110 (S201). The bio-signals may be signals that may be used to detect a user's status such as brainwaves, the amount of oxygen in cerebral blood flow, and pulses), and the state of the evaluation target person inferred by the state inference model (Kwalwasser: Abstract, [0021]-[0022], [0025], [0028], [0030]-[0033], [0046]-[0049], and FIG. 1-3, quoted above), exceeds a predetermined threshold value (Read: [0124], [0139], [0145]-[0148], [0152], [0157], and FIG. 1: In some alternative embodiments, similar processes can be performed in parallel on separate spectral bands in order to generate an instantaneous biomarker such as a theta/alpha oscillatory frequency ratio. When the theta/alpha oscillatory frequency ratio crosses an arbitrary, predetermined threshold or a user-defined decision variable (DV) level, the processing module can be configured to recognize this event as a trigger or a switch to modulate an audio speaker output or to turn the audio speaker output on or off), the reaction determination unit determines a predetermined reaction as the second reaction (Hwang: [0049]-[0050], [0057], [0067], [0075]-[0079], [0082]-[0083], [0133]-[0136], and FIG. 2-3, quoted above).
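The claim 15 logic mapped above is, at bottom, a divergence check: compare the state measured from post-reaction brainwaves against the state the inference model predicted, and fall back to a predetermined reaction when the two disagree by more than a threshold. A minimal sketch, using Read's theta/alpha band-power ratio as the assumed state biomarker (the band edges, the Welch-based power estimate, and all function names are illustrative, not taken from the cited references):

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Average PSD of one EEG channel in [lo, hi) Hz via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean())

def measured_state(eeg, fs):
    """Theta/alpha ratio as an instantaneous state biomarker (Read-style);
    the 4-8 Hz and 8-13 Hz band edges are conventional assumptions."""
    return band_power(eeg, fs, 4.0, 8.0) / band_power(eeg, fs, 8.0, 13.0)

def second_reaction(eeg_after, fs, inferred_state, threshold, planned, fallback):
    """Claim-15-style rule: if the state measured after the first reaction
    diverges from the model's inferred state by more than the threshold,
    return the predetermined fallback reaction."""
    if abs(measured_state(eeg_after, fs) - inferred_state) > threshold:
        return fallback
    return planned
```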
Allowable Subject Matter

Claims 7-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art does not teach the combination of limitations wherein the reaction determination unit generates the state information based on (i) a change from a ratio of an amplitude of a brainwave in a predetermined frequency band to a total amplitude in the brainwave information before the first reaction, to a ratio of an amplitude of a brainwave in the frequency band to the total amplitude in the brainwave information after the first reaction, and (ii) a ratio of a magnitude of a first power spectrum to a magnitude of a second power spectrum in a heart rate of the evaluation target person, where the total amplitude is a sum of amplitudes of an alpha wave, a beta wave, a theta wave, a gamma wave, and a delta wave, and the frequency band of the second power spectrum is a band in which the frequency is higher than that in the frequency band of the first power spectrum, as presented in claim 7. Although many of the limitations of the claims can be individually found in the prior art, there is no reasonable combination of references sufficient to teach the invention as claimed in claim 7.
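The claim 7 combination found allowable is concrete enough to write out: a before/after change in one band's share of the total EEG amplitude (the total being the sum over alpha, beta, theta, gamma, and delta), combined with a heart-rate spectral ratio in which the second band sits higher in frequency than the first (the common LF/HF split is assumed here purely for illustration). A minimal sketch with invented numbers:

```python
BANDS = ("alpha", "beta", "theta", "gamma", "delta")

def band_share(amplitudes, band):
    """Ratio of one band's amplitude to the total amplitude, where the
    total is the sum over alpha, beta, theta, gamma, and delta."""
    total = sum(amplitudes[b] for b in BANDS)
    return amplitudes[band] / total

def claim7_state_info(amps_before, amps_after, band, hr_lf_power, hr_hf_power):
    """State information per claim 7: change in the band's amplitude share
    across the first reaction, plus a heart-rate spectral ratio in which
    the second band (here HF) is higher in frequency than the first (LF)."""
    share_change = band_share(amps_after, band) - band_share(amps_before, band)
    spectral_ratio = hr_lf_power / hr_hf_power
    return share_change, spectral_ratio

# Usage with illustrative numbers (not from the application):
before = dict(alpha=3.0, beta=2.0, theta=1.0, gamma=0.5, delta=1.5)
after = dict(alpha=4.0, beta=1.5, theta=1.0, gamma=0.5, delta=1.0)
print(claim7_state_info(before, after, "alpha", hr_lf_power=0.9, hr_hf_power=0.6))
```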
Citation of Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: McTernan et al., US 2023/0326092 A1, discloses real-time visualization of head-mounted display user reactions. Lee, US 2022/0067376 A1, discloses a method for generating a highlight image using biometric data, and a device therefor. Cavalli et al., US 2021/0338177 A1, discloses a visualized virtual agent.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUANG PHAM, whose telephone number is (571) 270-3668. The examiner can normally be reached 09:00 AM - 05:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, QUAN-ZHEN WANG, can be reached at (571) 272-3114. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUANG PHAM/
Primary Examiner, Art Unit 2685

Prosecution Timeline

Jun 16, 2024
Application Filed
Sep 13, 2025
Non-Final Rejection — §101, §103
Dec 11, 2025
Response Filed
Dec 23, 2025
Final Rejection — §101, §103
Mar 27, 2026
Request for Continued Examination
Mar 30, 2026
Response after Non-Final Action
Apr 06, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604168
Emergency Management System and Method
2y 5m to grant · Granted Apr 14, 2026
Patent 12594879
EMERGENCY VEHICLE LIGHTING SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12592103
SYSTEM AND METHOD FOR COMMUNICATING DRIVING INTENT OF AN AUTONOMOUS VEHICLE
2y 5m to grant · Granted Mar 31, 2026
Patent 12546150
DOOR ASSEMBLY FOR MOTOR VEHICLE
2y 5m to grant · Granted Feb 10, 2026
Patent 12546146
CONTROL METHOD FOR VEHICLE DOOR AND APPARATUS VEHICLE AND COMPUTER STORAGE MEDIUM
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
99%
With Interview (+57.3%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 699 resolved cases by this examiner. Grant probability derived from career allow rate.
