DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1-20 were previously pending in this application. The amendment filed 17 September 2025 has been entered and the following has occurred: Claims 1, 12, & 19 have been amended. No claims have been cancelled or added.
Claims 1-20 remain pending in the application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claims recite subject matter within a statutory category as a process (claims 12-18) and a machine (claims 1-11 & 19-20), which recite steps of:
receive, from at least one electronic device of the plurality of electronic devices, a first signal associated with a first physiological response of the user;
determine, based on the first signal, an estimate of a state of the user; and
cause, based on the estimate of the state of the user, presentation of an electronic representation of a personalized psychophysiological stimuli via a second electronic device embedded in an environment local to the user.
These steps of receiving a first signal associated with a physiological response of the user, determining, based on the signal, an estimate of a state of the user, and causing, based on the estimate of the state of the user, presentation of a personalized psychophysiological stimuli, as drafted, under the broadest reasonable interpretation, cover performance of the limitations in the mind but for recitation of generic computer components. That is, other than reciting steps as performed by the generic computer components, nothing in the claim element precludes the steps from practically being performed in the mind. For example, but for the receiving a first signal associated with a physiological response of the user language, receiving a signal in the context of this claim encompasses a mental process of a person or entity either collecting physiological response data from the user, such as via observation, or receiving the data from a medium that the person or entity is using. Similarly, the limitation of determining an estimate of a state of the user based on the collected signal, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, such as a person or entity diagnosing the user based on the collected physiological response data. Likewise, but for the causing presentation of an electronic representation of a personalized psychophysiological stimuli language, presentation of stimuli in the context of this claim encompasses a mental process of a person or entity choosing content to output to a user given the user’s psychological or physiological state. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claims 2-11, 13-18, & 20, reciting particular aspects of how the signal may be collected or how the representation may be outputted, which may be performed in the mind but for recitation of generic computer components).
This judicial exception is not integrated into a practical application. In particular, the additional elements, other than the abstract idea per se, do not integrate the abstract idea into a practical application because they amount to no more than limitations which:
amount to mere instructions to apply an exception (such as recitation of one or more electronic devices, a processor, a memory, a physiological sensor, a smart watch, and a smart phone, which amounts to invoking computers as a tool to perform the abstract idea, see Applicant’s Specification [0007] for one or more electronic devices, [0054] for a processor, [0048] for a memory, [0014] for a physiological sensor, [0014] for a smart watch, [0042] for a smart phone, see MPEP 2106.05(f));
add insignificant extra-solution activity to the abstract idea (such as recitation of receiving, from at least one electronic device of the plurality of electronic devices, which amounts to mere data gathering; recitation of determine, based on the first signal, an estimate of a state of the user, which amounts to selecting a particular data source or type of data to be manipulated; and recitation of cause, based on the estimate of the state of the user, presentation of an electronic representation of a personalized psychophysiological stimuli via a second electronic device embedded in an environment local to the user, which amounts to insignificant application, see MPEP 2106.05(g));
generally link the abstract idea to a particular technological environment or field of use (such as recitation of a biocybernetics adaptation and biofeedback, see MPEP 2106.05(h)).
Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims (such as claims 2-11, 13-18, & 20, which recite one or more electronic devices, a vehicle, an electronic gaming system, and a virtual reality display, i.e. additional limitations which amount to invoking computers as a tool to perform the abstract idea; claims 2, 6, 12, & 16, which recite limitations relating to sending a signal with the estimate of the state of the user and generating a signal associated with a physiological state experienced by the user, i.e. additional limitations which add insignificant extra-solution activity to the abstract idea amounting to mere data gathering; claims 3, 10, 13, 18, & 20, which recite limitations relating to generating or estimating the determined state and generating one or more feedbacks, i.e. additional limitations which add insignificant extra-solution activity to the abstract idea by selecting a particular data source or type of data to be manipulated; and claims 2-11, 13-18, & 20, which recite limitations relating to a particular cognitive state, emotional state, electronic systems, and/or devices to be used such as to display or present content, i.e. additional limitations which generally link the abstract idea to a particular technological environment or field of use). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and generally link the abstract idea to a particular technological environment or field of use. Additionally, the additional limitations, other than the abstract idea per se, amount to no more than limitations which:
amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields (such as receiving, from at least one electronic device of the plurality of electronic devices, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); determine, based on the first signal, an estimate of a state of the user, and presentation of an electronic representation of a personalized psychophysiological stimuli via a second electronic device embedded in an environment local to the user, e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); maintain one or more states of the user and associated electronic representations of personalized psychophysiological stimuli, e.g., electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); storing computerized instructions for performance of the steps recited, e.g., storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv)).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea. Dependent claims also recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims (such as claims 2-11, 13-18, & 20, which recite additional limitations that amount to elements recognized as well-understood, routine, and conventional activity in particular fields; claims 2, 6, 12, & 16, which recite limitations relating to sending a signal with the estimate of the state of the user and generating a signal associated with a physiological state experienced by the user, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); claims 3, 10, 13, 18, & 20, which recite limitations relating to generating or estimating the determined state and generating one or more feedbacks, e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); claims 2-10, 13-18, & 20, which recite limitations relating to storing computerized instructions for performance of the steps recited, storing one or more psychophysiological states of the user, etc., e.g., storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv); and claims 2-11, 13-18, & 20, which recite limitations relating to a particular cognitive state, emotional state, electronic systems, and/or devices to be used such as to display or present content, e.g., gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, MPEP 2106.05(a)(II)(iii)). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ashgar et al. (U.S. Patent Publication No. 2023/0106673), hereinafter “Ashgar.”
Claim 1 –
Regarding Claim 1, Ashgar discloses a biocybernetics adaptation and biofeedback training system comprising:
a plurality of electronic devices associated with a user (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, i.e. physiological response of the user, thereby associated with the user), wherein
each electronic device of the plurality of electronic devices generates a signal associated with a physiological response of the user (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, i.e. physiological responses of the user, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant);
a computing device comprising:
at least one processor (See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor);
memory storing instructions that, when executed by the at least one processor, cause the computing device (See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in or on at least one memory to perform one or more of the functions or operations described throughout Ashgar) to:
receive, from at least one electronic device of the plurality of electronic devices, a first signal associated with a first physiological response of the user (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant; See Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices);
determine, based on the first signal, an estimate of a state of the user (See Ashgar Par [0010]-[0013] which discloses determining, i.e. determining an estimate, of a state of the user by receiving data associated with the one or more sensors of the vehicle and/or health sensors of the user; See Ashgar Par [0147] which discloses monitoring an occupant associated with a mobile device and detecting a state of the occupant of a vehicle based on various aspects of the occupant); and
cause, based on the estimate of the state of the user, presentation of an electronic representation of a personalized psychophysiological stimuli via a second electronic device embedded in an environment local to the user (While not “embedded” per se, see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences, and is thereby understood to constitute “embedding” said system; See Ashgar Par [0174]-[0175] which discloses an AR application modulating virtual content, i.e. psychophysiological stimuli under BRI, by controlling virtual content being presented, when the content is presented, where the content is presented, and one or more characteristics of the virtual content presented, based on the state of the occupant, which is understood by Examiner to constitute a “personalized” stimuli since the content presented is based on the state of the occupant).
Claim 2 –
Regarding Claim 2, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the state of the user comprises one or both of a cognitive state and emotional state (See Ashgar Par [0148] which discloses the state of the user comprising one or more cognitive states and/or emotional states).
Claim 3 –
Regarding Claim 3, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the instructions further cause the computing device to send, to the second electronic device embedded in the environment local to the user, a signal associated with the estimate of the state of the user (See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in or on at least one memory to perform one or more of the functions or operations described throughout Ashgar; While not “embedded” per se, see Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices, such that these systems or devices are understood to be “embedded”; See Ashgar Par [0125] which discloses a vehicle computing system that includes any of the sensor data and/or data generated based on the sensor data, such that a description of information in the sensor data can be generated by the vehicle computing system).
Claim 4 –
Regarding Claim 4, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the plurality of electronic devices associated with the user comprises a smart phone (See Ashgar Par [0024] & [0073] which discloses one or more sensor systems and/or devices, such as a smart phone, smart watch, or the like, for collecting and/or outputting data/content associated with a user).
Claim 5 –
Regarding Claim 5, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the plurality of electronic devices associated with the user comprises one or more physiological sensor devices (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant; See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in or on at least one memory to perform one or more of the functions or operations described throughout Ashgar; See Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices; See Ashgar Par [0125] which discloses a vehicle computing system that includes any of the sensor data and/or data generated based on the sensor data, such that a description of information in the sensor data can be generated by the vehicle computing system).
Claim 6 –
Regarding Claim 6, Ashgar discloses the system of claim 5 in its entirety. Ashgar further discloses a system, wherein:
at least one physiological sensor device generates a signal associated with a physiological state experienced by the user (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant; See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in or on at least one memory to perform one or more of the functions or operations described throughout Ashgar; See Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices; See Ashgar Par [0125] which discloses a vehicle computing system that includes any of the sensor data and/or data generated based on the sensor data, such that a description of information in the sensor data can be generated by the vehicle computing system).
Claim 7 –
Regarding Claim 7, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the environment local to the user comprises a vehicle interior (See Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences; See Ashgar Par [0004] which discloses the method including one or more images of an interior portion of a vehicle, such as to determine a state of an occupant of the vehicle; See Ashgar Par [0175] which discloses an AR application modulating virtual content, i.e. psychophysiological stimuli under BRI, by controlling virtual content being presented, when the content is presented, where the content is presented, and one or more characteristics of the virtual content presented, based on the state of the occupant).
Claim 8 –
Regarding Claim 8, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the environment local to the user comprises a portion of an electronic gaming system (See Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences; See Ashgar Par [0045] & [0048] which discloses one or more AR or VR systems or content being used for gaming, entertainment, and/or other applications, i.e. gaming system; See Ashgar Par [0160] which discloses the content filtering engine filtering any virtual content distracting the occupant or determined to distract the occupant, e.g. based on the state of the occupant, such that the content filtering engine can determine various video game content to output or attenuate for the occupant(s)).
Claim 9 –
Regarding Claim 9, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the second electronic device embedded in the environment local to the user comprises a virtual reality display (While not “embedded” per se, see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences, and is thereby understood to constitute “embedding” said system).
Claim 10 –
Regarding Claim 10, Ashgar discloses the system of claim 1 in its entirety. Ashgar further discloses a system, wherein:
the instructions further cause the computing device to cause the virtual reality display to display a visual representation of a locality external to the environment local to the user (See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in or on at least one memory to perform one or more of the functions or operations described throughout Ashgar; See Ashgar Par [0171] which discloses a portion of an environment outside of the vehicle being captured by one or more image sensors of the sensor system; See Ashgar Par [0158] which discloses the content filtering engine filtering/blocking certain content that may distract the occupants, including a certain region outside of the vehicle that the content filtering engine determines should remain visible to the occupant, or replacing virtual content rendered by the mobile device with live content that may not obstruct a view of the occupant to a vehicle event or an environment outside of the vehicle).
Claim 11 –
Regarding Claim 11, Ashgar discloses the system of claim 10 in its entirety. Ashgar further discloses a system, wherein:
the environment local to the user comprises an interior space of a vehicle and the locality external to the environment local to the user comprises a space external to the vehicle (See Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences; See Ashgar Par [0004] which discloses the method including one or more images of an interior portion of a vehicle, such as to determine a state of an occupant of the vehicle; See Ashgar Par [0175] which discloses an AR application modulating virtual content, i.e. psychophysiological stimuli under BRI, by controlling virtual content being presented, when the content is presented, where the content is presented, and one or more characteristics of the virtual content presented, based on the state of the occupant; See Ashgar Par [0171] which discloses a portion of an environment outside of the vehicle being captured by one or more image sensors of the sensor system; See Ashgar Par [0158] which discloses the content filtering engine filtering/blocking certain content that may distract the occupants, including a certain region outside of the vehicle that the content filtering engine determines should remain visible to the occupant, or replacing virtual content rendered by the mobile device with live content that may not obstruct a view of the occupant to a vehicle event or an environment outside of the vehicle).
Claim 12 –
Regarding Claim 12, Ashgar discloses a method comprising:
receiving, from at least one electronic device of a plurality of electronic devices associated with a user, a first signal associated with a first physiological response of the user (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant; See Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices);
determining, based on the first signal, an estimate of a state of the user comprising one or both of a cognitive state of the user and an emotional state of the user (See Ashgar Par [0010]-[0013] which discloses determining, i.e. determining an estimate, of a state of the user by receiving data associated with the one or more sensors of the vehicle and/or health sensors of the user; See Ashgar Par [0147] which discloses monitoring an occupant associated with a mobile device and detecting a state of the occupant of a vehicle based on various aspects of the occupant); and
causing, based on the estimate of the state of the user, presentation of an electronic representation of a personalized psychophysiological stimuli via a second electronic device embedded in an environment local to the user (While not “embedded” per se, see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences, and is thereby understood to constitute “embedding” said system; See Ashgar Par [0174]-[0175] which discloses an AR application modulating virtual content, i.e. psychophysiological stimuli under BRI, by controlling virtual content being presented, when the content is presented, where the content is presented, and one or more characteristics of the virtual content presented, based on the state of the occupant, which is understood by Examiner to constitute a “personalized” stimuli since the content presented is based on the state of the occupant).
Claim 13 –
Regarding Claim 13, Ashgar discloses the method of claim 12 in its entirety. Ashgar further discloses a method, further comprising:
sending, to the second electronic device embedded in the environment local to the user, a signal associated with the estimate of the state of the user (While not “embedded” per se, see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences, and is thereby understood to constitute “embedding” said system).
Claim 14 –
Regarding Claim 14, Ashgar discloses the method of claim 12 in its entirety. Ashgar further discloses a method, wherein:
the plurality of electronic devices associated with the user comprises a smart phone (See Ashgar Par [0024] & [0073] which discloses one or more sensor systems and/or devices, such as a smart phone, smart watch, or the like, for collecting and/or outputting data/content associated with a user).
Claim 15 –
Regarding Claim 15, Ashgar discloses the method of claim 13 in its entirety. Ashgar further discloses a method, wherein:
the plurality of electronic devices associated with the user comprises at least one physiological sensor device (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant; See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in or on at least one memory to perform one or more of the functions or operations described throughout Ashgar; See Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices; See Ashgar Par [0125] which discloses a vehicle computing system that includes any of the sensor data and/or data generated based on the sensor data, such that a description of information in the sensor data can be generated by the vehicle computing system).
Claim 16 –
Regarding Claim 16, Ashgar discloses the method of claim 15 in its entirety. Ashgar further discloses a method, wherein:
the at least one physiological sensor device generates a signal associated with a physiological state experienced by the user (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant; See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in or on at least one memory to perform one or more of the functions or operations described throughout Ashgar; See Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices; See Ashgar Par [0125] which discloses a vehicle computing system that includes any of the sensor data and/or data generated based on the sensor data, such that a description of information in the sensor data can be generated by the vehicle computing system).
Claim 17 –
Regarding Claim 17, Ashgar discloses the method of claim 12 in its entirety. Ashgar further discloses a method, wherein:
the second electronic device embedded in an environment local to the user comprises a virtual reality device associated with a gaming system (While not “embedded” per se, see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences, and is thereby understood to constitute “embedding” said system; See Ashgar Par [0045] & [0048] which discloses one or more AR or VR systems or content being used for gaming, entertainment, and/or other applications, i.e. gaming system; See Ashgar Par [0160] which discloses the content filtering engine filtering any virtual content distracting the occupant or determined to distract the occupant, e.g. based on the state of the occupant, such that the content filtering engine can determine various video game content to output or attenuate for the occupant(s)).
Claim 18 –
Regarding Claim 18, Ashgar discloses the method of claim 17 in its entirety. Ashgar further discloses a method, wherein:
the virtual reality device generates one or more of visual feedback, haptic feedback, and audio feedback (see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences; See Ashgar Par [0101] which discloses one or more haptic feedback devices or other output devices being controlled by the vehicle computing system).
Claim 19 –
Regarding Claim 19, Ashgar discloses a computing device comprising:
at least one processor (See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor);
memory storing instructions that, when executed by the at least one processor, cause the computing device (See Ashgar Par [0110] which discloses the vehicle computing system including at least one processor and instructions stored in at least one memory to perform one or more of the functions or operations described throughout Ashgar) to:
receive, from at least one electronic device of a plurality of electronic devices, a first signal associated with a first physiological response of a user, wherein the plurality of electronic devices comprises one or more of a physiological sensor, a smart watch, and a smart phone (See Ashgar Par [0013] which discloses one or more wearable devices and/or mobile devices that record one or more health measurements, including heart rate, blood pressure, body temperature, galvanic skin response, a measurement of an electric signal from a heart of the occupant, a measurement of electrical activity of a brain of the occupant, etc. in order to determine a state of the occupant; See Ashgar Par [0121] which discloses a vehicle computing system obtaining, i.e. via sending or transmission, sensor data from the sensor systems on the vehicle or attached to the one or more occupants/mobile devices; See Ashgar Par [0024] & [0073] which discloses one or more sensor systems and/or devices, such as a smart phone, smart watch, or the like, for collecting and/or outputting data/content associated with a user);
determine, based on the first signal, an estimate of a state of the user (See Ashgar Par [0010]-[0013] which discloses determining, i.e. determining an estimate, of a state of the user by receiving data associated with the one or more sensors of the vehicle and/or health sensors of the user; See Ashgar Par [0147] which discloses monitoring an occupant associated with a mobile device and detecting a state of the occupant of a vehicle based on various aspects of the occupant); and
cause, based on the estimate of the state of the user, presentation of an electronic representation of a personalized psychophysiological stimuli via a second electronic device embedded in an environment local to the user (While not “embedded” per se, see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences, and is thereby understood to constitute “embedding” said system; See Ashgar Par [0174]-[0175] which discloses an AR application modulating virtual content, i.e. psychophysiological stimuli under BRI, by controlling virtual content being presented, when the content is presented, where the content is presented, and one or more characteristics of the virtual content presented, based on the state of the occupant, which is understood by Examiner to constitute a “personalized” stimuli since the content presented is based on the state of the occupant).
Claim 20 –
Regarding Claim 20, Ashgar discloses the computing device of claim 19 in its entirety. Ashgar further discloses a computing device, wherein:
the second electronic device embedded in an environment local to the user comprises a virtual reality device associated with a gaming system (See Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences; See Ashgar Par [0045] & [0048] which discloses one or more AR or VR systems or content being used for gaming, entertainment, and/or other applications, i.e. gaming system; See Ashgar Par [0160] which discloses the content filtering engine filtering any virtual content distracting the occupant or determined to distract the occupant, e.g. based on the state of the occupant, such that the content filtering engine can determine various video game content to output or attenuate for the occupant(s)) and the virtual reality device generates one or more of visual feedback, haptic feedback, and audio feedback (see Ashgar Par [0044] which discloses one or more virtual reality systems facilitating interactions with VR environments and possibly combining real-world or physical environments and virtual environments to provide users with XR or VR experiences; See Ashgar Par [0101] which discloses one or more haptic feedback devices or other output devices being controlled by the vehicle computing system).
Response to Arguments
Applicant's arguments filed 17 September 2025 have been fully considered but they are not persuasive:
Regarding the 35 U.S.C. 101 rejections of claims 1-20, Applicant argues on p. 6-7 of Arguments/Remarks that the generalization of the presentation of a psychophysiological stimuli on a second electronic device embedded in an environment local to the user as “a mental process of a human or entity choosing content to output to a user given the user’s psychological or physiological state” ignores limitations in the claim, including the physical electronic device generating the signals associated with a physiological response of the user, and that, as such, the physical structures and steps recited in the claims do not describe an abstract concept or a mental process. Examiner respectfully disagrees with Applicant’s arguments. The physical structures and/or steps that fall outside of being reasonably performed in the mind are/were considered under step 2A/2B of the Alice/Mayo framework for determining patent-eligibility as additional elements. Each of these limitations was determined to represent insignificant extra-solution activity and/or well-understood, routine, conventional activity in prior art systems. While Applicant states that it is not clear how the human mind can generate electronic signals based on physiological responses of a user or how the human mind could process signals generated by electronic devices, MPEP 2106.04(a)(2)(III)(C) states that performing a mental process using a generic computer, or simply using a computer as a tool to perform a mental process, still constitutes the mental process itself. For instance, Examiner generally concedes that these aspects may not be reasonably performed in the human mind itself, but the general electronic signals and/or signals generated by electronic devices can easily be processed by generic computers, especially given the generic nature of the aspects described, and, in view of the above-mentioned MPEP 2106.04(a)(2)(III)(C), would still constitute the characterized abstraction itself.
While Applicant likens the pending claims to example (iii) provided in MPEP 2106.04(a)(1) regarding an “earring comprising a sensor for taking periodic blood glucose measurements and a memory for storing measurement data from the sensor”, there are additional steps recited in the pending claims that amount to an abstract idea. That is, while there may be additional elements in the claims that do not represent abstract concepts, there are additional steps that fall outside of the specific example of an “earring comprising a sensor for taking periodic blood glucose measurements and a memory for storing measurement data from the sensor”. Additionally, the cited example does not recite abstract steps, but rather simple structural elements recited for an intended purpose of performing various steps, whereas the instantly pending claims recite standalone abstract concepts. As such, claims 1-20 remain rejected under 35 U.S.C. 101.
Regarding the 35 U.S.C. 102 rejections of claims 1-20, Applicant argues on p. 8 of Arguments/Remarks that independent claims 1, 12, & 19 have been amended to include “… presentation of an electronic representation of a personalized psychophysiological stimuli…”, and therefore overcome the previous 35 U.S.C. 102 rejections made in view of Ashgar, because Ashgar does not disclose that the psychophysiological stimuli that is presented is “personalized”. Examiner agrees with Applicant’s arguments. Therefore, the previous 35 U.S.C. 102 rejection has been withdrawn. However, upon further consideration, a new ground of rejection has been made under 35 U.S.C. 102 in view of newly cited portions of Ashgar. For instance, Ashgar Par [0174]-[0175] discloses an AR application modulating virtual content, i.e. psychophysiological stimuli under BRI, by controlling virtual content being presented, when the content is presented, where the content is presented, and one or more characteristics of the virtual content presented, based on the state of the occupant, which is understood by Examiner to constitute a “personalized” stimuli since the content presented is based on the state of the occupant. Therefore, it is determined that Ashgar still reads on the amended limitations found in independent claims 1, 12, & 19. As such, claims 1-20 remain rejected under 35 U.S.C. 102.
Regarding 35 U.S.C. 102 rejections of claims 1-20, Applicant argues on p. 8 of Arguments/Remarks that claims dependent from purportedly allowable independent claims 1, 12, & 19 are also purportedly allowable over the prior art by virtue of dependency. Examiner respectfully disagrees with Applicant’s arguments. As discussed above, Ashgar still reads on the amended limitations found in independent claims 1, 12, & 19. As such, Applicant’s arguments regarding dependent claims being purportedly allowable since independent claims 1, 12, & 19 are allowable are rendered moot, because independent claims 1, 12, & 19 remain rejected under 35 U.S.C. 102. As such, claims 1-20 remain rejected under 35 U.S.C. 102.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
Herz et al. (U.S. Patent Publication No. 2022/0224963) discloses a system for personalizing a vehicle with a sensory output device that includes receiving, by one or more processors, a signal indicating an identity or passenger profile of a detected passenger in or boarding the vehicle, accessing preference data and geographic location data for the passenger, and selecting sensory content for delivery to the passenger in the vehicle based on the preference data and geographic location data;
Mirjalili et al. (U.S. Patent Publication No. 2023/0051444) discloses a system for determining anatomical relationships between eye movement and mental state, the processing module estimates characteristics of the individual such as fatigue, intoxication, injury, or a medical condition that have known effects on eye movement patterns, such that the system generates an output indicative of the estimated mental state to alert the individual to the detected condition or to initiate an automated action;
Chandupatla et al. (U.S. Patent Publication No. 2021/0318135) discloses a system for obtaining one or more features of the route, augmenting a map of the route with the one or more features to generate an augmented map, and displaying the augmented map on a windshield of the vehicle.
Applicant's amendment necessitated the new ground of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNTER J RASNIC whose telephone number is (571)270-5801. The examiner can normally be reached M-F 8am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.R./Examiner, Art Unit 3684
/Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684