Prosecution Insights
Last updated: April 19, 2026
Application No. 18/262,746

DEVICE AND METHOD FOR MODIFYING AN EMOTIONAL STATE OF A USER

Non-Final OA: §101, §102, §103, §112
Filed: Jul 25, 2023
Examiner: CASLER, BRIAN L
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Aphelior
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 72%, above average (21 granted / 29 resolved; +2.4% vs TC avg)
Interview Lift: +22.9% (strong), measured across resolved cases with vs. without an interview
Typical Timeline: 4y 2m average prosecution; 32 applications currently pending
Career History: 61 total applications across all art units
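These headline figures can be re-derived from the raw counts above. A minimal Python sketch; note the 70% Tech Center baseline is inferred from the +2.4% delta, not a published USPTO number:

    # Re-deriving the examiner's career stats from the counts shown above.
    granted, resolved = 21, 29
    allow_rate = 100 * granted / resolved          # 72.4%, shown as 72%
    print(f"career allow rate: {allow_rate:.0f}%")
    # "+2.4% vs TC avg" implies a Tech Center baseline of roughly
    # allow_rate - 2.4, i.e. about 70% -- an inference, not a published figure.
    print(f"implied TC average: {allow_rate - 2.4:.1f}%")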

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§102: 25.3% (-14.7% vs TC avg)
§103: 36.3% (-3.7% vs TC avg)
§112: 23.1% (-16.9% vs TC avg)
Tech Center averages shown for comparison are estimates • Based on career data from 29 resolved cases
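The per-statute deltas follow one pattern: all four imply a single estimated Tech Center baseline of 40%. A minimal sketch of that arithmetic; the 40% baseline is an inference from the displayed numbers, not a published figure:

    # Hedged sketch: re-deriving the "vs TC avg" deltas shown above.
    EXAMINER_RATES = {"101": 9.5, "102": 25.3, "103": 36.3, "112": 23.1}
    TC_AVG_ESTIMATE = 40.0  # implied by rate - delta for every statute; assumption

    for statute, rate in sorted(EXAMINER_RATES.items()):
        delta = rate - TC_AVG_ESTIMATE
        print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")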

Office Action

Grounds of rejection: §101, §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "a module for determining", "an automatic selector", and "a secondary selector" in claim 1; and "a collector of sound file" and "a sound file classifier" in claim 2.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, line 16, "the sound file selected manually" lacks antecedent basis. Perhaps "a sound file" in line 15, selected by the secondary selector, should refer to --a manually selected sound file--.

Regarding claim 2, line 6, "that sound file" lacks antecedent basis; it is unclear to which sound file from claim 1 "that sound file" refers.

Regarding claim 7, line 2, "at least one sound file" is unclear as to whether the intent is to introduce another sound file different from the two introduced in claim 1, and in line 4, "each said sound file" is unclear as to which of the multiple sound files introduced is intended.
Regarding claims 9 and 10, the claims seem to set forth additional "sound files" and a "sound file sequence", and it is unclear whether these are in addition to the sound files and sound file sequence already set forth in independent claim 1.

Regarding claim 11, the claim sets forth additional "sound files", and it is unclear whether these are in addition to the sound files already set forth in independent claim 1. Also, "the duration, mode, tonality, quantification of the beat and the tempo of the sound file" lacks antecedent basis.

Regarding claim 12, line 2, it is unclear to which sequence "the sound file sequence" refers: the sound file sequence set forth in claim 10 or the sound file sequence also set forth in claim 1.

Regarding claim 15, line 5, "the real-time reading of electroencephalographic signals" lacks antecedent basis.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because they are directed to an abstract idea without significantly more.

With respect to claims 1 and 15, the claims recite the following limitations:

Claim 1: a module for determining an emotional state based on an electroencephalographic signal read; a means for determining a target emotional state; an automatic selector of a sequence of at least one sound file, from a previously assembled list of sound files, based on the target emotional state determined, the electroencephalographic signal read, and at least one parameter associated to each said sound file; and an electroacoustic transducer configured to play the selected sequence of sound files; characterized in that the device also comprises: a secondary selector of a sound file; and a means for updating the sequence based on the sound file selected manually by the secondary selector.

Claim 15: a step of determining a target emotional state; and then, iteratively, at least one portion of the following steps: a step of the real-time reading of electroencephalographic signals; a step of determining an emotional state based on an electroencephalographic signal read; a step of automatically selecting a sequence of at least one sound file, from a previously assembled list of sound files, based on the target emotional state determined, the electroencephalographic signal read, and at least one parameter associated to each said sound file; a step of an electroacoustic transducer playing the selected sequence of sound files; characterized in that the method also comprises: a step of a secondary selection of a sound file; and a step of updating the sequence based on the sound file selected manually by the secondary selector.

Step 1 - Claims 1 and 15 are directed to a device and method for modifying an emotional state of a user.

Step 2A Prong 1 - The claimed invention is directed to non-statutory subject matter.
The above limitations, under their broadest reasonable interpretation, fall within the "Certain Methods of Organizing Human Activity"/"Managing Personal Behavior" and "Mental Processes" groupings of abstract ideas enumerated in MPEP 2106.04(a)(2)(II)-(III), in that they recite managing personal behavior and mental processes in a device and a series of steps to modify an emotional state of a user. When given their broadest reasonable interpretation, the limitations are considered an abstract idea, namely certain methods of organizing human activity.

With respect to claim 1, the device is directed to reading or observing EEG signals or data. The module for determining the emotional state of the user is disclosed as computer-implemented, but the determination could be done by hand or in the mind: observing and describing the emotional state of a user based on a commonly used three-variable system (Valence, Arousal, and Dominance) and ascribing a simple number system to the three variables, as set forth on page 7 of applicant's specification (for example, determining the user is "sad"). Determining a target emotional state is disclosed as a human interface receiving human input with respect to a desired or target emotional state (for example, setting the target state for the user to be "happy"). The automatic selector is disclosed on page 8 of the specification as a computer, but its function could be done by a human by hand or in the mind: determining the degree of the emotional state relative to the target state (the user is very sad) and selecting a series of music files from a playlist, to be played through headphones (merely a means to implement the abstract idea) worn by the user, so that listening to the music helps the user achieve the desired emotional state based on the type of music (music corresponding to happy feelings based on style, beat, tempo, etc.) and its relationship to the target emotional state (happy music to help the user feel happy instead of sad). Finally, the user can manually add or select additional music files to add to the playlist and update the previously selected playlist.
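For orientation, the loop described in this characterization (a numeric Valence-Arousal-Dominance state read from EEG, a target state, a selector that orders tracks from the current state toward the target, and a manual override that updates the sequence) can be sketched in a few lines of Python. This is an illustration of the Office Action's characterization only, not the applicant's implementation; the VAD scale, the Euclidean distance, and the linear gradient below are all assumptions, and every name is hypothetical.

    # Illustrative sketch only: models the Office Action's characterization
    # of claims 1/12/15, not the application's actual code.
    from dataclasses import dataclass

    @dataclass
    class Track:
        title: str
        vad: tuple  # (valence, arousal, dominance), each scored e.g. 0..1

    def dist(a, b):
        # Euclidean distance between two VAD triples (an assumed metric).
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def select_sequence(library, current, target, steps=5):
        # Automatic selector: mirror the current state first, then lead the
        # listener toward the target along a gradient (the examiner reads
        # claim 12 as a "gradient increasing emotional state value").
        waypoints = [
            tuple(c + (t - c) * i / (steps - 1) for c, t in zip(current, target))
            for i in range(steps)
        ]
        return [min(library, key=lambda trk: dist(trk.vad, w)) for w in waypoints]

    def manual_override(sequence, choice):
        # Secondary selector: a manually chosen track updates the sequence.
        return [choice] + [t for t in sequence if t is not choice]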
With respect to claim 15, the method is characterized in the same way as claim 1: the steps of reading or observing EEG signals, determining the user's emotional state under the three-variable (Valence, Arousal, Dominance) system of page 7 of the specification, determining a target emotional state through a human interface, automatically selecting a series of music files relative to the target state, and manually adding or selecting additional files to update the playlist could each be performed by hand or in the mind, with the headphones merely a means to implement the abstract idea.

Step 2A Prong 2 - The recitation of the additional elements of a reader of EEG signals, a module for determining an emotional state, a means for determining a target emotional state, and an electroacoustic transducer or headphones to output or play the sound merely invokes these additional elements as tools to perform the abstract idea. MPEP 2106.05(f). Further, the recitation of these additional elements in the claims generally links the use of the abstract idea to a particular technological environment or field of use, i.e., a computerized environment. MPEP 2106.05(h). As such, under Prong 2 of Step 2A, when considered both individually and as a whole, the limitations of claims 1 and 15 are not indicative of integration into a practical application (Step 2A Prong 2: NO). MPEP 2106.04(d). These additional elements are all recited at an extremely high level of generality and may be interpreted as generic computing devices and output devices used to implement the abstract idea. Per MPEP 2106.05(f), implementing an abstract idea on a generic computing device does not integrate the abstract idea into a practical application under Step 2A Prong 2, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea on a generic computer. As such, these additional elements do not integrate the abstract idea into a practical application, and the claims are therefore directed to the judicial exception.

Step 2B - The recitation of the additional elements is acknowledged, as identified above with respect to Prong 2 of Step 2A. These additional elements do not add significantly more to the abstract idea, for the same reasons addressed above with respect to Prong 2 of Step 2A. Even when considered as an ordered combination, the additional elements of claims 1 and 15 do not add anything that is not already present when they are considered individually.
Therefore, under Step 2B, there are no meaningful limitations in claims 1 and 15 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself (Step 2B: NO). MPEP 2106.05. Accordingly, under the Subject Matter Eligibility test, claims 1 and 15 are ineligible. Furthermore, the dependent claims 2-14 do not add significantly more to the abstract idea, for the same reasons addressed above with respect to Prong 2 of Step 2A.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4 and 6-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Osborne et al. (US 20180027347), hereinafter Osborne.

Osborne teaches a method and system for analyzing audio (e.g., music) tracks. A predictive model of the neuro-physiological functioning and response to sounds by one or more of the human lower cortical, limbic and subcortical regions in the brain is described. Sounds are analyzed so that appropriate sounds can be selected and played to a listener in order to stimulate and/or manipulate neuro-physiological arousal in that listener.

Regarding claims 1 and 15, Osborne teaches a real-time reader of electroencephalographic signals, a module for determining an emotional state based on an electroencephalographic signal, a means for determining a target emotional state, an automatic selector of a sequence of at least one sound file, from a previously assembled list of sound files, based on the target emotional state, the EEG signal, and at least one parameter associated to each said sound file, and an electroacoustic transducer configured to play the selected sequence of sound files, characterized in that the device also comprises a secondary selector of a sound file and a means for updating the sequence based on the sound file selected manually by the secondary selector. Note figures 1-3 and paragraphs [0017]-[0018]:

[0017] The invention is implemented in a system called X-System. X-System includes a database of music tracks that have been analyzed according to musical parameters derived from or associated with a predictive model of human neuro-physiological functioning and response to those audio tracks.

[0018] Measurement of neuro-physiological state may be done using a variety of techniques, such as electro-encephalography.

[0020] X-System may use this sensor data to sub-select music from any chosen repertoire, either by individual track or entrained sequences, that when listened to, will help the user to achieve a target state of excitement, relaxation, concentration, alertness, heightened potential for physical activity etc. This is achieved by analyzing music tracks in the user's database of music (using the musical parameters derived from the predictive model of human neuro-physiological response) and then automatically constructing a playlist of music, which may also be dynamically recalculated based on real-time bio-feedback, to be played to the user in order to lead her/him towards, and help to maintain her/him at, the desired target state.

See also [0034], [0103], [0226], [0231].

[0305] Explicit overrides will permit the user to manually skip a particular track either once, or to permanently blacklist it to ensure it will never be chosen again for them. In addition to their effect, these overrides will feed the decision model.

[0335] Additional modalities include: [0336] EEG type sensors or 'caps' for brainwave activity; [0337] Electromyograph muscular tone/trigger rate; [0338] Multi-point ECG for high-resolution heart waveform; [0339] Breathing depth/rate; [0340] Eye-tracking/Gaze/blink analysis. [0342] Consolidation of sensors into a single package such as a wrist-watch or headphone style appliance would be ideal.

[0097] A sensor may optionally be used to establish the state of arousal of the user, and music categorized by predictive modelling of the INRM paradigm can then be streamed/played back to achieve the target arousal state for that user. In an alternative implementation sensors are not provided. Instead, both initial and target states are self-selected, either directly or indirectly (such as, for example, by selecting a 'start song' which has an arousal value relative to the user's true current state). For example, where the user makes a poor initial selection, he/she might skip from song to song initially until one is found (i.e. by trial and error) that is both 'liked' and 'fits' with their initial state. From there, X-System, in a sensor-less implementation, may create a playlist tending towards the desired arousal state based on expected normal human response.

Regarding claim 2, Osborne teaches a collector of sound file identifiers and a sound file classifier configured to associate, to at least one sound file identifier, a parameter representative of an emotional state fostered by that sound file. Note figures 1-3:

[0010] The invention is a computer implemented system for analyzing sounds, such as audio tracks, the system automatically analyzing sounds according to musical parameters derived from or associated with a predictive model of the neuro-physiological functioning and response to sounds by one or more of the human lower cortical, limbic and subcortical regions in the brain.

[0016] The musical parameters derived from or associated with the predictive model may relate to rhythmicity, and harmonicity and may also relate to turbulence—terms that will be explained in detail below. The invention may be used for the search, selection, ordering (i.e. sequencing), use, promotion, purchase and sale of music. It may further be used to select, modify, order or design non-musical sounds to have a desired neuro-physiological effect in the listener, or to permit selection, for example in designing or modifying engine exhaust notes, film soundtracks, industrial noise and other audio sources.

See also [0017], [0020], [0222], [0227]. FIG. 5 shows a desired architecture overview.
FIG. 5 shows an implementation of the X-System invention where a primary music library and analysis software reside on a user PC that is operable, remotely or locally, by the listener or a third party, with the ability to transfer a selection of music to a personal music player device, which then generates a dynamic playlist based on the available music.

Regarding claim 3, Osborne teaches wherein the classifier is a trained machine learning system. Note figures 1-3:

[0230] A user initially provides the system with their personal music collection (or uses an online library of streamable or downloadable music). This is analyzed for level of excitement, using INRM categorization in combination with signal processing and machine learning techniques. The user then synchronizes this information with their music player and selects a level of excitement/arousal; someone other than the user may also select the excitement level.

See also [0280], [0301].

Regarding claim 4, Osborne teaches wherein the trained machine learning system is a supervised neural network configured to receive, as an input layer, parameter values and, as an output layer, emotional state indicators corresponding to the input layer. Note figures 1-3:

[0231] X-System may learn from individual users the range of their physiological responses in order to identify relative levels of arousal, and individually calibrate the diagnostic software. It may also learn about their personal preferences as already articulated through their choice of repertoire. X-System may also go directly from a set of musical features, using a neural network to predict the effect of these on physiological measurements, without first reducing the features to an expected excitement/arousal level.

Regarding claim 6, Osborne teaches wherein the machine learning system is also pre-trained by using a set of data not specific to the user. Note figures 1-3; paragraphs [0271]-[0280] discuss collecting and utilizing, as part of the learning, generalized group or population data to set moods for large groups of people.

Regarding claim 7, Osborne teaches wherein at least one sound file is associated to an indicator of a behavior of the user regarding each said sound file, the automatic selector being configured to select a sequence of at least one sound file based on a value of this indicator for at least one sound file. Note figures 1-3 and paragraphs [0017]-[0018] and [0020], reproduced above.

Regarding claim 8, Osborne teaches wherein the indicator of the user's behavior is a parameter representative of a number of plays and/or of a number of playback interruptions in favor of another sound track. Note figures 1-3:

[0305] The playback component handles 2 tasks: controlling the music playback, and operating a real-time arousal analysis/entrainment model based on sensor input. The component may be responsible for actually playing the music, or may be a control layer on top of an existing media player such as iTunes/Windows Media Player, etc. The arousal analysis model will be based on the X-System INRM model, using the pre-computed values from the Music Analysis component as a starting point. The user will select a desired outcome, and the sensors will be used to gauge progress towards that outcome of each track. Explicit overrides will permit the user to manually skip a particular track either once, or to permanently blacklist it to ensure it will never be chosen again for them. In addition to their effect, these overrides will feed the decision model.

Regarding claim 9, Osborne teaches wherein the automatic selector comprises a sound file filter based on at least one indicator of a behavior of the user regarding at least one sound file, the selector being configured to select a sound file sequence from a list of sound files filtered by the filter. Note figures 1-3:

[0175] Categorization may be preceded by aggregation, documenting provenance, genre and other data for music tracks. This may be according to an industry standard such as that provided by Gracenote®, it may be the result of individual user editorial, crowd-sourcing methods such as collaborative filtering, or may be the result of future aggregation standards based on, for example, digital signature analysis. The purpose of aggregation is to allow the user to choose a preferred musical style, though it is not strictly necessary for the proper functioning of X-System.

[0271] The main directions of product improvement and expansion are as follows: [0272] Identification of emotional responses to music stimulated by memories or response to lyrics or other aspects of a song or piece of music rather than biology—developed by filtering out the expected physiological responses.

Regarding claim 10, Osborne teaches wherein a parameter used by the automatic selector to select a sound file sequence is representative of an emotional state of the user. Note figures 1-3 and paragraphs [0010] and [0017], reproduced above, as well as [0175] and [0271].

Regarding claim 11, Osborne teaches wherein a parameter used by the automatic selector to select a sound file sequence is, in addition, a technical parameter chosen from the duration, mode, tonality, quantification of the beat and the tempo of the sound file. Note figures 1-3:

[0006] The third method is to analyze metrics computed as a function of the music itself (usually tempo, but may also include a measure of average energy), and relate such metrics to the desired state of arousal of the subject. There are several such systems.
Most rely on either 'entrainment' (in the Huygens sense, namely the tendency to synchronize to an external beat or rhythm) or on the association of increased tempo (and in one known case, energy) with increased effort or arousal (and the converse for reduced tempo and energy).

[0129] X-System detects a basic, "default" rhythmic pulse in terms of beats per minute. There are often difficulties in establishing meter, but X-System approximates the arousal effect of metrical structures by averaging the accumulation of power of rhythmic events over time. The power of a rhythmic event is defined as the ratio of the energy before the beat to the energy after it. In one very simple implementation, the beats per minute value (B) is combined with the mean of the beat strength (S) to produce a value for rhythmicity.

See also [0134].

Regarding claim 12, Osborne teaches wherein the sound file sequence is configured to have a gradient increasing emotional state value corresponding to the target emotional state. Note figures 1-3:

[0017] ... X-System may include also a sensor, a musical selection algorithm/playlist calculator for selecting suitable tracks and a connection to a music player. Once the sensor is activated, the system diagnoses the subject's initial level of neuro-physiological arousal and automatically constructs a playlist derived from a search of an X-System encoded musical or sound database that will first correspond to or mirror this level of arousal, then lead the listener towards, and help to maintain her/him at, the desired level of arousal. The playlist is recalculated as necessary based on periodic measurements of neuro-physiological or other indicative signals.

This is interpreted by the examiner as a gradient increase in emotional state value towards the target emotional state. See also [0020].

Regarding claims 13 and 14, Osborne teaches wherein the reader is non-invasive and is an electroencephalogram type of headset. Note figures 1-3 and paragraphs [0335]-[0342], reproduced above (EEG type sensors or 'caps' for brainwave activity; consolidation of sensors into a single package such as a wrist-watch or headphone style appliance).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Osborne et al. (US 20180027347) in view of Craik et al. (US 20200383598), hereinafter Craik.

Osborne teaches the claimed invention as set forth above, including classifying music with respect to levels of arousal, valence and counter-arousal:

[0013] In one implementation, tracks from a database of music are analyzed in order to predict automatically the neuro-physiological effect or impact those sounds will have on a listener. Different audio tracks and their optimal playing order can then be selected to manipulate neuro-physiological arousal, state of mind and/or affect—for example to move towards, to reach or to maintain a desired state of arousal or counter-arousal, state of mind or affect (the term 'affect' is used in the psychological sense of an emotion, mood or state).

[0126] Both vertical and linear harmonicity are powerful indices of valence (Fritz 2009), or whether a sound is "positive" or "negative", "pleasing" or "not so pleasing". Linear harmonicity may track the evolution of valence indices over time—the principle is simply the more harmonic, the more positive valence; the less harmonic, the more negative valence.

[0152] 'Turbulence' is therefore a measure of rate of change and extent of change in musical experience. These factors seem to activate core emotional systems of the brain, such as the amygdala and periaqueductal grey, which are in turn linked to autonomic and endocrine systems. At high levels of musical energy turbulence may enhance arousal; at low levels it may add to the counter-arousal effect.

However, Osborne does not specifically teach Valence, Arousal, and Dominance as three characteristics.

Craik teaches, in the same field of endeavor, apparatuses and methods for non-invasively detecting and classifying transcranial electrical signals. In an embodiment, a system for detecting and interpreting transcranial electrical signals includes: a headset including a plurality of electrodes arranged for detection of the user's transcranial electrical signals; a display configured to display information to the user while the user wears the headset; and a control unit programmed to: (i) receive data relating to the transcranial electrical signals detected by the electrodes of the headset; (ii) create a data matrix with the received data; (iii) convert the data matrix into one or more user values; (iv) define a user output state based on the one or more user values; and (v) cause alteration of an aspect of the display based on the user output state.

[0006] The present disclosure proposes apparatuses and methods for non-invasively detecting and classifying transcranial electrical signals.
It is advantageous, for example, for therapeutic and entertainment purposes, to be able to use EEG data to determine a person's cognitive states in ways besides simply viewing that person's expression and body language. This is specifically applicable to the determination of a person's emotional state, as a subjective analysis of a person's emotional state based on visual evidence may not be reliable. It is also advantageous to be able to use EEG data to control images, videos, and audio.

[0178] FIG. 20A illustrates an example method 500a illustrating how a current user may calibrate headset 12 for use in determining one or more emotional states of a user. It should be understood that some of the steps described herein may be reordered or omitted, while other steps may be added, without departing from the spirit and scope of method 500a of FIG. 20A. It should further be understood that one or more of the steps of method 500a may be controlled by the control unit of neural analysis system 10 based on instructions stored on a memory and executed by a processor.

[0179] In this example embodiment, the one or more user values are emotional values, and the one or more user output state is an emotional state. The emotional values can include, for example, one or more valence value, one or more arousal value, and one or more dominance value. For example, a first user value can be a valence value, a second user value can be an arousal value, and a third user value can be a dominance value. The emotional state can include, for example, an emotion felt by the user (e.g., joy, anger, etc.).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to include, in the device and method of Osborne, classifying the sound not just by Valence, Arousal and Counter-Arousal, as recognized by Osborne, but by the three known characteristics of emotional state, Valence, Arousal, and Dominance, as taught by Craik, in order to move towards, reach, or maintain a desired state of emotion.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Denison (US 20140316230) teaches a method comprising: receiving electroencephalography (EEG) data relating to a user; generating processed EEG data by applying a first processing algorithm to the obtained EEG data; and transferring the processed EEG data to a remote storage device together with at least a unique identity of the user.

Marchini (US 20250046294) teaches receiving a command to load a sample audio file into a memory by way of a graphical user interface. A first feature curve relating to an auditory feature of the sample audio file is provided for display, where the auditory feature is in a first style. A selection of a second style is received by way of the graphical user interface. The auditory feature is transformed from the first style to the second style based on application of a trained style transfer machine learning model to cropped sample tensors derived from the auditory feature. A second feature curve relating to the auditory feature is displayed in the second style.

Flickinger (US 11839473) teaches systems and methods for estimating emotional states, moods, and affects of an individual and providing feedback to the individual or others. Systems and methods that provide real-time detection and monitoring of physical aspects of an individual and/or aspects of the individual's activity, and means of estimating that person's emotional state or affect and changes to those, are also disclosed. Real-time feedback about the person's emotional state, change, or potential change is provided to the user, helping the user cope, adjust, or appropriately act on their emotions.

Vartakavi et al. (US 20180024810) teaches a machine configured to identify a media file that, when played to a user, is likely to modify an emotional or physical state of the user to or towards a target emotional or physical state. The machine accesses play counts that quantify playbacks of media files for the user. The playbacks may be locally performed or detected by the machine from ambient sound. The machine accesses arousal scores of the media files and determines a distribution of the play counts over the arousal scores. The machine uses one or more relative maxima in the distribution in selecting a target arousal score for the user based on contextual data that describes an activity of the user. The machine selects one or more media files based on the target arousal score. The machine may then cause the selected media file to be played to the user.

Rowe et al. (US 20120233164) teaches collections of digital music and sound that effectively elicit particular emotional responses as a function of analytical features from the audio signal and information concerning the background and preferences of the subject. The invention can change emotional classifications along with variations in the audio signal over time. Interacting with a listener, the invention locates music with desired emotional characteristics from a central repository, assembles these into an effective and engaging "playlist" (sequence of songs), and plays the music files in the calculated order to the listener.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN L CASLER, whose telephone number is (571) 272-4956. The examiner can normally be reached M-Th 6:30 to 4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Marmor, can be reached at (571) 272-4730. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRIAN L CASLER/
Primary Examiner, Art Unit 3791

Prosecution Timeline

Jul 25, 2023: Application Filed
Mar 23, 2026: Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599536, ADULT TOY: granted Apr 14, 2026 (2y 5m to grant)
Patent 12589200, FLUIDIC SIGNAL CONTROL DEVICE: granted Mar 31, 2026 (2y 5m to grant)
Patent 12575815, GUIDANCE AND TRACKING SYSTEM FOR TEMPLATED AND TARGETED BIOPSY AND TREATMENT: granted Mar 17, 2026 (2y 5m to grant)
Patent 12502133, SENSING SYSTEM INCLUDING LAYERED MICROPROBE: granted Dec 23, 2025 (2y 5m to grant)
Patent 12415084, DEVICE FOR CARDIOLOGIC MAGNETIC AND OPTICAL STIMULATION: granted Sep 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 95% (+22.9%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
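Per the note above, the projection follows directly from the career numbers. A minimal sketch, assuming the dashboard simply adds the interview lift to the baseline (an additive model inferred from the displayed values, not a documented formula):

    # Projection arithmetic implied by the dashboard: baseline allow rate
    # plus interview lift, capped at 100%. The additive model is an assumption.
    base = 100 * 21 / 29                     # career allow rate, 72.4% -> 72%
    lift = 22.9                              # interview lift, percentage points
    with_interview = min(base + lift, 100.0)  # 95.3% -> 95%
    print(f"grant probability: {base:.0f}%, with interview: {with_interview:.0f}%")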
