Prosecution Insights
Last updated: April 17, 2026
Application No. 17/727,733

METHOD AND SYSTEM FOR TRANSLATION OF BRAIN SIGNALS INTO ORDERED MUSIC

Final Rejection — §102, §103
Filed
Apr 23, 2022
Examiner
BRINEY III, WALTER F
Art Unit
2692
Tech Center
2600 — Communications
Assignee
unknown
OA Round
4 (Final)
65%
Grant Probability
Favorable
5-6
OA Rounds
3y 0m
To Grant
69%
With Interview

Examiner Intelligence

Grants 65% — above average
65%
Career Allow Rate
352 granted / 540 resolved
+3.2% vs TC avg
+3.8%
Interview Lift
resolved cases with interview (minimal lift)
Typical timeline
3y 0m
Avg Prosecution
58 currently pending
Career history
598
Total Applications
across all art units

Statute-Specific Performance

§101
1.7%
-38.3% vs TC avg
§103
63.2%
+23.2% vs TC avg
§102
13.5%
-26.5% vs TC avg
§112
9.4%
-30.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 540 resolved cases

Office Action

§102 §103
Detailed Action

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. See 35 U.S.C. § 100 (note).

Art Rejections

Anticipation

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 15–17 and 19 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by US Patent Application Publication 2020/0365125 (published 19 November 2020) (“Senn”).

Claim 15 is drawn to a system. The following table (Table 1) illustrates the correspondence between the claimed system and the Senn reference.

“15. A system comprising: a CPU, a computer readable memory and a computer non-transitory readable storage medium associated with a computing device;”

The Senn reference similarly describes a computer system having a computing device that executes instructions stored in a storage medium to generate musical compositions from brainwaves extracted from EEG signals. Senn at Abs., ¶¶ 3, 5, 74–78, FIG.1.

“receiving data from an electroencephalogram device worn by a user;”

Senn describes receiving bio/neuromatic (B/N) data from an EEG headset worn by a user. Id. at ¶ 50, FIG.1. In that case, the B/N data includes EEG data, or signals related to brainwaves, like signals in the Alpha, Beta, Gamma, Theta, Delta ranges. Id. at ¶¶ 108–110, Table 1.

“decoding the data received by the electroencephalogram device into individual brainwaves;”

Senn describes a creative circuit module 108 that separates the EEG signals into a plurality of different brainwaves (e.g., Alpha, Beta, etc.) based on frequency selection, or filtering. Id.

“associating a first set of brainwaves to a sound and a second set of brainwaves to an effect, wherein the sound and the effect are dynamic based on the received individual brainwaves;”

Senn’s creative circuit module 108 converts each brainwave into a corresponding musical sound in the MIDI format. Id. at ¶¶ 71, 97–103, 156–162, FIGs.4B, 11. For example, different brainwaves may be associated with a different musical instrument. Id. at ¶ 109. Each brainwave is also associated with an effect, such as a pitch selector and velocity selector. Id. at ¶¶ 156–162, FIG.11. The resulting sounds and effects are dynamic as they vary with the brainwaves. Id.

“performing a translation of each of the brainwaves into data;”

The rhythm, pitch, melody and harmony produced are then chosen based on the brainwaves. Id. at ¶¶ 97–103, 156–162. For example, the brainwaves are isolated by frequency and then manipulated by mapping them to a human audible scale. Id.

“associating at least one of the second set of brainwaves associated with an effect to at least one of the first set of brainwaves associated with a sound; applying the at least one effect at least one associated sound”

Senn describes controlling the velocity of a sound associated with one brainwave by forming a ratio between the power of two brainwaves to control the velocity of a note associated with one of the brainwaves. Id. at ¶ 161. This results in a modified note, or sound, whose velocity is modified. Id.

“formatting the modified sound so that it can be received by a mixing device, wherein a manipulated sounds are created; and”

Senn describes representing brainwaves as independent MIDI tracks corresponding to different tones or instruments. Id. at ¶ 72. For example, a brainwave from each of multiple users is associated with a particular instrument. And because each track, including a manipulated note/sound, is formatted in MIDI, it is formatted for receipt by a MIDI mixer capable of modifying and combining each MIDI track into a MIDI composition. See id. at ¶¶ 50, 130, 200–201, 212–213, FIGs.8, 9. Senn further describes independently controlling the volume, or velocity, of each MIDI track instrument representing each brainwave. Id. at ¶¶ 146, 156, 159, 161.

“generating a musical composition and a visual representation of the manipulated sounds.”

Senn’s creative circuit module 108 collects all the MIDI pitches into an output array that is sent to an audio engine (e.g., 118, 518) that reproduces the audio data in the array to produce a musical composition. Id. at ¶¶ 71, 72, 105, 116, 161, FIGs.1, 5. The engine sends the composition to an audio output device (e.g., 114, 514) that outputs the composition as sound to a user (e.g., 501). Id. at ¶¶ 113–116, FIG.5. Senn also generates visual data by mapping EEG brain waves to controllable visual parameters. Id. at ¶¶ 84, 91, 164–166. In particular, Senn modifies geometric shape, color and angle based on an output array. Id.

For the foregoing reasons, the Senn reference anticipates all limitations of the claim.

Claim 16 depends on claim 15 and further requires the following: “wherein the effect adjusts the sound to be within a note threshold value.” Similarly, Senn manipulates the extracted brainwave signals to fall within a predetermined pitch. Senn at ¶¶ 156–160, FIG.11. For the foregoing reasons, the Senn reference anticipates all limitations of the claim.

Claim 17 depends on claim 16 and further requires the following: “wherein the effect is related to the pitch and velocity of the sound.” Senn’s brainwaves determine both pitch and velocity of sound. Senn at ¶¶ 156–162, FIG.11. For the foregoing reasons, the Senn reference anticipates all limitations of the claim.
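As background for the velocity mapping the Office action attributes to Senn ¶ 161 (a ratio between the power of two brainwaves controlling the velocity of a note), the technique can be sketched as follows. This is an illustrative sketch only, not code from Senn or from the application; the normalization and scaling choices are assumptions.

```python
# Illustrative sketch (not from any cited reference): deriving a MIDI
# velocity from the ratio of two brainwave band powers, as described
# for Senn ¶ 161. Normalization and scaling are assumptions.

def band_power_ratio_velocity(power_a: float, power_b: float) -> int:
    """Map the ratio of two band powers onto the MIDI velocity range 0-127."""
    if power_b <= 0:
        return 0
    ratio = power_a / power_b
    # Squash the unbounded ratio into [0, 1); ratio/(1+ratio) is one simple choice.
    normalized = ratio / (1.0 + ratio)
    return round(normalized * 127)

# Example: strong alpha power relative to beta yields a loud note.
velocity = band_power_ratio_velocity(power_a=12.0, power_b=3.0)  # ratio 4.0 -> 102
```

Because the band powers change continuously with brain activity, the resulting velocity changes over time, which is the "dynamic effect" behavior the claims recite.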
Claim 19 depends on claim 16 and further requires the following: “further comprising, manipulating the sounds from a monophonic to a polyphonic.” Senn describes generating music in various manners, such as a single person playing a melody with a solo instrument, or multiple people playing in harmony. Senn at ¶¶ 45, 128, 148, 218. For the foregoing reasons, the Senn reference anticipates all limitations of the claim.

Obviousness

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–3, 5–14, 18, 20 and 21 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Senn; US Patent Application Publication 2004/0077934 (published 22 April 2004) (“Massad”); and MusicRadar, How to Use Basic ADSR Filter Envelope Parameters (https://www.musicradar.com/tuition/tech/how-to-use-basic-adsr-filter-envelope-parameters-578874) (last accessed 07 March 2024) (published 21 June 2013) (“MusicRadar”) [1].

Claim 1 is drawn to a computer-implemented method. The following table (Table 2) illustrates the correspondence between the claimed method and the Senn reference.

“1. A computer-implemented method comprising:”

The Senn reference similarly describes a computer-implemented method to generate musical compositions from brainwaves extracted from EEG signals. Senn at Abs., ¶¶ 3, 5.
“receiving, by one or more processors, data from at least one device worn by a user, wherein data is collected related to at least one brainwave;”

Senn describes receiving bio/neuromatic (B/N) data from an EEG headset worn by a user. Id. at ¶ 50, FIG.1. In that case, the B/N data includes EEG data, or signals related to brainwaves, like signals in the Alpha, Beta, Gamma, Theta, Delta ranges. Id. at ¶¶ 108–110, Table 1.

“separating, by the one or more processors, the collected data into individual data streams related to the one or more brainwaves;”

Senn describes separating the EEG signal into a plurality of different brainwaves (e.g., Alpha, Beta, etc.) based on frequency selection, or filtering. Id.

“associating, by the one or more processors, a first set of the or instrument and a second set of the individual data streams with a dynamic effect, wherein the effect is able to modify”

Senn’s computing device 106 converts each brainwave into a corresponding musical sound. Id. at ¶¶ 97–103, 156–162, FIGs.4B, 11. The rhythm, pitch, melody and harmony produced are then chosen based on the brainwaves. Id. For example, the brainwaves are isolated by frequency and then manipulated by mapping them to a human audible scale. Id. Additionally, other brainwaves are mapped to effects, such as velocity determination. Id. For example, one set of brainwaves including at least one of alpha, beta, gamma, delta waves is mapped to a particular instrument. Id. at ¶ 109. Another set of waves is mapped to velocity effects, such that a ratio of the second set of waves may determine a velocity effect that is applied to an instrument and that changes over time as the ratio changes. Id. at ¶ 161.

“performing, by the one or more processors, at least one dynamic effect from the second set of the individual data streams to each of the individual sounds or instruments”

In the case of pitch determination, the mapped, or manipulated, frequency is further manipulated to match a tonal center stored within a pitch vocabulary datastore (e.g., a datastore determined by analyzing other music in the environment). Id. Additionally, each note is adjusted in velocity, for example, based on a ratio between waves in a second set of waves.

“applying, by the one or more processors, at least one filter to each of the sounds or instruments and establishing a note envelope;”

The resulting pitch is then filtered by a velocity calculation. The process is repeated for every piece of B/N data to produce an output array. Id. However, Senn does not describe filtering sounds to establish a note envelope.

“creating, by the one or more processors, an independently controllable representation of each of the at least one brainwaves, wherein the independently controllable representation of each of the at least one brainwaves are formatted to be received by a mixer, and”

Senn describes representing brainwaves as independent MIDI tracks corresponding to different tones or instruments. Id. at ¶ 72. For example, a brainwave from each of multiple users is associated with a particular instrument. And because each track is formatted in MIDI, it is formatted for receipt by a MIDI mixer capable of modifying and combining each MIDI track into a MIDI composition. See id. at ¶¶ 50, 130, 200–201, 212–213, FIGs.8, 9.

“wherein the mixer is able to independently adjust each of the independently controllable representation of each of the at least one brainwaves; and”

Senn describes combining, or mixing, MIDI tracks containing MIDI instrument representations of brainwaves. Id. Senn further describes independently controlling the volume, or velocity, of each MIDI track instrument representing each brainwave. Id. at ¶¶ 146, 156, 159, 161.

“generating, by the one or more processors, each of the sounds, wherein a musical composition is formed.”

Senn sends the output array to an audio engine that reproduces the audio data in the array to produce a musical composition. Id. at ¶¶ 105, 161.

The table above shows that the Senn reference describes a method that corresponds closely to the claimed method. Senn does not anticipate the claimed method, however, because the Senn reference does not describe filtering sounds to establish a note envelope. The differences between the claimed invention and the Senn reference are such that the invention as a whole would have been obvious to one of ordinary skill in the art at the time this Application was effectively filed.

The Senn reference describes generating MIDI notes based on EEG signals. Senn describes adjusting timbre; however, it does not describe applying any type of envelope filtering, such as attack, decay, sustain or release. Senn at ¶ 148. The Massad reference describes another system for generating MIDI music based on biometric signals. Massad at Abs., ¶¶ 106–111, 381, 395. Massad teaches and suggests that due to the use of MIDI, the instruments may be filtered by adjusting their envelope settings, including attack, decay and echo. Id. at ¶ 395. The prior art further teaches other MIDI envelope parameters, including the claimed sustain and release settings. MusicRadar article.

Accordingly, it would have been obvious for one of ordinary skill in the art at the time of filing to have modified Senn’s system to include known MIDI envelope filters that adjust attack, decay, sustain and release of MIDI notes. One of ordinary skill in the art would have reasonably recognized that adding known MIDI envelope filters would allow for increased musical expression when converting EEG signals into MIDI musical notes.
For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 2 depends on claim 1 and further requires the following: “wherein the at least one manipulation of the individual data streams, further comprising, adjusting, by the one or more processors, a note within a predetermined pitch.” Similarly, Senn manipulates the extracted brainwave signals to fall within a predetermined pitch. Senn at ¶¶ 156–160, FIG.11. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 3 depends on claim 1 and further requires the following: “wherein the at least one manipulation of the individual data streams, further comprising, adjusting, by the one or more processors, a note within a predetermined velocity.” Senn describes scaling, or manipulating, the brainwave power (or a ratio of two powers) to determine a velocity value within a predefined MIDI velocity range of 0 to 127. Senn at ¶¶ 9, 72, 161, 162. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 5 depends on claim 4 and further requires the following: “further comprising, adjusting, by the one or more processors, an attack, a decay, a sustain, and a release of the note.”

Claim 6 depends on claim 1 and further requires the following: “further comprising, manipulating, by the one or more processors, of the musical composition is in a format, that, so that the musical composition can be received by a mixer.” Similarly, Senn describes storing a musical composition from a single user in a MIDI format so that it can be merged, or mixed, with a MIDI musical composition from another user. Senn at ¶¶ 45, 50, 128, 163, 218. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.
Claim 7 depends on claim 1 and further requires the following: “further comprising, switching, by the one or more processors, the note from monophonic to polyphonic.” Senn describes generating music in various manners, such as a single person playing a melody with a solo instrument, or multiple people playing in harmony. Senn at ¶¶ 45, 128, 148, 218. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 20 depends on claim 1 and further requires the following: “wherein the at least one manipulation of the individual data streams, further comprising, adjusting, by the one or more processors, a note within a predetermined range.” Similarly, Senn adjusts the frequency of a signal to be within a human audible range and to be diatonic within a musical key. Senn at ¶¶ 158–160, FIG.11. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 21 depends on claim 1 and further requires the following: “wherein the device collects physiological signals from the wearer's body.” Senn collects EEG signals from a user’s body with an EEG sensor. Senn at ¶ 50, FIG.1. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 8 is drawn to a computer program product. The following table (Table 3) illustrates the correspondence between the claimed product and the Senn reference.

“8. A computer program product comprising: a computer non-transitory readable storage medium having program instructions embodied therewith, the program instructions executable by a computing device to cause the computing device to:”

The Senn reference similarly describes a computer system having a computing device that executes instructions stored in a storage medium to generate musical compositions from brainwaves extracted from EEG signals. Senn at Abs., ¶¶ 3, 5, 74–78, FIG.1.

“receiving data from an device worn by a user;”

Senn describes receiving bio/neuromatic (B/N) data from an EEG headset worn by a user. Id. at ¶ 50, FIG.1. In that case, the B/N data includes EEG data, or signals related to brainwaves, like signals in the Alpha, Beta, Gamma, Theta, Delta ranges. Id. at ¶¶ 108–110, Table 1.

“decoding the data received by the electroencephalogram device into dynamic individual physiological signals;”

Senn describes a creative circuit module 108 that separates the EEG signals into a plurality of different brainwaves (e.g., Alpha, Beta, etc.) based on frequency selection, or filtering. Id.

“associating each of the dynamic individual physiological signals to a sound or to an effect of a sound;”

Senn’s creative circuit module 108 converts each brainwave into a corresponding musical sound in the MIDI format. Id. at ¶¶ 71, 97–103, 156–162, FIGs.4B, 11. For example, different brainwaves may be associated with a different musical instrument. Id. at ¶ 109. Brainwaves are also associated with velocity effects. Id. at ¶¶ 161, 162.

“performing a translation to each of the dynamic individual physiological signals collected data, wherein each dynamic individual physiological signals that is associated with the sound is associated with an instrument and the dynamic individual physiological signals that are associated with an effect of the sound are applied to at least one of the dynamic individual physiological signals that is associated with the sound;”

Senn describes translating each EEG signal, which is an inherently dynamic signal that changes with a user’s brain activity, to an instrument. Id. at ¶ 109. The rhythm, pitch, melody and harmony produced are then chosen based on the brainwaves. Id. at ¶¶ 97–103, 156–162. For example, the brainwaves are isolated by frequency and then manipulated by mapping them to a human audible scale. Id. Senn also describes using EEG signals to determine velocity. Id. at ¶¶ 161, 162.

“applying a plurality of manipulations to the translated data for each of the individual physiological signals;”

In the case of pitch determination, the mapped, or manipulated, frequency is further manipulated to match a tonal center stored within a pitch vocabulary datastore (e.g., a datastore determined by analyzing other music in the environment). Id. at ¶¶ 97–103, 156–162. The pitches are further manipulated to determine velocity. Id.

“applying at least one filter to each of the dynamic physiological signals and establishing a note envelope;”

The resulting pitch is then filtered by a velocity calculation. The process is repeated for every piece of B/N data to produce an output array. Id. However, Senn does not describe filtering sounds to establish a note envelope.

“generating a series of sounds from the manipulated translated data from the dynamic individual physiological signals associated with a sound, and wherein the dynamic individual physiological signals which are associated with an effect of a sound are applied to the corresponding manipulated translated data from the dynamic individual physiological signals associated with a sound;”

Senn’s creative circuit module 108 collects all the MIDI pitches into an output array that is sent to an audio engine (e.g., 118, 518) that reproduces the audio data in the array to produce a musical composition. Id. at ¶¶ 71, 72, 105, 116, 161, FIGs.1, 5. The engine sends the composition to an audio output device (e.g., 114, 514) that outputs the composition as sound to a user (e.g., 501). Id. at ¶¶ 113–116, FIG.5.

“creating a visual representation of the dynamic individual physiological signals, wherein the fluctuations of the dynamic individual physiological signals are represented;”

Similarly, Senn generates visual data by mapping EEG brain waves to controllable visual parameters. Id. at ¶¶ 84, 91, 164–166. In particular, Senn modifies geometric shape, color and angle based on an output array. Id.

“formatting the series of sounds and supplying the formatted series of sounds to a mixing device, wherein the mixing device provides for additional adjustments to each of the series of sounds independently; and”

Senn describes representing brainwaves as independent MIDI tracks corresponding to different tones or instruments. Id. at ¶ 72. For example, a brainwave from each of multiple users is associated with a particular instrument. And because each track is formatted in MIDI, it is formatted for receipt by a MIDI mixer capable of modifying and combining each MIDI track into a MIDI composition. See id. at ¶¶ 50, 130, 200–201, 212–213, FIGs.8, 9. Senn further describes independently controlling the volume, or velocity, of each MIDI track instrument representing each brainwave. Id. at ¶¶ 146, 156, 159, 161.

“outputting, a musical composition from the series of sounds.”

Senn’s creative circuit module 108 collects all the MIDI pitches into an output array that is sent to an audio engine (e.g., 118, 518) that reproduces the audio data in the array to produce a musical composition. Id. at ¶¶ 71, 72, 105, 116, 161, FIGs.1, 5. The engine sends the composition to an audio output device (e.g., 114, 514) that outputs the composition as sound to a user (e.g., 501). Id. at ¶¶ 113–116, FIG.5.

The table above shows that the Senn reference describes a method that corresponds closely to the claimed method. Senn does not anticipate the claimed method, however, because the Senn reference does not describe filtering sounds to establish a note envelope. The differences between the claimed invention and the Senn reference are such that the invention as a whole would have been obvious to one of ordinary skill in the art at the time this Application was effectively filed.

The Senn reference describes generating MIDI notes based on EEG signals. Senn describes adjusting timbre; however, it does not describe applying any type of envelope filtering, such as attack, decay, sustain or release. Senn at ¶ 148. The Massad reference describes another system for generating MIDI music based on biometric signals. Massad at Abs., ¶¶ 106–111, 381, 395. Massad teaches and suggests that due to the use of MIDI, the instruments may be filtered by adjusting their envelope settings, including attack, decay and echo. Id. at ¶ 395. The prior art further teaches other MIDI envelope parameters, including the claimed sustain and release settings. MusicRadar article.
Accordingly, it would have been obvious for one of ordinary skill in the art at the time of filing to have modified Senn’s system to include known MIDI envelope filters that adjust attack, decay, sustain and release of MIDI notes. One of ordinary skill in the art would have reasonably recognized that adding known MIDI envelope filters would allow for increased musical expression when converting EEG signals into MIDI musical notes. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 9 depends on claim 8 and further requires the following: “wherein the effects are related to a note threshold value of the sound.” Senn describes associating the brainwaves to a sound, such as a particular instrument, and to an effect, including velocity determination. Senn at ¶¶ 109, 161, 162. A targeting module 1440 then determines if the resulting MIDI note sample is above a threshold that controls whether it is output. Id. at ¶ 181, FIG.14. For example, Senn also describes thresholding brainwaves for determining whether to send new stimuli, or to retarget a note to a different MIDI channel. Id. at ¶ 185. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 10 depends on claim 8 and further requires the following: “wherein the effects are related to a pitch value of the sound.” Senn describes associating the brainwaves to a sound, such as a particular instrument, and to an effect, including pitch determination. Senn at ¶¶ 109, 156–160. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 11 depends on claim 8 and further requires the following: “wherein the effects are related to a velocity value of the sound.” Senn describes associating the brainwaves to a sound, such as a particular instrument, and to an effect, including velocity determination. Senn at ¶¶ 109, 161, 162. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 12 depends on claim 8 and further requires the following: “further comprising, adjusting an attack, a decay, a sustain, and a release of the note.”

Claim 13 depends on claim 8 and further requires the following: “further comprising, playing a predetermined sound for the user to invoke certain responses for predetermined brainwaves.”

Claims 12 and 13 are treated together. Senn describes playing sounds in a feedback loop in order to invoke desired brainwave patterns. Senn at ¶¶ 113–116, 148, 194, FIG.5. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claims.

Claim 14 depends on claim 8 and further requires the following: “further comprising, generating a visual representation of the brainwaves.” Senn describes generating visual representations of the brainwaves similarly to the way the brainwaves are rendered musically. Senn at ¶¶ 104, 163–167, FIGs.4, 12. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claim.

Claim 18 depends on claim 16 and further requires the following: “further comprising manipulating an envelope of the sound.” Both claims 12 and 18 recite manipulating an envelope of a sound/note, such as adjusting an attack, a decay, a sustain and a release. Senn similarly generates MIDI notes based on EEG signals. Senn describes adjusting timbre; however, it does not describe applying any type of envelope filtering, such as attack, decay, sustain or release. Senn at ¶ 148. The Massad reference describes another system for generating MIDI music based on biometric signals. Massad at Abs., ¶¶ 106–111, 381, 395. Massad teaches and suggests that due to the use of MIDI, the instruments may be filtered by adjusting their envelope settings, including attack, decay and echo. Id. at ¶ 395. The prior art further teaches other MIDI envelope parameters, including the claimed sustain and release settings. MusicRadar article. Accordingly, it would have been obvious for one of ordinary skill in the art at the time of filing to have modified Senn’s system to include MIDI envelope filters that adjust attack, decay, sustain and release of MIDI notes. For the foregoing reasons, the combination of the Senn, the Massad and the MusicRadar references makes obvious all limitations of the claims.

Summary

Claims 1–3 and 5–21 are rejected under at least one of 35 U.S.C. §§ 102 and 103 as being unpatentable over the cited prior art.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C.
§ 102(a)(2) prior art against the later invention.

Response to Applicant’s Arguments

Applicant’s Reply (30 April 2025) has substantively amended all the claims. This Office action has been updated accordingly.

Applicant (Reply at 7–9) comments that the cited references do not describe, teach or suggest applying dynamic effects controlled by a brainwave. Applicant focuses entirely on the MusicRadar and the Massad references without addressing the base Senn reference. Regardless of whether MusicRadar and Massad describe the claimed concept at issue, the rejection relies entirely on Senn for teaching the claimed concept, particularly the teachings at ¶ 161, FIG.11, where Senn describes the use of a ratio between two brainwaves to control the velocity of a note selected in accord with the frequency of one brainwave.

Applicant’s additional comments (Reply at 8–9) are general in nature, comparing the prior art to the “present invention” without mooring these comparisons to the actual claim language. Without a particular showing as to how the references, as applied, fail to describe, teach or suggest all limitations of the claims, Applicant has not demonstrated any persuasive reason to withdraw the rejections. For the foregoing reasons, Applicant has not persuasively established any error in the Office action. All the rejections will be maintained.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WALTER F BRINEY III, whose telephone number is (571) 272-7513. The examiner can normally be reached M-F, 8 am-4:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Walter F Briney III/
Walter F Briney III
Primary Examiner
Art Unit 2692
2/6/2026

[1] Pages 6–17 are excluded because they only contain ads.
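The ADSR envelope concept at the heart of the §103 rejection (the one limitation the examiner concedes Senn lacks) can be illustrated with a short sketch. This is a generic illustration of attack/decay/sustain/release amplitude shaping, not code from any cited reference, and all parameter values are assumptions.

```python
# Generic ADSR (attack, decay, sustain, release) amplitude envelope,
# illustrating the envelope parameters discussed in the obviousness
# rejection. Not taken from any cited reference; values are assumptions.

def adsr_gain(t: float, attack: float, decay: float, sustain: float,
              release: float, note_off: float) -> float:
    """Return the envelope gain (0.0-1.0) at time t seconds after note-on."""
    if t < 0:
        return 0.0
    if t < attack:                      # ramp up to full level
        return t / attack
    if t < attack + decay:              # fall from full level to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off:                    # hold the sustain level while the note is on
        return sustain
    if t < note_off + release:          # fade out after note-off
        frac = (t - note_off) / release
        return sustain * (1.0 - frac)
    return 0.0

# Example: 10 ms attack, 100 ms decay, 70% sustain, 200 ms release,
# note released at t = 1.0 s; midway through the note we are at the sustain level.
g = adsr_gain(0.5, attack=0.01, decay=0.1, sustain=0.7, release=0.2, note_off=1.0)
```

Applying such a gain curve to each generated MIDI note is what "establishing a note envelope" amounts to in the claim charts above; the sketch assumes note_off occurs after the decay phase completes.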

Prosecution Timeline

Apr 23, 2022
Application Filed
Mar 08, 2024
Non-Final Rejection — §102, §103
Sep 06, 2024
Response Filed
Oct 29, 2024
Final Rejection — §102, §103
Apr 30, 2025
Request for Continued Examination
May 05, 2025
Response after Non-Final Action
Jul 09, 2025
Non-Final Rejection — §102, §103
Jan 12, 2026
Response Filed
Feb 06, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598444
Apparatus and Method for Rendering a Sound Scene Using Pipeline Stages
2y 5m to grant • Granted Apr 07, 2026
Patent 12598442
AUTOMATIC LOUDSPEAKER DIRECTIVITY ADAPTATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12598412
Sound Signal Processing Method and Headset Device
2y 5m to grant • Granted Apr 07, 2026
Patent 12587791
SOUND-GENERATING DEVICE
2y 5m to grant • Granted Mar 24, 2026
Patent 12581245
LOUDSPEAKER
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
65%
Grant Probability
69%
With Interview (+3.8%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
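The headline figures above can be reproduced from the examiner's career counts. The sketch below shows the arithmetic; treating the interview lift as a simple additive adjustment to the base rate is an assumption about this tool's methodology, not a documented formula.

```python
# Sketch of how the dashboard's headline figures appear to be derived
# from the examiner's career counts (352 granted of 540 resolved).
# The additive interview lift is an assumed methodology.

granted, resolved = 352, 540
base_rate = granted / resolved                  # career allow rate ~0.652
interview_lift = 0.038                          # +3.8 pp observed with interviews

grant_probability = round(base_rate * 100)      # displayed as 65%
with_interview = round((base_rate + interview_lift) * 100)  # displayed as 69%
```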
