Prosecution Insights
Last updated: April 19, 2026
Application No. 18/756,883

CUSTOMIZED AUDIO MODIFICATION TOOL

Non-Final OA — §102, §103
Filed
Jun 27, 2024
Examiner
ANWAH, OLISA
Art Unit
2692
Tech Center
2600 — Communications
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 89% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 89% — above average (1036 granted / 1162 resolved; +27.2% vs TC avg)
Interview Lift: +4.2% — minimal (based on resolved cases with interview)
Avg Prosecution: 2y 1m — fast prosecutor (38 currently pending)
Total Applications: 1200 — career history across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 29.1% (-10.9% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1162 resolved cases

Office Action

Rejections: §102, §103
DETAILED ACTION Information Disclosure Statement 1. The information disclosure statement submitted on 07/17/2025 is being considered by the examiner. Claim Rejections - 35 USC § 102 2. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. 3. Claims 1-4, 6, 7, 9-13 and 17-19 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by Selig et al, U.S. Patent Application Publication No. 2014/0309549 (hereinafter Selig). Regarding claim 1, Selig discloses a method (from paragraph 0097, see The systems and methods), comprising: determining a customized configuration of settings for an audio processor based on results of a hearing test, where the results indicate hearing loss experienced by a user (from paragraph 0011, see The first method S100 functions to test a user's hearing with various tone- or music-based audio signals, wherein each audio signal is a unique tone or set of tones and is associated with a (distinct) hearing frequency band, in order to generate a baseline hearing profile of the user based on the results of the hearing test); adjusting the audio processor settings based on the customized configuration (from paragraph 0011, see This baseline hearing test implemented by the first method S100, can thus enable front-end calibration and/or generation of a user's hearing profile, which can be implemented in various scenarios to augment the user's hearing); receiving an audio content (from paragraph 0011, see audio signals); adjusting playback of the audio content based on the 
customized configuration of audio processor settings (from paragraph 0011, see the hearing profile can be implemented by an application executing on a mobile computing device (e.g., a smartphone) to output augmented audio signals tailored to the user's hearing needs); and providing the adjusted audio content to a sound interface for output by a speaker (from paragraph 0047, see outputs the adjusted local sounds through the headset or headphones worn by the user to augment the user's hearing experience). Regarding claim 2, Selig discloses the method of claim 1, wherein prior to receiving the results of the hearing test: providing the hearing test (from paragraph 0011, see The first method S100 functions to test a user's hearing with various tone- or music-based audio signals); receiving user responses in association with the hearing test (from paragraph 0011, see records a user's responses to those sound signals); and determining the results based on the user responses (from paragraph 0011, see processes the user's responses in light of the musical sound signals to create a map of the user's hearing ability and/or to identify the user's hearing needs). Regarding claim 3, Selig discloses the method of claim 2, wherein the results represent a hearing threshold level of each or a combination of the user's ears for audio output across various frequency bands and at different intensity levels (from paragraph 0083, see Once a map of the data points of the user's hearing ability is generated, Block S230 can implement non-parametric methods or cooperate with Block S240 to implement parametric methods to assign a hearing model to the user. In one implementation, Block S230 accesses a set of previous hearing tests and compares the data points map of the user's hearing ability to a the previous hearing tests in the set. 
For example, each previous hearing test in the set can include an audiogram of a patient, each audiogram defining a hearing ability of the corresponding patient in the form of sound intensity (e.g., measured in decibels) versus frequency (i.e., in Hertz) for across frequencies in the audible range, in a sub-range of the audible range including fundamental frequencies of speech, and/or in a sub-range of the audible range including fundamental frequencies of music, etc. In particular, Block S230 can access from a database or generate automatically a fingerprint for each audiograms in the set, wherein a fingerprint of an audiogram specifies an absolute or relative sound intensity for each frequency tested in Blocks S210, S212, S213, etc. For example, Block S230 can generate (or access) a fingerprint for an audiogram that defines a baseline sound intensity as the sound intensity in the audiogram at a frequency corresponding to the frequency of the baseline volume adjustment setting in the user's hearing ability map. In this example, the fingerprint of the audiogram can further specify sound intensities relative to the baseline sound intensity at the remaining frequencies tested in Blocks S210, S212, S213, etc). 
Regarding claim 4, Selig discloses the method of claim 1, wherein: the results indicate hearing loss experienced by the user at a first frequency band; determining the customized configuration comprises determining a first adjusted level of intensity of the first frequency band that compensates for the hearing loss experienced by the user; and adjusting the audio processor settings comprises setting an intensity level of the first frequency band to the first adjusted level of intensity (from paragraph 0084, see Block S230 can thus compare the user's hearing ability map to audiogram fingerprints to substantially match absolute or relative volume adjustments by the user to absolute or relative sound intensities in a single audiogram (or a set of audiograms relevant to the user and averaged or otherwise combined according to one or more trends into a single composite audiogram) at the frequencies tested in Blocks S210, S212, S213, etc., as shown in FIG. 6. In particular, a volume adjustment set by the user and normalized for the baseline volume adjustment setting can define a minimum audible threshold volume at a corresponding frequency for the user, and a normalized sound intensity defined in the audiogram can similarly define a define minimum audible threshold volume at a corresponding frequency for the patient. Block S230 can therefore select a particular audiogram--from the set of audiograms--that defines minimum audible threshold volumes of a previous patient that best match minimum audible threshold volumes for the user tested at select frequencies in Blocks S210, S212, S213, etc. Block S230 can then pass this particular audiogram, including the sound intensities relative to frequencies for the corresponding patient, to Block S240 for implementation in generating the hearing profile for the user). 
Regarding claim 6, Selig discloses the method of claim 4, wherein: the results indicate additional hearing loss experienced by the user at a second frequency band; determining the customized configuration comprises determining a second adjusted level of intensity of the second frequency band that compensates for the additional hearing loss experienced by the user; and adjusting the audio processor settings comprises setting the intensity level of the second frequency band to the second adjusted level of intensity (from paragraph 0075, see Block S220 of the second method S200 recites recording a first volume adjustment for the first audible tone (or for the first set of audible tones) by the user. Block S222 of the second method S200 similarly recites recording a second volume adjustment for the second audible tone (or for the second set of audible tones) by the user. As shown in FIG. 5, one variation of the second method S200 can also include Block S223, which similarly recites recording a third volume adjustment for the third audible tone by the user. Generally, Block S220, S222, S223, etc. function to record volume adjustments made by the user during playback of the first audible tone, the second audible tone, the third audible tone, respectively, etc., which can be correlated the use's ability to hear (i.e., audibly discern) corresponding frequencies in the audible range, as shown in FIG. 6). 
Regarding claim 7, Selig discloses the method of claim 1, wherein: determining the customized configuration comprises determining: a maximum intensity threshold of each of a plurality of frequency bands; and a minimum intensity threshold of each of the plurality of frequency bands; and adjusting the audio processor settings comprises, for each of the plurality of frequency bands, compressing an amplitude based on the maximum intensity threshold and the minimum intensity threshold of the frequency band (from paragraph 0076, see Block S220 of the second method S200 recites recording a first volume adjustment for the first audible tone (or for the first set of audible tones) by the user. Block S222 of the second method S200 similarly recites recording a second volume adjustment for the second audible tone (or for the second set of audible tones) by the user. As shown in FIG. 5, one variation of the second method S200 can also include Block S223, which similarly recites recording a third volume adjustment for the third audible tone by the user. Generally, Block S220, S222, S223, etc. function to record volume adjustments made by the user during playback of the first audible tone, the second audible tone, the third audible tone, respectively, etc., which can be correlated the use's ability to hear (i.e., audibly discern) corresponding frequencies in the audible range, as shown in FIG. 6). Regarding claim 9, Selig discloses the method of claim 1, further comprising: providing an option for uploading the results of the hearing test; and in response to a selection of the option, receiving the results (from paragraph 0070, see As shown in FIG. 5, one variation of the second method S200 includes Block S284, which recites retrieving a demographic of the user from a computer network system and selecting the first set of distinct audible tones and the second set of distinct audible tones based on the demographic of the user. 
Generally, Block S284 implements methods and/or techniques of Block S184 described above to select particular tones, frequencies, and/or frequencies ranges to test in Block S210, S212, and/or S213, etc. For example, Block S284 can extract an age, gender, location, ethnicity, and/or occupation of the user from a social networking system, access hearing test results of other users or patients of sharing one or more demographic with the user, and adjust hearing test parameters according to trends in hearing tests of the other users. In this example, once demographic data is collected for the user, Block S284 can further access sets of audio tones from a database, each audio tone set associated with a different demographic (e.g., age group and/or occupation), and Block S284 can then match the user to a one or more particular audio tone sets in the database. Blocks S210, S212, and S213, etc. can then implement a selected audio tone set accordingly). Regarding claim 10, Selig discloses a system (from paragraph 0097, see The systems and methods), comprising: a processing system (from paragraph 0097, see computer-executable component can be a processor); and memory (from paragraph 0097, see The computer-readable medium can be stored on any suitable computer readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device) storing instructions that, when executed, cause the system to perform operations comprising: receiving an indication of a selection to customize settings of an audio processor (from paragraph 0013, see As shown in FIGS. 
1 and 4, Block S110 of the first method S100 recites identifying an audio output device worn or in use by a user); providing a hearing test to a user (from paragraph 0011, see The first method S100 functions to test a user's hearing); receiving user responses in association with the hearing test (from paragraph 0011, see records a user's responses to those sound signals); determining results of the hearing test based on the user responses, wherein the results indicate hearing loss experienced by the user (from paragraph 0011, see processes the user's responses in light of the musical sound signals to create a map of the user's hearing ability and/or to identify the user's hearing needs); determining a customized configuration of the settings based on the results (from paragraph 0011, see This baseline hearing test implemented by the first method S100, can thus enable front-end calibration and/or generation of a user's hearing profile); and adjusting the settings based on the customized configuration (from paragraph 0011, see the hearing profile can be implemented by an application executing on a mobile computing device (e.g., a smartphone) to output augmented audio signals tailored to the user's hearing needs). 
Regarding claim 11, Selig discloses the system of claim 10, wherein the operations further comprise: receiving, by the audio processor, audio content (from paragraph 0047, see wherein the hearing application receives local sounds through a microphone incorporated in the mobile device, handset, or headphones); adjusting, by the audio processor, playback of the audio content based on the customized configuration of the settings (from paragraph 0047, see adjusts the local sounds according to the user's hearing profile); and providing, by the audio processor to a sound interface, adjusted audio content for output by a speaker (from paragraph 0047, see outputs the adjusted local sounds through the headset or headphones worn by the user to augment the user's hearing experience). Regarding claim 12, Selig discloses the system of claim 10, wherein: the results represent a hearing threshold level of each or a combination of the user's ears for audio output across various frequency bands and at different intensity levels; and the results indicate a low hearing threshold level experienced by the user at a first frequency band of the various frequency bands (from paragraph 0076, see In one implementation, Block S210 outputs the first audible initially at a minimum or "0" volume setting (i.e., at an inaudible or "0" volume level), and Block S250 displays a command on the user's mobile computing device to increase the volume setting of the mobile computing device (or the native application executing the second method S200) until the first audible tone (or all tones in the set of audible tones in the first frequency sub-range) is heard. Thus, as the user increases the volume setting of the mobile computing device, Block S220 can record the final volume adjustment set by the user. 
In one example, Block S250 can also prompt the user to confirm a user-set volume setting, such as by selecting a "Next" button rendered on the display, and Block S220 can capture the current volume setting when the "Next" button is selected by the user. Block S220 can thus store this volume setting for the first audible tone as a first volume adjustment, which can indicate a minimum audible threshold volume of the first frequency for the user. Block S220 can also calculate and store a difference between the initial volume of the first audible tone and the final volume adjustment entered by the user, such as in decibel change or absolute or relative (e.g., percentage) increase in peak, average, or continuous power output to drive the audio output device during playback of the first audible tone). Regarding claim 13, Selig discloses the system of claim 12, wherein: the customized configuration comprises an intensity level of the first frequency band set at a first adjusted level of intensity that compensates for the low hearing threshold level experienced by the user (from paragraph 0079, see Once an output profile of a related audio output device is selected in Block S224, Block S220 can normalize a volume adjustment entered by the user according to the output profile of the audio output device. In particular, Block S220 can normalize the first volume adjustment to a standard volume that is substantially consistent across a set of computing devices and/or audio output devices, etc. Blocks S222, S223, etc. can implement similar methods or techniques to normalize volume settings entered by the user for the second volume adjustment, the third volume adjustment, etc). 
Regarding claim 16, Selig discloses the system of claim 10, wherein: the customized configuration comprises at least one of: a maximum intensity threshold of the first frequency band; or a minimum intensity threshold of the first frequency band; and adjusting the settings comprises compressing a dynamic range of intensity of the first frequency band via setting at least one of: a first compression level of the intensity of the first frequency band to the maximum intensity threshold; or a second compression level of the intensity of the first frequency band to the minimum intensity threshold. Regarding claim 17, Selig discloses a device (from paragraph 0011, see mobile computing device (e.g., a smartphone)), comprising: an audio processor (from paragraph 0097, see computer-executable component can be a processor); a processing system (from paragraph 0097, see The systems and methods); and memory (from paragraph 0097, see The computer-readable medium can be stored on any suitable computer readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device) storing instructions that, when executed, cause the device to perform operations comprising: receiving an indication of a selection to customize settings of the audio processor (from paragraph 0013, see As shown in FIGS. 
1 and 4, Block S110 of the first method S100 recites identifying an audio output device worn or in use by a user); receiving results of a hearing test, wherein the results indicate a hearing threshold level experienced by at least one ear of the user (from paragraph 0011, see processes the user's responses in light of the musical sound signals to create a map of the user's hearing ability and/or to identify the user's hearing needs); determining a customized configuration of the settings based on the results (from paragraph 0011, see This baseline hearing test implemented by the first method S100, can thus enable front-end calibration and/or generation of a user's hearing profile, which can be implemented in various scenarios to augment the user's hearing); adjusting the settings based on the customized configuration (from paragraph 0011, see In one example implementation, the hearing profile can be leveraged by an audiologist to customize a hearing aid for the user); receiving audio content (from paragraph 0011, see audio signals); providing the audio content to the audio processor to adjust output of the audio content based on the customized configuration of settings (from paragraph 0047, see In another example implementation, the hearing profile can be implemented as an augmented hearing application that executes on a smartphone, tablet or other mobile device of the user, wherein the hearing application receives local sounds through a microphone incorporated in the mobile device, handset, or headphones, adjusts the local sounds according to the user's hearing profile); receiving, from the audio processor, adjusted audio content (from paragraph 0011, see augmented audio signals tailored to the user's hearing needs); and providing the adjusted audio content to a sound interface for output by a speaker (from paragraph 0047, see outputs the adjusted local sounds through the headset or headphones worn by the user to augment the user's hearing experience). 
Claim 18 is rejected for the same reasons as claim 2. Regarding claim 19, Selig discloses the device of claim 17, wherein determining the customized configuration comprises performing at least one of: Equalization (from paragraph 0091, see In a similar implementation, Block S240 can transform the particular hearing test selecting in Block S230 into an equalizer (EQ) setting or audio engine parameter for the corresponding audio output device, the connected computing device, and/or the combination of the audio output device with the connected computing device. For example, in this implementation, Block S240 can generate an audio engine parameter (e.g., an equalizer setting) that boosts frequencies projected as difficult for the user to hear (i.e., are associated with high audible threshold volumes) and that attenuates (or does not change) frequencies at which the user hears normally (e.g., frequencies at which the user does not exhibit substantive hearing loss); frequency range transposition; dynamic range compression; or audio multiband compression. Claim Rejections - 35 USC § 103 4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 5. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Selig in view of Morris et al, U.S. Patent No. 11,902,747 (hereinafter Morris). 
Although Selig discloses the customized configuration comprises at least one of: a maximum intensity threshold of the first frequency band or a minimum intensity threshold of the first frequency band (from paragraph 0084, see In particular, a volume adjustment set by the user and normalized for the baseline volume adjustment setting can define a minimum audible threshold volume at a corresponding frequency for the user, and a normalized sound intensity defined in the audiogram can similarly define a define minimum audible threshold volume at a corresponding frequency for the patient. Block S230 can therefore select a particular audiogram--from the set of audiograms--that defines minimum audible threshold volumes of a previous patient that best match minimum audible threshold volumes for the user tested at select frequencies in Blocks S210, S212, S213, etc. Block S230 can then pass this particular audiogram, including the sound intensities relative to frequencies for the corresponding patient, to Block S240 for implementation in generating the hearing profile for the user), Selig does not teach adjusting the settings comprises compressing a dynamic range of intensity of the first frequency band via setting at least one of: a first compression level of the intensity of the first frequency band to the maximum intensity threshold; or a second compression level of the intensity of the first frequency band to the minimum intensity threshold. 
All the same, Morris discloses adjusting the settings comprises compressing a dynamic range of intensity of the first frequency band via setting at least one of: a first compression level of the intensity of the first frequency band to the maximum intensity threshold; or a second compression level of the intensity of the first frequency band to the minimum intensity threshold (from column 5, see Some hearing aids apply a non-linear, frequency-dependent gain to the incoming sound so as to “fit” the output sound to the hearing profile of the wearer. For example, if a wearer has significant hearing loss in higher frequencies and much less hearing loss in lower frequencies, then, for the same input volumes, the hearing aid may apply more gain to higher frequency sounds than lower frequency sounds to equalize, in effect, the audibility or perceived loudness of different sounds across frequencies. Additionally, because those with hearing loss typically have a narrow range of volumes at which they can comfortably hear (a reduced “dynamic range”), some hearing aids apply more gain to quiet sounds and less gain to louder sounds, in effect “compressing” the original signal into the dynamic range of the wearer. These techniques are sometimes referred to as wide-dynamic range compression (WDRC)). Therefore, it would have been obvious to one of ordinary skill in the art to modify Selig wherein adjusting the settings comprises compressing a dynamic range of intensity of the first frequency band via setting at least one of: a first compression level of the intensity of the first frequency band to the maximum intensity threshold; or a second compression level of the intensity of the first frequency band to the minimum intensity threshold as taught by Morris. This modification would have improved comfort by providing more gain to quiet sounds and less gain to louder sounds as suggested by Morris. 6. Claims 8 and 15 are rejected under 35 U.S.C.
103 as being unpatentable over Selig in view of McCoy et al, U.S. Patent Application Publication No. 2024/0024781 (hereinafter McCoy). Regarding claim 8, although Selig discloses the results indicate hearing loss experienced by the user at a first frequency band and determining the customized configuration comprises determining a second frequency band at which the result indicate hearing loss experienced by the user is less than at the first frequency band (from paragraph 0011, see The first method S100 functions to test a user's hearing with various tone- or music-based audio signals, wherein each audio signal is a unique tone or set of tones and is associated with a (distinct) hearing frequency band, in order to generate a baseline hearing profile of the user based on the results of the hearing test), Selig does not explicitly teach adjusting the audio processor settings comprises shifting the first frequency band to the second frequency band. All the same, McCoy discloses adjusting the audio processor settings comprises shifting the first frequency band to the second frequency band (from paragraph 0101, see Process 920 may include displaying graphical element 925 for frequency ranges of audio output, including at least one frequency range, such as frequency component 926. Process 920 can allow a user to specify frequency ranges to avoid, such as a frequency range where user has hearing loss, that cause vibration, or that is a disturbance/unpleasant. Process 920 may also allow for shifting the frequency of sound effects of the game so that they are audible, output frequency increase in volume to indicate presence of gaming opportunities or threats, shifting or volume or pitch based on review of musical accompaniment and sound effects, and/or changes of pitch or frequency may be based on user preferences and/or reactions to sounds. 
Process 920 also allows for use of a visual indicator that can be displayed to indicated information that the AI uses to generate music such as proximity to gaming elements). Therefore, it would have been obvious to one of ordinary skill in the art to modify Selig with adjusting the audio processor settings comprises shifting the first frequency band to the second frequency band as taught by McCoy. This modification would have improved comfort by providing sounds that are more pleasant as suggested by McCoy. Claim 15 is rejected for the same reasons as claim 8. Allowable Subject Matter 7. Claims 5, 14 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion 8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLISA ANWAH whose telephone number is 571-272-7533. The examiner can normally be reached Monday to Friday from 8.30 AM to 6 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards can be reached on 571-270-7136. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600. Olisa Anwah Patent Examiner January 23, 2026 /CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692 /OLISA ANWAH/Primary Examiner, Art Unit 2692
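The wide dynamic range compression that the examiner cites from Morris (more gain for quiet sounds, less gain for louder ones, squeezing the signal into the wearer's reduced dynamic range) can be sketched in a few lines. This is a hypothetical illustration only; the function name and the threshold, ratio, and gain defaults are assumptions chosen for the example, not parameters taken from Morris, Selig, or the application.

```python
def wdrc_gain_db(input_db: float,
                 threshold_db: float = 45.0,
                 ratio: float = 2.0,
                 linear_gain_db: float = 20.0) -> float:
    """Per-band gain (dB) for a simple WDRC sketch.

    Below the compression threshold the full linear gain is applied,
    so quiet sounds get the most amplification.  Above the threshold
    the output rises only 1/ratio dB per input dB, so the gain shrinks
    as the input gets louder.
    """
    if input_db <= threshold_db:
        return linear_gain_db
    # Each dB of input above the threshold reduces the applied gain
    # by (1 - 1/ratio) dB, compressing the band's dynamic range.
    return linear_gain_db - (input_db - threshold_db) * (1.0 - 1.0 / ratio)

# Quiet input: full gain.  Louder inputs: progressively less gain.
print(wdrc_gain_db(40.0))  # 20.0 dB
print(wdrc_gain_db(65.0))  # 10.0 dB
print(wdrc_gain_db(85.0))  # 0.0 dB
```

With these illustrative values, a 45 dB span of input (40-85 dB) is reproduced within a narrower span of output levels, which is the "compressing into the wearer's dynamic range" behavior the Morris quotation describes.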

Prosecution Timeline

Jun 27, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604130: HEARING DEVICE WITH A BLEEDING CIRCUIT FOR DELIVERING MESSAGES TO A CHARGING DEVICE
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12598710: Terminal Device
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12597251: VIDEO FRAMING BASED ON TRACKED CHARACTERISTICS OF MEETING PARTICIPANTS
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12596515: FIRST DEVICE, COMMUNICATION SERVER, SECOND DEVICE AND METHODS IN A COMMUNICATIONS NETWORK
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12598437: EARPHONES AND EARPHONE SYSTEM
Granted Apr 07, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 93% (+4.2%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 1162 resolved cases by this examiner. Grant probability derived from career allow rate.
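Since the note above says the grant probability is derived directly from the career allow rate, the panel's arithmetic can be checked in a couple of lines (counts taken from the Examiner Intelligence panel; the rounding convention is an assumption):

```python
granted, resolved, pending = 1036, 1162, 38  # counts from the examiner panel

allow_rate = granted / resolved           # career allow rate
with_interview = allow_rate + 0.042       # apply the +4.2% interview lift

print(round(allow_rate * 100))            # 89 -> headline grant probability
print(round(with_interview * 100))        # 93 -> "with interview" figure
print(resolved + pending)                 # 1200 -> total applications
```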