Prosecution Insights
Last updated: April 19, 2026
Application No. 18/162,847

MODIFYING AUDIO FOR PRESENTATION TO A USER BASED ON A DETERMINED LOCATION OF AN AUDIO SYSTEM PRESENTING THE AUDIO

Final Rejection — §102, §103
Filed
Feb 01, 2023
Examiner
SUTHERS, DOUGLAS JOHN
Art Unit
2695
Tech Center
2600 — Communications
Assignee
Meta Platforms Technologies, LLC
OA Round
4 (Final)
Grant Probability
76% (Favorable)
Expected OA Rounds
5-6
Time to Grant
3y 0m
With Interview
87%

Examiner Intelligence

Career Allow Rate
76% (above average) — 598 granted / 783 resolved, +14.4% vs TC avg
Interview Lift
+11.0% (moderate), measured across resolved cases with an interview
Avg Prosecution
3y 0m typical timeline; 16 applications currently pending
Total Applications
799 across all art units
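The headline figures above are simple arithmetic on the raw counts. A minimal sketch, assuming (per the projections footnote) that grant probability equals the career allow rate, that the interview lift is additive, and that the TC average is implied by the stated +14.4% delta:

```python
# Sketch of how the dashboard's headline figures follow from the raw counts.
# Assumptions: grant probability == career allow rate, an additive interview
# lift, and a Tech Center average implied by the stated +14.4% delta.

granted, resolved = 598, 783            # examiner's career totals
interview_lift = 11.0                   # stated lift, in percentage points
tc_average = 62.0                       # implied: 76.4% - 14.4%

allow_rate = 100 * granted / resolved   # career allow rate, in percent

print(f"Career allow rate: {allow_rate:.1f}%")                         # 76.4%, shown as 76%
print(f"Delta vs TC avg:   {allow_rate - tc_average:+.1f}%")           # +14.4%
print(f"With interview:    {round(allow_rate) + interview_lift:.0f}%") # 87%
```

The 87% "with interview" figure is just the rounded allow rate plus the lift; whether the underlying model is actually additive is an assumption here.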

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 37.6% (-2.4% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 31.4% (-8.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 783 resolved cases.
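Each delta above is stated relative to the Tech Center average, so the implied baseline can be recovered as rate minus delta. A quick check (the recovery formula is an assumption; the figures are the dashboard's own) shows every statute points to the same baseline near 40%:

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and the stated delta: TC avg = rate - delta (assumed).

stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (6.3, -33.7),
    "103": (37.6, -2.4),
    "102": (17.8, -22.2),
    "112": (31.4, -8.6),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: examiner {rate:>5.1f}% vs TC avg ~{tc_avg}%")  # ~40.0% each
```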

Office Action

§102, §103
DETAILED ACTION

In the response to this office action, the examiner respectfully requests that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line numbers in the specification and/or drawing figure(s). This will assist the examiner in prosecuting this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 4, 9, 20, and 21 are objected to because of the following informalities:

Claim 4 states "wherein data describing the local area surrounding the audio system includes…", which should be "wherein the data describing the local area surrounding the audio system includes…" to make clear that it refers to the data already mentioned. Claim 9 is objected to in an analogous manner.

Claim 20 states "the method of claim 18, wherein data describing the local area surrounding the audio system includes a time when an audio system is surrounded by the local area". This appears to be missing something describing what is done with this data. The examiner's best guess is that this should be "the method of claim 1, obtaining data describing the local area surrounding the audio system that includes a time when an audio system is surrounded by the local area", or applicant may choose to include an intervening claim similar to claims 2 or 12 (claim 20 contains subject matter similar to claims 4 and 14, dependent on claims 2 and 12 respectively).

Claim 21 states "The method of claim 1, wherein: the context information representative of at least one of", which would be better as "The method of claim 1, wherein: the context information is representative of at least one of".

Appropriate correction is required.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 8, 10, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Boven et al. (US 20180154150 A1).

Regarding claim 1, Boven discloses a method comprising: determining a location (paragraphs [0020], [0038], and [0078], "location sensing mechanism") of an audio system based at least in part on a plurality of entries in a database (paragraph [0038], "a database of reference sound profiles 123"; see figure 1, paragraphs [0020] and [0078]), the location being based on a semantic descriptor identifying the location (names of geolocation and sub-location such as "office" and "office desk" respectively, paragraph [0078]) and context information specific to the location (the sensed geolocation, paragraphs [0020], [0038], and [0078], "location sensing mechanism", "GPS"), the context information being different from the semantic descriptor; determining a location profile ("sound profile", paragraph [0078]) of the location based on the location that is determined (paragraph [0038], "a database of reference sound profiles 123"; see figure 1, paragraphs [0020] and [0078]: "In some examples of the system, each sound profile is associated with a stored geolocation, the system further comprises a location sensing mechanism in communication with the auditory device processor, and when the processor compares the frequency content of the environmental sound with the sound profiles stored in the database and, in response to the comparison, selects one of the sound profiles"; "For example, in the 'office' geolocation, the processor may be restricted to choosing between the (i) office desk, (ii) office conference room, and (iii) office cafeteria, settings"; see sound profiles as in the user's home example) and a plurality of location profiles (entire database; looks through multiple entries, paragraph [0078], "Each sound profile may also be associated with a stored geolocation"; may also have further location setting specifications such as office locations i-iii); modifying audio content for presentation by one or more transducers of the audio system based on acoustic parameters included in the location profile (applying a sound profile matching settings optimized for the user; see paragraph [0076] for content modified by profile 123; figure 2, parameter settings 222, "parameter settings associated with the sound profile"; see also paragraphs [0019], [0072], and [0076]-[0077]); and presenting the audio content that is modified to a user via a transducer array included in the audio system (speakers of a hearing aid; see also paragraph [0073]).

Regarding claim 8, Boven discloses wherein modifying the audio content for presentation by one or more transducers of the audio system based on the one or more acoustic parameters associated with the location profile comprises: removing audio having one or more characteristics specified by the one or more acoustic parameters (noise rejection/cancellation, paragraphs [0010], [0019], and [0076]).

Regarding claim 10, Boven discloses wherein the audio content for presentation by the one or more transducers of the audio system comprises audio captured from a local area by one or more sensors of the audio system (sound left after noise rejection/cancellation, paragraphs [0010], [0019], and [0076]).

Claim 18 is rejected in an analogous manner to claim 1 given the non-transitory computer readable medium of paragraph [0066].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 20 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Boven et al. (US 20180154150 A1) in view of Dittberner et al. (US 20150172831 A1) (provided at the request of applicant upon contesting official notice).

Regarding claim 20, Boven does not expressly disclose using time data. Dittberner discloses determining a sound profile (via parameter map 40 and environment classifier 38; see at least paragraphs [0159]-[0160] and [0171]) based on environment data including time data (paragraph [0129]; note it also uses location). It would have been obvious to a person of ordinary skill in the art to use the time data of Dittberner in the system of Boven and Chen for the benefit of profiling busy and slow business times, or populated and unpopulated areas, differently, thereby modeling the situation better. Therefore, it would have been obvious to combine Dittberner with Boven and Chen, for the benefits above, to obtain the invention as specified in claim 20.

Regarding claim 21, Boven does not expressly disclose using time data.
Dittberner discloses determining a sound profile (via parameter map 40 and environment classifier 38; see at least paragraphs [0159]-[0160] and [0171]) based on environment data including context information representative of at least one of: characteristics of at least an audio signal captured in association with the location; a time of day when a signal associated with the location was obtained (paragraph [0129]; note it also uses location); and an image of the location. It would have been obvious to a person of ordinary skill in the art to use the time data of Dittberner in the system of Boven and Chen for the benefit of profiling busy and slow business times, or populated and unpopulated areas, differently, thereby modeling the situation better. Therefore, it would have been obvious to combine Dittberner with Boven and Chen, for the benefits above, to obtain the invention as specified in claim 21.

Claims 11, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Boven et al. (US 20180154150 A1) in view of Chen et al. (US 20220070585 A1) and Richards et al. (US 20170363865 A1).
Regarding claim 11, Boven discloses a headset (hearing aid of paragraph [0018]) comprising: an audio system including a transducer array configured to present audio (speaker of hearing aid), a sensor array (microphone of hearing aid, 104 of figure 1, paragraph [0018]) configured to capture audio from a local area including the headset, and an audio controller (106 of figure 1), the audio controller including a processor and a non-transitory computer readable storage medium (paragraph [0066]) having stored instructions that, when executed by the processor, cause the audio system to: determine a location (paragraphs [0020], [0038], and [0078], "location sensing mechanism") of an audio system based at least in part on a plurality of entries in a database (paragraph [0038], "a database of reference sound profiles 123"; see figure 1, paragraphs [0020] and [0078]), the location being based on a semantic descriptor identifying the location (names of geolocation and sub-location such as "office" and "office desk" respectively, paragraph [0078]) and context information specific to the location (the sensed geolocation, paragraphs [0020], [0038], and [0078], "location sensing mechanism", "GPS"), the context information being different from the semantic descriptor; determine a location profile ("sound profile", paragraph [0078]) of the location based on the location that is determined (paragraph [0038], "a database of reference sound profiles 123"; see figure 1, paragraphs [0020] and [0078]: "In some examples of the system, each sound profile is associated with a stored geolocation, the system further comprises a location sensing mechanism in communication with the auditory device processor, and when the processor compares the frequency content of the environmental sound with the sound profiles stored in the database and, in response to the comparison, selects one of the sound profiles"; "For example, in the 'office' geolocation, the processor may be restricted to choosing between the (i) office desk, (ii) office conference room, and (iii) office cafeteria, settings"; see sound profiles as in the user's home example) and a plurality of location profiles (entire database; looks through multiple entries, paragraph [0078], "Each sound profile may also be associated with a stored geolocation"; may also have further location setting specifications such as office locations i-iii); modify audio content for presentation by one or more transducers of the audio system based on acoustic parameters included in the location profile (applying a sound profile matching settings optimized for the user; see paragraph [0076] for content modified by profile 123; figure 2, parameter settings 222, "parameter settings associated with the sound profile"; see also paragraphs [0019], [0072], and [0076]-[0077]); and present the audio content that is modified to a user via a transducer array included in the audio system (speakers of a hearing aid; see also paragraph [0073]).

Boven does not expressly disclose the claimed frame and display elements. Richards discloses a headset (see figure 2A) comprising: a frame (figure 2A, items 220A-E); one or more display elements (115 of figure 2B) coupled to the frame, each display element configured to generate image light (paragraph [0003]); and an audio system (paragraph [0018]). It would have been obvious before the effective filing date of the claimed invention to a person of ordinary skill in the art to use the frame and display headset elements of Richards in the system of Boven for the benefit of allowing a virtual video experience along with the audio. Therefore, it would have been obvious to combine Richards with Boven to obtain the invention as specified in claim 11.
Regarding claim 16, Boven discloses wherein the stored instructions to modify the audio content for presentation by the one or more transducers of the audio system based on the one or more acoustic parameters associated with the location profile further comprise stored instructions that, when executed, cause the audio system to: enhance audio from one or more sound sources captured by the sensor array of the audio system relative to audio from other sound sources captured by the sensor array (sound left after noise rejection/cancellation, paragraphs [0010], [0019], and [0076]; see also noise removed from speech, paragraphs [0072] and [0083]).

Claim 17 is rejected in an analogous manner to claim 8.

Allowable Subject Matter

Claims 2-6, 9, 12-15, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments filed January 2, 2026 have been fully considered, but they are not persuasive. In general, applicant argues that the amendments to the previous claims are not found in the prior art (see applicant's arguments dated January 2, 2026, starting at page 11). A new ground of rejection has been established as above with reconsideration of the prior art references.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS JOHN SUTHERS, whose telephone number is (571) 272-0563. The examiner can normally be reached M-F, 8 am-5 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DOUGLAS J SUTHERS/
Examiner, Art Unit 2695

/VIVIAN C CHIN/
Supervisory Patent Examiner, Art Unit 2695

Prosecution Timeline

Feb 01, 2023
Application Filed
Dec 14, 2024
Non-Final Rejection — §102, §103
Mar 17, 2025
Response Filed
Jun 19, 2025
Final Rejection — §102, §103
Aug 25, 2025
Applicant Interview (Telephonic)
Aug 25, 2025
Examiner Interview Summary
Sep 22, 2025
Request for Continued Examination
Sep 23, 2025
Response after Non-Final Action
Sep 27, 2025
Non-Final Rejection — §102, §103
Jan 02, 2026
Response Filed
Mar 15, 2026
Final Rejection — §102, §103 (current)
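The timeline dates make pendency to date easy to measure. A small sketch, using the filing and current final-rejection dates from the timeline above (the year/month split uses approximate 365/30-day arithmetic):

```python
from datetime import date

filed = date(2023, 2, 1)     # Application Filed
current = date(2026, 3, 15)  # current Final Rejection

elapsed = (current - filed).days
years, rem = divmod(elapsed, 365)
print(f"Pendency to date: {years}y {rem // 30}m ({elapsed} days)")  # 3y 1m (1138 days)
```

At roughly 3y 1m, the application is already past the examiner's 3y 0m median time to grant, consistent with the high PTA risk flagged in the projections.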

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597410
ACTIVE NOISE CONTROL CIRCUIT WITH MULTIPLE FILTERS CONNECTED IN PARALLEL FASHION AND ASSOCIATED METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12581261
SOUND PROCESSING SYSTEM AND SOUND PROCESSING METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12574697
MEDIA PLAYBACK BASED ON SENSOR DATA
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573364
AUDIO DATA PROCESSING METHOD, ELECTRONIC DEVICE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573366
METHOD AND APPARATUS FOR OCCUPANT-BASED NOISE CANCELLATION
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds
5-6
Grant Probability
76%
With Interview
87% (+11.0%)
Median Time to Grant
3y 0m
PTA Risk
High
Based on 783 resolved cases by this examiner. Grant probability is derived from the career allow rate.
