Prosecution Insights
Last updated: April 19, 2026
Application No. 18/615,914

AUDIO BEAM STEERING, TRACKING AND AUDIO EFFECTS FOR AR/VR APPLICATIONS

Status: Final Rejection (§102, §103)
Filed: Mar 25, 2024
Examiner: HUBER, PAUL W
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Meta Platforms Technologies, LLC
OA Round: 4 (Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 1m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 85% (929 granted / 1091 resolved; +23.2% vs TC avg). Grants above average.
Interview Lift: +9.5% (moderate, roughly +10%) among resolved cases with interview.
Avg Prosecution: 2y 1m (fast prosecutor). 36 currently pending.
Total Applications (career history): 1127 across all art units.

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 23.3% (-16.7% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 1091 resolved cases.

Office Action

Grounds: §102, §103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-8, 10-17, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lyren et al. (US 2024/0031474).

Regarding claims 1 and 11, Lyren discloses a computer-implemented method and a system (see figures 30A-32C, for example), comprising: one or more processors 3974; and a memory 3970 storing instructions which, when executed by the one or more processors 3974, cause the system to perform the method (see fig. 39, for example), comprising: receiving, from an immersive reality application (see para. 0058, regarding “sound localization … in augmented reality (AR), virtual reality (VR), audio augmented reality (AAR)”), a first audio waveform from a first acoustic source (e.g., telephone call from Alice) and a second audio waveform from a second acoustic source (e.g., telephone call from Bob); placing the first acoustic source (e.g., Alice) in a first virtual position 3132 relative to a user 3130 of a client device 3100; placing the second acoustic source (e.g., Bob) in a second virtual position 3133 relative to the user 3130 of the client device 3100 (see fig. 31, for example, which illustrates virtual positions of connected calls of acoustic sources relative to a user); displaying a menu of acoustic sources selectable by the user, the menu including the first acoustic source and the second acoustic source; receiving a selection of at least one of the first acoustic source and the second acoustic source (see figs. 30A, 30D, and 32A, for example, wherein each figure teaches a menu of acoustic sources selectable by the user as claimed); applying a spatial audio filter (e.g., soundscape volume, voice volume) to an audio waveform associated with the selection based on user input to select one of a plurality of spatial audio filters (see fig. 32A and para. 0268, for example, regarding “user interfaces allow the user to adjust a soundscape or background sound level and a voice sound level for each of multiple other parties and sound sources being localized to the user. For example, a user can use the interface to adjust the controls in order to hear a low volume of soundscape from Alice (when Alice is in a noisy place) and a high volume level of soundscape for Bob (when Bob is at a place with background sounds that the user desires to hear). Further, the sound levels of the backgrounds and voices of the various sound sources on a call can be adjusted relative to each other”); and providing, to a speaker of the client device, an audio signal including the audio waveform associated with the selection (see fig. 32A, for example, wherein the user can select a spatial audio ‘soundscape’ filter to attenuate the background sounds in the audio waveform associated with Alice, or the user can select a spatial audio ‘voice’ filter to attenuate the voice volume in the audio waveform associated with Alice).

Regarding claims 2 and 12, the immersive reality application is a podcast, and wherein the first acoustic source includes a first participant (e.g., Alice) of the podcast and the second acoustic source includes a second participant (e.g., Bob) of the podcast. See para. 0217, regarding “the voice can originate from various sources or software applications include, but not limited to, a telephone call, streaming audio, audio archive (such as a recorded radio show, podcast, or recorded music), …or voices or sounds from another source.”

Regarding claims 3, 4 and 13, the client device is a headset. The headset comprises a display in at least one eyepiece. See para. 0086, regarding “a person or a user can select one or more SLPs that provide a location where sound will localize to the person. As one example, the person selects a location for where to externally localize sound through interaction with a UI or a display of an electronic device, such as a smartphone, a head mounted display, or an optical head mounted display.” See also, para. 0101, regarding “wear[ing] a head mounted display (HMD) and headphones that provide a virtual world…”

Regarding claims 5 and 14, the first virtual position 3132 is different from the second virtual position 3133. See fig. 31, for example.
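The per-source level controls the rejection cites from Lyren (independent "soundscape" and "voice" volumes for each localized caller, mixed into one output) amount to a simple per-source gain mixer. The sketch below is purely illustrative; the names (`AcousticSource`, `mix_sources`) and the list-based sample representation are hypothetical and appear in neither reference.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the controls described in Lyren para. 0268:
# each localized source carries separate "voice" and "soundscape"
# (background) streams, each with its own user-adjustable gain.

@dataclass
class AcousticSource:
    name: str
    voice: list[float]       # voice-band samples
    soundscape: list[float]  # background-sound samples
    voice_gain: float = 1.0
    soundscape_gain: float = 1.0

def mix_sources(sources: list[AcousticSource]) -> list[float]:
    """Sum every source's voice and background at its own gain levels."""
    n = max(len(s.voice) for s in sources)
    out = [0.0] * n
    for s in sources:
        for i in range(n):
            v = s.voice[i] if i < len(s.voice) else 0.0
            b = s.soundscape[i] if i < len(s.soundscape) else 0.0
            out[i] += s.voice_gain * v + s.soundscape_gain * b
    return out

# The scenario from the cited passage: attenuate Alice's noisy
# background, keep Bob's background at full level.
alice = AcousticSource("Alice", [0.5, 0.5], [0.4, 0.4], soundscape_gain=0.1)
bob = AcousticSource("Bob", [0.3, 0.3], [0.2, 0.2], soundscape_gain=1.0)
signal = mix_sources([alice, bob])
```

Each sample of `signal` is then Alice's voice at full level plus 10% of her background, summed with Bob's voice and full background, matching the relative-adjustment language quoted from para. 0268.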
Regarding claims 6, 7, 15, and 16, placing the first acoustic source (e.g., Alice) in the first virtual position 3132 comprises placing a first virtual representation of the first acoustic source (e.g., picture of Alice) in a virtual scene. Placing the second acoustic source (e.g., Bob) in the second virtual position 3133 comprises placing a second virtual representation of the second acoustic source (e.g., picture of Bob) in the virtual scene. See para. 0198, regarding “a picture of Alice appears at the location on the user interface where her voice localizes to the user”.

Regarding claims 8 and 17, a first direction of the first acoustic source (e.g., Alice) relative to the client device 3100 is identified based on the first virtual position 3132 (for example, Alice’s voice appears to be coming from the left direction relative to the client device 3100). A second direction of the second acoustic source (e.g., Bob) relative to the client device 3100 is identified based on the second virtual position 3133 (for example, Bob’s voice appears to be coming from the right direction relative to the client device 3100).

Regarding claims 10 and 19, the audio signal includes at least one of the first audio waveform (e.g., telephone call from Alice) and the second audio waveform (e.g., telephone call from Bob).

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 9, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lyren et al. (US 2024/0031474), as applied to the claims above, in further view of Ivanov et al. (US 2018/0341455).

Lyren discloses the invention as claimed, including displaying relative distances and directions of acoustic sources relative to the user wearing the client device (see figures 15 and 17, for example), but fails to specifically teach inserting the first audio waveform into the audio signal, the first audio waveform including a first time delay and a first amplitude based on the first direction of the first acoustic source, and inserting the second audio waveform into the audio signal, the second audio waveform including a second time delay and a second amplitude based on the second direction of the second acoustic source.

Ivanov discloses a sound localization system in an immersive reality application for placing a plurality of acoustic sources in virtual positions relative to a user of a client device. Ivanov discloses that “the location of the visual representation of the apparent source within the captured scene could also be adjusted. In such an instance, the audio could include adjusted volume level and time delay to account for the change in location” (para. 0047). Ivanov discloses adjusting the volume level and time delay of the visual representation of the source, in the same field of endeavor, for the purpose of giving the user the impression that the volume level and time delay of the source matches the visual representation of the source in the virtual scene (e.g., an object a direction and distance from a user in a scene will have an audio amplitude and delay corresponding to that direction and distance in the scene).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to modify Lyren, in view of Ivanov, such that the method and system further includes inserting the first audio waveform into the audio signal, the first audio waveform including a first time delay and a first amplitude based on the first direction of the first acoustic source, and inserting the second audio waveform into the audio signal, the second audio waveform including a second time delay and a second amplitude based on the second direction of the second acoustic source. A practitioner in the art would have been motivated to do this for the purpose of giving the user the impression that the volume level and time delay of the source matches the visual representation of the source in the virtual scene (e.g., object 1751 in fig. 17, which has an audio amplitude and delay corresponding to the relative position of the object in the virtual scene).

Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The reference cited on the PTO-892 discloses a method of externally localizing telephone calls with respect to a user wearing a device.
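The volume-level and time-delay adjustment the rejection attributes to Ivanov (amplitude and delay chosen from each source's direction and distance, then inserted into the output signal, per the claims 9, 18, and 20 language) can be sketched as below. All names, constants, and the crude directional rolloff are illustrative assumptions, not taken from either reference.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 48_000    # Hz

def delay_and_gain(azimuth_deg: float, distance_m: float) -> tuple[int, float]:
    """Per-source rendering parameters: a time delay and an amplitude
    derived from the source's direction and distance.

    Delay: propagation time over the distance, in whole samples.
    Gain: inverse-distance attenuation, scaled by a cosine rolloff that
    weakens sources far from straight ahead (a crude directional model).
    """
    delay_samples = round(distance_m / SPEED_OF_SOUND * SAMPLE_RATE)
    directional = 0.5 * (1.0 + math.cos(math.radians(azimuth_deg)))
    gain = directional / max(distance_m, 1.0)
    return delay_samples, gain

def insert_waveform(signal: list[float], waveform: list[float],
                    azimuth_deg: float, distance_m: float) -> None:
    """Insert one source's waveform into the output audio signal with
    its direction-based time delay and amplitude applied."""
    delay, gain = delay_and_gain(azimuth_deg, distance_m)
    for i, x in enumerate(waveform):
        j = i + delay
        if j < len(signal):
            signal[j] += gain * x

# Alice 45° to the left at 2 m, Bob 45° to the right at 4 m: Bob's
# waveform lands in the output later and quieter than Alice's.
out = [0.0] * SAMPLE_RATE
insert_waveform(out, [1.0] * 100, azimuth_deg=-45.0, distance_m=2.0)
insert_waveform(out, [1.0] * 100, azimuth_deg=45.0, distance_m=4.0)
```

With these assumed constants, Alice's waveform starts 280 samples into the signal and Bob's 560 samples in, at roughly twice Alice's amplitude advantage, which is the "amplitude and delay corresponding to that direction and distance" effect the examiner quotes from Ivanov para. 0047.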
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL W HUBER whose telephone number is (571)272-7588. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at telephone number 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.

/PAUL W HUBER/
Primary Examiner, Art Unit 2691
pwh
February 27, 2026

Prosecution Timeline

Mar 25, 2024
Application Filed
Oct 08, 2024
Non-Final Rejection — §102, §103
Jan 16, 2025
Response Filed
Apr 06, 2025
Final Rejection — §102, §103
Aug 11, 2025
Request for Continued Examination
Aug 12, 2025
Response after Non-Final Action
Aug 28, 2025
Non-Final Rejection — §102, §103
Dec 02, 2025
Response Filed
Feb 27, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604150: Method and System For Spatial Audio Processing Using Multiple Orders Of Ambisonics (2y 5m to grant; granted Apr 14, 2026)
Patent 12593189: METHOD OF GENERATING VIBRATION FEEDBACK SIGNAL, ELECTRONIC DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Mar 31, 2026)
Patent 12593159: MAGNETIC EARPHONES HOLDER (2y 5m to grant; granted Mar 31, 2026)
Patent 12587803: INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD (2y 5m to grant; granted Mar 24, 2026)
Patent 12587804: LOCATION-AWARE NEURAL AUDIO PROCESSING IN CONTENT GENERATION SYSTEMS AND APPLICATIONS (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
85%
Grant Probability
95%
With Interview (+9.5%)
2y 1m
Median Time to Grant
High
PTA Risk
Based on 1091 resolved cases by this examiner. Grant probability derived from career allow rate.
