Prosecution Insights
Last updated: April 19, 2026
Application No. 18/370,764

RELOCATION OF SOUND COMPONENTS IN SPATIAL AUDIO CONTENT

Final Rejection — §102, §103
Filed: Sep 20, 2023
Examiner: KANG, ANNABELLE
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)
Grant Probability: 80% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 63%

Examiner Intelligence

Career Allow Rate: 80% — above average (12 granted / 15 resolved; +18.0% vs TC avg)
Interview Lift: -16.7% (minimal; with vs. without interview, among resolved cases with interview)
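The lift metric compares the examiner's allowance rate on interviewed cases against the overall career rate, in percentage points. A minimal sketch of that computation follows; the exact interviewed-case split behind this dashboard's figure is not shown, so the 5-of-8 counts below are hypothetical, and only the 12-granted / 15-resolved career totals come from this page.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(12, 15)        # career allow rate: 80.0%
with_interview = allow_rate(5, 8)  # hypothetical: 5 of 8 interviewed cases granted -> 62.5%
lift_pp = with_interview - career  # lift in percentage points: -17.5

print(f"career {career:.1f}%, with interview {with_interview:.1f}%, lift {lift_pp:+.1f} pp")
```

With these hypothetical counts the lift comes out near the -17 points shown above; the dashboard's -16.7% implies a slightly different split.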
Avg Prosecution: 2y 8m (typical timeline; 24 currently pending)
Total Applications: 39 (career history; across all art units)

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 33.5% (-6.5% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 15 resolved cases

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-14, and 17-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Laaksonen (KR 20210024598 A, hereinafter Laaksonen; the claim rejections refer to the translated document).

Regarding claim 1, Laaksonen teaches a method for relocating a sound component in a sound field of spatial audio content, comprising: identifying the sound component for relocation, comprising determining that the sound component is located within a center region in the sound field (see [0009]-[0010], [0028]-[0029]: receiving, which is identifying, relocation signaling from a remote user; a spatial audio effect for relocating the peripheral perceptual direction based on the relocation signaling; performing the rotation of one of the peripheral perception direction and the speech perception direction around the center.
Laaksonen provides supporting detail in [0125]-[0129]: audio content may be rendered as spatial audio from perceived locations that lie on a notional circle or sphere around the user 200, as shown by circle 500; in other examples, in which the directional information includes distance information, the perceived locations may be rendered from locations at different distances.);

separating the sound component from one or more non-central sound components in the spatial audio content (see [0049]: separation between the speech perception direction and the peripheral perception, in other words, the center region and non-central sound components);

processing the sound component to relocate the sound component to a location in the sound field outside the center region in the sound field (see [0049], [0094]-[0096]: spatial audio processing; relocating the peripheral perception, which is the perception surrounding the center region); and

integrating the relocated sound component with the one or more non-central sound components to provide integrated spatial audio content (see Claim 1, Fig. 11: providing a presentation, which is an integration, of audio content using modification of a spatial audio effect for relocating the speech perceptual direction to the reference position based on signaling; non-central sound components such as 1105 are integrated into the spatial audio content).

Regarding claim 2, Laaksonen teaches that identifying the sound component for relocation further comprises determining that the sound component corresponds to a voice (see [0038]-[0040]: user audio content for presentation of relocation signaling includes user voice audio).

Regarding claim 3, Laaksonen teaches that identifying the sound component for relocation further comprises determining that the sound component is a voice of a predetermined user
(see [0038]-[0040], [0044]: including audio of the user's voice or a remote user, who is already a predetermined user).

Regarding claim 4, Laaksonen teaches that identifying the sound component for relocation further comprises matching one or more characteristics of the sound component to one or more predetermined criteria (see [0144]-[0145]: user relocation signaling based on a predetermined type of user input; "based on" means that criteria must be matched).

Regarding claim 5, Laaksonen teaches that the spatial audio content is a binaural audio recording (see [0094], [0111]: spatial audio content includes head-tracked binaural audio).

Regarding claim 6, Laaksonen teaches identifying an additional sound component for relocation, comprising determining that the additional sound component is located within the center region in the sound field; separating the additional sound component from the sound component and the one or more non-central sound components; processing the additional sound component to relocate the additional sound component to an additional location in the sound field outside the center region in the sound field; and integrating the additional sound component with the sound component and the one or more non-central sound components to provide the integrated spatial audio content (see [0026], [0030], [0047]-[0049], [0095]-[0098]: a secondary, or "additional," sound component, namely the "ambient" audio, is determined and separated for rearranging the speech perceptual direction, then integrated so the audio content is presented as one).

Regarding claim 8, Laaksonen teaches that the location in the sound field outside the center region is in front of a center location in the sound field
(see [0117], [0147]: audio content associated with direction information with respect to a reference point, that being a direction toward the user device maintained in front of the user).

Regarding claims 9-14, the claimed limitations directly correspond to those of method claims 1-6 and are therefore rejected for substantially the same reasons as claims 1-6 discussed above.

Regarding claim 17, Laaksonen teaches determining audio context information based on analyzing one or more control signals to generate the noise information or the voice information (see [0096]: audio analysis techniques to identify secondary audio, including ambient audio, namely audio other than audio determined to have been generated by the voice of one or more remote users).

Regarding claim 18, the claimed limitations directly correspond to those of claim 1 and are therefore rejected for substantially the same reasons as claim 1 discussed above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 7, 15-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Laaksonen (KR 20210024598 A) in view of Kratz (US 20200236487 A1, hereinafter Kratz).

Regarding claim 7, Laaksonen fails to teach determining motion information about a user; and causing an audio output device to output the integrated spatial audio content such that a sound field of the integrated spatial audio content is moved with respect to the user based on the motion information.

However, Kratz teaches determining motion information about a user; and causing an audio output device to output the integrated spatial audio content such that a sound field of the integrated spatial audio content is moved with respect to the user based on the motion information (see [0084]-[0085]: spatial audio panorama 520 moves according to the movements of the user 510 as well as movements recorded in the activity).

Laaksonen and Kratz are considered analogous to the claimed invention because both are in the field of spatial audio signal processing and immersive sound reproduction. It would have been obvious to a person of ordinary skill in the art to apply the broad teachings of Kratz (determining motion information about a user) to Laaksonen in order to provide a more enhanced and realistic audio presentation experience.

Regarding claim 15, Laaksonen fails to teach motion tracking circuitry communicably coupled to the processor and configured to determine motion information about the user, wherein the processor is further configured to output the relocated sound component and the non-central sound components via the audio output device such that the sound field is moved with respect to the user based on the motion information.
However, Kratz teaches motion tracking circuitry communicably coupled to the processor and configured to determine motion information about the user, wherein the processor is further configured to output the relocated sound component and the non-central sound components via the audio output device such that the sound field is moved with respect to the user based on the motion information (see [0085], [0093]: processing system 104 tracks user 610 and the user's movement in the sound field environment).

Laaksonen and Kratz are considered analogous to the claimed invention because both are in the field of spatial audio signal processing and immersive sound reproduction. It would have been obvious to a person of ordinary skill in the art to apply the broad teachings of Kratz (motion tracking circuitry for determining motion information about the user) to Laaksonen in order to provide an extended reality experience to a user.

Regarding claim 16, Laaksonen fails to teach a camera communicably coupled to the processor, wherein the processor is further configured to: identify a sound component output location in an image frame captured from the camera; and identify the location in the sound field outside the center region in the sound field based on a relationship between the sound component output location and the sound field.

However, Kratz teaches a camera communicably coupled to the processor, wherein the processor is further configured to: identify a sound component output location in an image frame captured from the camera; and identify the location in the sound field outside the center region in the sound field based on a relationship between the sound component output location and the sound field
(see [0084]: a 360° camera mounted on a helmet, the camera including a directional microphone system that records a spatial audio panorama based on the direction of the user).

Laaksonen and Kratz are considered analogous to the claimed invention because both are in the field of spatial audio signal processing and immersive sound reproduction. It would have been obvious to a person of ordinary skill in the art to apply the broad teachings of Kratz (identifying a sound component output location in an image frame captured from a camera) to Laaksonen in order to provide a more enhanced and realistic audio presentation experience.

Regarding claim 19, Laaksonen fails to teach determining motion information about a user; and outputting the sound component and the non-central sound components via the audio output device such that the sound field is moved with respect to the user based on the motion information.

However, Kratz teaches determining motion information about a user; and outputting the sound component and the non-central sound components via the audio output device such that the sound field is moved with respect to the user based on the motion information (see [0084]-[0085]: spatial audio panorama 520 moves according to the movements of the user 510 as well as movements recorded in the activity).

Laaksonen and Kratz are considered analogous to the claimed invention because both are in the field of spatial audio signal processing and immersive sound reproduction. It would have been obvious to a person of ordinary skill in the art to apply the broad teachings of Kratz (determining motion information about a user) to Laaksonen in order to provide a more enhanced and realistic audio presentation experience.

Regarding claim 20, Laaksonen fails to teach processing the sound component based on the motion information such that the sound component is moved with respect to the user independently from the sound field.
However, Kratz teaches processing the sound component based on the motion information such that the sound component is moved with respect to the user independently from the sound field (see [0084]-[0085]: spatial audio panorama 520 moves according to the movements of the user 510 as well as movements recorded in the activity, i.e., with respect to the user).

Laaksonen and Kratz are considered analogous to the claimed invention because both are in the field of spatial audio signal processing and immersive sound reproduction. It would have been obvious to a person of ordinary skill in the art to apply the broad teachings of Kratz (determining motion information about a user) to Laaksonen in order to provide a more enhanced and realistic audio presentation experience.

Response to Arguments

Applicant's arguments filed October 28, 2025 have been fully considered but they are not persuasive. On pages 6-7 of the remarks, applicant mainly argues that the art of record fails to disclose a sound component within a central region of a sound field, processing the separated sound component to relocate the sound component to a location outside the center region, and integrating the relocated sound component with non-central sound components.

The Examiner disagrees and maintains that, as pointed out in the rejection above, Laaksonen clearly teaches a method for relocating a sound component in a sound field of spatial audio content, comprising: identifying the sound component for relocation, comprising determining that the sound component is located within a center region in the sound field (see [0009]-[0010], [0028]-[0029]: receiving, which is identifying, relocation signaling from a remote user; a spatial audio effect for relocating the peripheral perceptual direction based on the relocation signaling; performing the rotation of one of the peripheral perception direction and the speech perception direction around the center.
Laaksonen provides supporting detail in [0125]-[0129]: audio content may be rendered as spatial audio from perceived locations that lie on a notional circle or sphere around the user 200, as shown by circle 500; in other examples, in which the directional information includes distance information, the perceived locations may be rendered from locations at different distances. Where the specification recites the center 206 and center region 208, these correspond to what is clearly shown in [0129] of Laaksonen as the center 200 and the circular center region 500.);

separating the sound component from one or more non-central sound components in the spatial audio content (see [0049]: separation between the speech perception direction and the peripheral perception, in other words, the center region and non-central sound components);

processing the sound component to relocate the sound component to a location in the sound field outside the center region in the sound field (see [0049], [0094]-[0096]: spatial audio processing; relocating the peripheral perception, which is the perception surrounding the center region); and

integrating the relocated sound component with the one or more non-central sound components to provide integrated spatial audio content (see Claim 1, Fig. 11: providing a presentation, which is an integration, of audio content using modification of a spatial audio effect for relocating the speech perceptual direction to the reference position based on signaling; non-central sound components such as 1105 are integrated into the spatial audio content).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNABELLE KANG whose telephone number is (571) 270-3403. The examiner can normally be reached Monday-Thursday, 8:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANNABELLE KANG/
Examiner, Art Unit 2695

/VIVIAN C CHIN/
Supervisory Patent Examiner, Art Unit 2695
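For orientation, the four steps recited in claim 1 (identify a component in the center region, separate it from non-central components, relocate it outside the center region, and re-integrate) can be sketched as a simple pipeline. This is an illustrative sketch only: the SoundComponent model, the 15-degree center-region half-width, and the 45-degree relocation target are invented for the example and appear in neither the application nor the cited art.

```python
from dataclasses import dataclass, replace

# Hypothetical minimal model: each component carries only a name and an
# azimuth (degrees, 0 = straight ahead); real spatial audio components
# would also carry audio buffers, elevation, distance, etc.
@dataclass(frozen=True)
class SoundComponent:
    name: str
    azimuth_deg: float

CENTER_HALF_WIDTH_DEG = 15.0  # assumed angular half-width of the "center region"

def in_center_region(c: SoundComponent) -> bool:
    # Step 1 (identify): is the component located within the center region?
    return abs(c.azimuth_deg) <= CENTER_HALF_WIDTH_DEG

def relocate_center_components(scene, target_azimuth_deg=45.0):
    # Step 2 (separate): split center components from non-central ones.
    central = [c for c in scene if in_center_region(c)]
    non_central = [c for c in scene if not in_center_region(c)]
    # Step 3 (process): move each central component outside the center region.
    relocated = [replace(c, azimuth_deg=target_azimuth_deg) for c in central]
    # Step 4 (integrate): recombine into a single spatial audio scene.
    return non_central + relocated

scene = [SoundComponent("voice", 3.0), SoundComponent("ambience", 120.0)]
out = relocate_center_components(scene)
# The voice is moved to 45 degrees; the ambience is untouched.
```

A real implementation would operate on separated audio signals (for example, an extracted voice stem) rather than labeled components, but the control flow mirrors the claimed steps.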

Prosecution Timeline

Sep 20, 2023
Application Filed
Dec 03, 2024
Response after Non-Final Action
Jul 25, 2025
Non-Final Rejection — §102, §103
Oct 28, 2025
Response Filed
Jan 30, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604141
ULTRA-LOW FREQUENCY SOUND COMPENSATION METHOD AND SYSTEM BASED ON HAPTIC FEEDBACK, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant — Granted Apr 14, 2026
Patent 12581255
SYSTEMS AND METHODS FOR ASSESSING HEARING HEALTH BASED ON PERCEPTUAL PROCESSING
2y 5m to grant — Granted Mar 17, 2026
Patent 12556868
Speaker
2y 5m to grant — Granted Feb 17, 2026
Patent 12549895
DYNAMIC WIND DETECTION FOR ADAPTIVE NOISE CANCELLATION (ANC)
2y 5m to grant — Granted Feb 10, 2026
Patent 12513372
AUDIO DATA PROCESSING METHOD, AUDIO DATA PROCESSING APPARATUS, COMPUTER READBLE STORAGE MEDIUM, AND ELECTRONIC DEVICE SUITABLE FOR STAGE
2y 5m to grant — Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 63% (-16.7%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
