Prosecution Insights
Last updated: April 19, 2026
Application No. 18/094,985

CONTENT SHARING USING SOUND-BASED LOCATIONS OF ELECTRONIC DEVICES

Status: Final Rejection (§103)
Filed: Jan 09, 2023
Examiner: KRZYSTAN, ALEXANDER J
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 4 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 1m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 81% (913 granted / 1,121 resolved; +19.4% vs TC avg; above average)
Interview Lift: +6.9% (moderate) on resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 38 applications currently pending
Career History: 1,159 total applications across all art units
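The 88% "with interview" figure appears to follow from these numbers by simple addition of the lift to the career allow rate. A quick check, assuming the lift is an additive percentage-point adjustment (our reading; the tool does not document its formula):

\[
\frac{913}{1121} \approx 81.4\%,
\qquad 81.4\% + 6.9\ \text{pp} \approx 88.3\% \approx 88\%
\]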

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)
Deltas are relative to the estimated Tech Center average. Based on career data from 1,121 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner's Comments

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-9, 11-18, 20-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mullins (US 20180332335 A1), and further in view of Wang et al. (US 20220091244 A1).

As per claim 1, Mullins discloses a method, comprising: determining, by the electronic device and based on the patterned audio output/audio output, a location of the electronic device relative to the other electronic device (para. 42: "Therefore, different virtual display settings (e.g., number of virtual displays and positions relative to the computer display 102) can be configured based on the location of the AR device 114 and the computer display 102"; the electronic device must determine the location of the electronic device (AR headset) relative to the other electronic device (monitor) in order to position the virtual displays relative to the computer display as shown in fig. 10; additionally, the AR headset/electronic device requires determination of the relative location of any virtual object relative to the real world and relative to the headset itself, in order to perform a 3D rendering of said object, for the purpose of producing a 3D-rendered object relative to both the headset and also relative to the monitor 102); receiving display content from the other electronic device (para. 32: a display driver of the computer, the display driver configured to generate a first desktop area in the first physical display and to generate a second desktop area in the virtual display, the second desktop area being an expansion of the first desktop area), the display content based on the location of the electronic device relative to the other electronic device (as shown in fig. 10, where the position of the user, virtual object, and physical monitor must all be accounted for in order to produce the view shown in fig. 10, noting the image is a stereoscopic image which requires knowledge of the relative locations of the electronic devices and all virtual images); and displaying the display content at the electronic device (the view of 104 in fig. 10 via the AR goggles).
However, Mullins does not disclose: and storing second information indicating a spatial arrangement of a first speaker and a second speaker at an other electronic device, a first patterned audio output and a second patterned audio output from another the other electronic device; determining, by the electronic device based on the first patterned audio output, [[and]] the second patterned audio output, the stored first information indicating the predetermined audio pattern features, and the stored second information indicating the spatial arrangement of the first speaker and the second speaker of the other electronic device, a location of the other device.

Wang discloses an AR system and teaches that device location can be determined by using audio output from multiple speakers on devices including (fig. 3): and storing second information indicating a spatial arrangement of a first speaker and a second speaker at an other electronic device (para. 41; the other electronic device/user device can comprise speakers per the user devices in para. 57; since the computing device can track the location of the devices with speakers with acoustic-based motion tracking and/or localization per para. 57, the computing device requires information about the relative spatial positions of the multiple speakers in order to determine the device location; in addition, the number of speakers is another indicator of spatial arrangement and is known per the speaker-output-based functions in para. 57), a first patterned audio output and a second patterned audio output from another the other electronic device (each speaker transmits its own FMCW/patterned signal per para. 57); determining, by the electronic device based on the first patterned audio output, [[and]] the second patterned audio output, the stored first information indicating the predetermined audio pattern features, and the stored second information indicating the spatial arrangement of the first speaker and the second speaker of the other electronic device, a location of the other device (para. 57: the 3D location of beacon 302 is the location of the other device since it is determined between the device and the other device). It would have been obvious to one skilled in the art at the time of filing to implement the acoustic processing using speakers and microphones as cited for the purpose of determining the device locations in the system of Mullins.

As per claim 2, the method of claim 1, wherein the display content comprises an extension of display content displayed at the other electronic device (as shown in fig. 10, Mullins).

As per claim 3, the method of claim 1, wherein the display content comprises at least a portion of a desktop view displayed at the other electronic device (as shown in fig. 10, Mullins; the display in 104 is part of the display field of 102).

As per claim 4, the method of claim 3, further comprising providing, to the other electronic device, the location of the electronic device relative to the other electronic device (para. 24: the computer, which is part of the other electronic device, controls the virtual display configured as a second physical display; since it is controlled by the computer, the computer must receive the relative locations of the electronic devices in order to position the virtual display as shown in fig. 10, Mullins).
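[Editor's note] The "FMCW/patterned signal" Wang is cited for above is a frequency-modulated continuous-wave pattern: a linear frequency sweep that lets a receiver tell speakers apart and measure arrival times. A minimal sketch in Python of what such a pattern looks like (the function name, sweep range, and sample rate are illustrative assumptions, not drawn from Wang):

    import numpy as np

    def fmcw_chirp(f0_hz, f1_hz, duration_s, sample_rate=48_000):
        # Linear FMCW sweep from f0_hz to f1_hz. The instantaneous
        # phase of a linear chirp is 2*pi*(f0*t + k*t^2/2), with
        # sweep rate k in Hz per second.
        t = np.arange(int(duration_s * sample_rate)) / sample_rate
        k = (f1_hz - f0_hz) / duration_s
        return np.sin(2 * np.pi * (f0_hz * t + 0.5 * k * t * t))

    # Illustrative use: give each speaker a distinguishable pattern,
    # e.g. an up-sweep for the first speaker and a down-sweep for the
    # second, so the receiving microphones can separate them.
    chirp_up = fmcw_chirp(18_000, 22_000, 0.05)
    chirp_down = fmcw_chirp(22_000, 18_000, 0.05)

Distinct sweeps, or staggered emission windows like the virtual time-of-arrival offsets discussed for claims 11 and 21 below, are what make the two patterned outputs separable at the receiver.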
As per claim 5, the method of claim 1, wherein determining the location comprises determining an angular location of the electronic device relative to the other electronic device based on the stored second information indicating the spatial arrangement of the first speaker and the second speaker of the other electronic device and a difference between the received first patterned audio output and the received second patterned audio output (the embodiment of fig. 3 of Wang notes the methods described herein are used per para. 57, noting the time-of-arrival processing that is cited in a further embodiment in paras. 61-63) (noting that, using the calculated distances, processor 106 may calculate the 3D location of the speakers/angular location per para. 63; processor 106 subtracts (differences) the virtual time-of-arrival offset for the corresponding speaker from the time-of-arrival for the corresponding signal to obtain the distance; the time-of-arrival offsets are part of the stored second information that indicates the spatial arrangement).

As per claim 6, the method of claim 1, wherein receiving the audio output comprises receiving a first portion of the audio output at a first microphone of the electronic device and receiving a second portion of the audio output at a second microphone of the electronic device (para. 4, Wang, via the microphones), and wherein determining the location comprises determining an angular location of the electronic device relative to the other electronic device (per the claim 5 rejection) based on a difference between the received first portion of the audio output and the received second portion of the audio output (the difference between the signals received by the microphones of the electronic device by definition is what determines the spatial locations of the electronic devices per the time-of-arrival processing per the claim 5 rejection).

As per claim 7, the method of claim 1, further comprising: receiving a clock synchronization signal from the other electronic device at the electronic device (since the electronic devices are connected by a network per fig. 1 of Mullins, and the computer/other electronic device controls the virtual display that is displayed by the AR glasses, the devices must each receive clock sync signals in order to perform the disclosed functions), wherein determining the location comprises determining a distance from the electronic device to the other electronic device using the audio output and the clock synchronization signal (the system of the claim 1 rejections is a realtime adaptive system where the electronic devices are in sync in order to perform the cited functions; as such, all functions cited above, including the determining-a-distance step, are performed via the synchronized operation of both electronic devices, which requires the use of clock synchronization signals in order to keep the devices synchronized).

As per claim 8, the method of claim 7, wherein receiving the clock synchronization signal comprises receiving the clock synchronization signal in a wireless electromagnetic signal (the AR headset is wireless and uses Bluetooth per para. 50, Mullins, which uses wireless EM signals, where all signaling, including the required clock sync signal, is EM).

As per claim 9, Mullins discloses an AR headset with a computer and physical monitor (fig. 1); the electronic device in the system described below requires a non-transitory computer-readable medium storing instructions which, when executed by one or more processors, for the purpose of implementing the cited steps, cause the one or more processors to: receive, from another electronic device responsive to relative locations of the electronic devices, display content for display at the electronic device (para. 32: a display driver of the computer, the display driver configured to generate a first desktop area in the first physical display and to generate a second desktop area in the virtual display, the second desktop area being an expansion of the first desktop area), the display content based on a relative location of the electronic device relative to the other electronic device (para. 42: "Therefore, different virtual display settings (e.g., number of virtual displays and positions relative to the computer display 102) can be configured based on the location of the AR device 114 and the computer display 102"; the electronic device must determine the location of the electronic device (AR headset) relative to the other electronic device (monitor) in order to position the virtual displays relative to the computer display as shown in fig. 10; additionally, the AR headset/electronic device requires determination of the relative location of any virtual object relative to the real world and relative to the headset itself, in order to perform a 3D rendering of said object, for the purpose of producing a 3D-rendered object relative to both the headset and also relative to the monitor 102); and display the display content at the electronic device at a location that corresponds to a location of additional display content displayed at the other electronic device (the view of 104 in fig. 10 via the AR goggles).

However, Mullins does not disclose: output first patterned audio from one or more speakers a first speaker of an electronic device; output second patterned audio from a second speaker of the electronic device, the second speaker spatially separated by a predetermined amount from the first speaker. Wang discloses an AR system and teaches that device location can be determined by using audio output from multiple speakers on devices, as shown (fig. 3): and storing second information indicating a spatial arrangement of a first speaker and a second speaker at an other electronic device (para. 41; the other electronic device/user device can comprise speakers per the user devices in para. 57; since the computing device can track the location of the devices with speakers with acoustic-based motion tracking and/or localization per para. 57, the computing device requires information about the relative spatial positions of the multiple speakers in order to determine the device location; in addition, the number of speakers is another indicator of spatial arrangement and is known per the speaker-output-based functions in para. 57), a first patterned audio output and a second patterned audio output from another the other electronic device (each speaker transmits its own FMCW/patterned signal per para. 57); determining, by the electronic device based on the first patterned audio output, [[and]] the second patterned audio output, the stored first information indicating the predetermined audio pattern features, and the stored second information indicating the spatial arrangement of the first speaker and the second speaker of the other electronic device, a location of the other device (para. 57: the 3D location of beacon 302 is the location of the other device since it is determined between the device and the other device). It would have been obvious to one skilled in the art at the time of filing to implement the acoustic processing using speakers and microphones as cited for the purpose of determining the device locations in the system of Mullins. Noting that in the combined system, the location of the device is detected per the method taught by Wang; then the display content is based on a relative location of the electronic device relative to the other electronic device, the relative location based on the first patterned audio output (as received from one speaker), the second patterned audio output (as received from another speaker), and the predetermined amount (the distance between the speakers causes differences in the signals received by each microphone; as such, the calculation of location and subsequent display content is based on the distance between the speakers).

As per claim 10, the non-transitory computer-readable medium of claim 9, wherein the instructions, when executed by the one or more processors, cause the one or more processors to output the audio by outputting first audio content with a first speaker of the electronic device and outputting second audio content with a second speaker of the electronic device (the output devices can comprise speakers per fig. 3 of Wang).

As per claim 11, the non-transitory computer-readable medium of claim 10, wherein outputting the first audio content with the first speaker of the electronic device comprises outputting the first audio content with the first speaker during a first period of time, and wherein outputting the second audio content with the second speaker of the electronic device comprises outputting the second audio content with the second speaker of the electronic device during a second period of time different from the first period of time (para. 57 says that the embodiment of fig. 3 can use the methods herein, and para. 60 discloses virtual time-of-arrival offsets, which define first and second periods of time for different speakers to output their audio content).

As per claim 12, the non-transitory computer-readable medium of claim 10, wherein outputting the first audio content with the first speaker of the electronic device comprises outputting the first audio content with the first speaker during a first period of time, wherein outputting the second audio content with the second speaker of the electronic device comprises outputting the second audio content with the second speaker of the electronic device during the first period of time (the speakers of fig. 3, Wang, can perform the tracking per the claim 1 rejection where all of the sounds are output over a period of time), and wherein the first audio content is different from the second audio content (each speaker receives a separate audio signal which is separate audio content).

As per claim 13, the non-transitory computer-readable medium of claim 9, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to provide a time synchronization signal for the audio from the electronic device to the other electronic device (the clock sync signal per the claim 7 rejection operates in synchronization with the received audio in order to present the image as shown in fig. 10 of Mullins).
As per claim 14, the claim 1 rejection discloses a method, comprising: receiving, by an electronic device, an audio output from another electronic device (per the claim 1 rejection); determining, by the electronic device and based on the audio output, a location of the other electronic device (per the claim 1 rejection); and providing content to the other electronic device based on the location of the other electronic device (the electronic devices are coupled via a data network per fig. 1 of Mullins, and operate in synchronization to perform the functions cited above, via a wireless network; as such, each device must continually provide signaling/content to the other in order to remain in synchronization and to perform the functions cited above and in the claim 1 rejection, where wireless communication is by definition based on location of the devices); storing information indicating predetermined audio pattern features for device location, [[an]] a patterned audio output from another electronic device; determining, by the electronic device [[and]] based on the patterned audio output and the stored information indicating the predetermined audio pattern features (per the claim 1 rejection).

As per claim 14, a method, comprising: receiving, by an electronic device storing first information indicating predetermined audio pattern features for device location and storing second information indicating a spatial arrangement of a first speaker and a second speaker at another electronic device, a first patterned audio output and a second patterned audio output from another the other electronic device (per the claim 1 rejection); determining, by the electronic device based on the first patterned audio output, [[and]] the second patterned audio output, the stored first information indicating the predetermined audio pattern features, and the stored second information indicating the spatial arrangement of the first speaker and the second speaker of the other electronic device, a location of the other electronic device (per the claim 1 rejection); and providing content to the other electronic device based on the location of the other electronic device (displaying the display content, per the claim 1 rejection).

As per claim 15, the method of claim 14, wherein providing the content to the other electronic device based on the location of the other electronic device comprises: identifying the other electronic device as a target for the content based on the location (para. 42: "The position and the orientation of the AR device 114 may be used to identify real-world objects in a field of view of the AR device 114.") and based on a user gesture corresponding to the location (the user looking at the computer/gesture per para. 42: "For example, a virtual object may be rendered and displayed in the display 204 when the sensors 202 indicate that the AR device 114 detects or is oriented towards a predefined real-world object (e.g., when the user 112 looks at the computer display 102 using the AR device 114)"); and providing the content to the other electronic device identified as the target (the functions cited in para. 42 and in the above claim rejection require content/network signaling to be continually transferred between the two devices in order to maintain synchronization).
As per claim 16, the method of claim 14, wherein providing the content to the other electronic device based on the location of the other electronic device comprises: providing display content to the other electronic device based on the location of the electronic device relative to the other electronic device (the content, as per the claim 14 and 15 rejections, is display content as it is used to enable the display per fig. 10 of Mullins).

As per claim 17, the method of claim 16, wherein the display content provided to the other electronic device is a first portion of the display content (a portion of the content of the claim 16 rejection), and wherein the method further comprises displaying a second portion of the display content at the electronic device based on the location of the electronic device relative to the other electronic device (the content used to enable the communication between the devices to enable the image as shown in fig. 10 of Mullins).

As per claim 18, the above rejections disclose a method, comprising: providing, from a first speaker of an electronic device, audio output/patterned audio output for location of the electronic device; providing, from a second speaker of the electronic device, a second patterned audio output for determining the location of the electronic device, the second speaker spatially separated by a predetermined amount from the first speaker (per the claim 9 rejection); displaying a first portion of display content at the electronic device (per fig. 10 of Mullins); and providing a second portion of the display content to another electronic device for display at the other electronic device based on the location of the electronic device (the display content provided to enable the physical monitor in fig. 10 of Mullins).

As per claim 20, the method of claim 19, wherein the first audio content is the same as the second audio content (the audio is played through both speakers with content for tracking the location), wherein outputting the first audio content from the first speaker comprises outputting the first audio content from the first speaker during a first period of time, and wherein outputting the second audio content from the second speaker comprises outputting the second audio content from the second speaker during a second period of time different from the first period of time (per the claim 11 rejection).

As per claim 21, the method of claim 20, wherein the first audio content is different from the second audio content/output (each signal out of each speaker is a different signal and hence different content, noting the different virtual offsets for each speaker as cited above), and wherein outputting the second audio content from the second speaker comprises outputting the second audio content from the second speaker concurrently with outputting the first audio content from the first speaker (the speakers are played back concurrently per para. 60, Wang: "to support the concurrent transmission of signals from multiple speakers (e.g., signals 410a-410d, 412a-412d, 414a-414d, and 416a-416d from speakers 404a-404d), virtual time-of-arrival offsets are introduced for each respective user device."), noting the cited embodiment in fig. 3 is stated to use the methods herein per para. 57, including that disclosed for fig. 4.

As per claim 22, the method of claim 18, further comprising providing a time synchronization signal for the audio output from the electronic device to the other electronic device (per the claim 7 and 13 rejections).
Response to Arguments

The submitted arguments have been considered but are not persuasive. The examiner notes 'information indicating the spatial arrangement' as recited in claim 1 is not described in detail in the specification. In the most recent interview, applicant said that the distance between the speakers was the information; however, that does not appear to be disclosed in the specification or recited claims. As per applicant's argument that Wang is silent as to any information about relative spatial positions, the 'relative spatial positions' of the speakers does not appear to be recited in either of applicant's claims or disclosed in applicant's specification. As per applicant's argument that Wang is silent as to the number of speakers, the examiner notes that does not appear in the claims. As per applicant's characterization that Wang is primarily directed to determination of distance to a device based on detection of a single speaker, while not disclosing the details of the two-speaker implementation: the examiner notes applicant's own specification also does not disclose any details, or even a single example, of what the claimed 'information about relative spatial positions' is. The examiner notes one skilled in the art would read applicant's specification as enabled, as well as that of the prior art to Wang, each requiring the use of a shared/common coordinate system and/or indication of the relative positions of the disclosed speaker and microphone arrays in order to determine distances based on multiple received signals from multiple sources as explicitly stated. The disclosure and teachings of Wang clearly state multiple speakers sending in para. 56 as part of a function to determine the 3D position of a device as previously cited.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER KRZYSTAN, whose telephone number is 571-272-7498 and whose email address is alexander.krzystan@uspto.gov. The examiner can usually be reached M-F, 7:30-4:00 EST. If attempts to reach the examiner by telephone or email are unsuccessful, the examiner's supervisor, Fan Tsang, can be reached at (571) 272-7547. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications.

/ALEXANDER KRZYSTAN/
Primary Examiner, Art Unit 2653
February 24, 2026
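[Editor's note] The localization technique the rejection turns on (two speakers at a known spacing, clock-synchronized emission, arrival-time measurement at the receiving device) reduces to standard time-of-flight and time-difference-of-arrival geometry. A minimal sketch under idealized assumptions (free-field propagation, perfect clock sync; all names and numbers are our illustration, not drawn from Mullins or Wang):

    import math

    SPEED_OF_SOUND = 343.0  # m/s in room-temperature air (assumption)

    def distance_from_time_of_flight(t_emit, t_arrive):
        # Range to one speaker, assuming the two devices' clocks are
        # synchronized (the clock sync signal of claims 7 and 13).
        return SPEED_OF_SOUND * (t_arrive - t_emit)

    def angle_from_tdoa(delta_t, baseline_m):
        # Far-field angle (radians from broadside) of the receiver
        # relative to a two-speaker array with known spacing
        # baseline_m, given the time-difference-of-arrival delta_t
        # between the two patterned outputs:
        #   baseline * sin(theta) ~= c * delta_t
        x = SPEED_OF_SOUND * delta_t / baseline_m
        return math.asin(max(-1.0, min(1.0, x)))  # clamp noisy inputs

    # Illustrative numbers: speakers 0.30 m apart; the second pattern
    # arrives 0.4 ms after the first.
    theta = angle_from_tdoa(0.4e-3, 0.30)
    print(f"angular location ~ {math.degrees(theta):.1f} degrees")

The angle formula uses the far-field approximation, so it is only meaningful when the receiving device is several baselines away from the speaker pair; this is the "spatial arrangement" information (the inter-speaker spacing) whose written-description support is disputed in the Response to Arguments above.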

Prosecution Timeline

Jan 09, 2023: Application Filed
Sep 09, 2024: Response after Non-Final Action
Apr 02, 2025: Non-Final Rejection — §103
Jun 25, 2025: Examiner Interview Summary
Jun 25, 2025: Applicant Interview (Telephonic)
Jun 30, 2025: Response Filed
Jul 18, 2025: Final Rejection — §103
Sep 22, 2025: Response after Non-Final Action
Oct 21, 2025: Request for Continued Examination
Oct 27, 2025: Response after Non-Final Action
Nov 10, 2025: Non-Final Rejection — §103
Feb 12, 2026: Applicant Interview (Telephonic)
Feb 13, 2026: Response Filed
Feb 13, 2026: Examiner Interview Summary
Feb 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598440: RENDERING OF OCCLUDED AUDIO ELEMENTS (2y 5m to grant; granted Apr 07, 2026)
Patent 12593170: SWITCHING METHOD FOR AUDIO OUTPUT CHANNEL, AND DISPLAY DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12573410: DECODER, ENCODER, AND METHOD FOR INFORMED LOUDNESS ESTIMATION IN OBJECT-BASED AUDIO CODING SYSTEMS (2y 5m to grant; granted Mar 10, 2026)
Patent 12574675: Acoustic Device and Method (2y 5m to grant; granted Mar 10, 2026)
Patent 12541554: TRANSCRIPT AGGREGATON FOR NON-LINEAR EDITORS (2y 5m to grant; granted Feb 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 81%
With Interview: 88% (+6.9%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 1,121 resolved cases by this examiner. Grant probability derived from career allow rate.
