Prosecution Insights
Last updated: April 18, 2026
Application No. 18/650,220

Audio System with Personal Zones

Non-Final OA (§102/§103)
Filed
Apr 30, 2024
Examiner
SELLERS, DANIEL R
Art Unit
2694
Tech Center
2600 — Communications
Assignee
BOSE CORPORATION
OA Round
2 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 3y 6m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 67% (401 granted / 595 resolved; +5.4% vs TC avg)
Interview Lift: +16.9% among resolved cases with an interview
Avg Prosecution: 3y 6m (28 applications currently pending)
Total Applications: 623 across all art units
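The figures above can be sanity-checked from the raw counts shown on this page. A minimal sketch (the with/without-interview split is not shown here, so the 84% with-interview figure is taken as reported rather than recomputed, and the simple difference below will differ slightly from the page's +16.9%, which is computed against the without-interview subset):

```python
# Reproduce the career allow rate from the counts on this page:
# 401 granted out of 595 resolved cases.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(401, 595)   # ~67.4%
with_interview = 84.0           # as reported on the page, not recomputed
interview_lift = with_interview - career

print(f"career allow rate: {career:.1f}%")
print(f"interview lift:    {interview_lift:+.1f} points")
```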

Statute-Specific Performance

§101: 5.9% (-34.1% vs TC avg)
§103: 63.6% (+23.6% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 595 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1-3, 6, 8, 10, 12-15, 17, 19, 21-23, 26-27, 31, 33, 35-36, 38, 43, 45-46, 48, 51, and 54 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A new reference, Oswald et al. (US 2022/0141611 A1), is cited in the new grounds of rejection, where it is acknowledged that Oswald et al. (US 2023/0403529 A1) is disqualified as a reference against the present application under 35 USC 102(a)(1) and 102(a)(2).

Claim Rejections - 35 USC § 102

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim(s) 1-3, 8, 10, 13-15, 17, 19, 22-23, 26-27, 33, 35, 38, 45, 48, 51, and 54 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Oswald et al.
(US 2022/0141611 A1 and hereafter Oswald).

Regarding claim 1, Oswald anticipates: “A system, comprising: a first set of near-field (NF) speakers” by teaching a binaural device with speakers disposed in a headrest (see Oswald, ¶ 0043 and 0074, figures 1A and 5, units 110, 118L, and 118R); “a set of sensors for detecting at least one of a position or an orientation of a first user in a first listening location” by teaching headtracking devices, such as multiple 2D cameras or a combination of various sensors (see Oswald, ¶ 0075 and 0087, and figure 5, units 506 and 508); and “a controller coupled with the first set of NF speakers and the set of sensors” by teaching a controller that receives the outputs from the headtracking devices and generates a binaural signal for the first set of NF speakers (see Oswald, ¶ 0077-0078 and figure 5, units 504, 506, 110, and b1), “and configured to: adjust an audio output at the first set of NF speakers, wherein the controller is configured to maintain spatialization of the audio output to the first user based on detecting at least one of a position change or an orientation change of the first user” by teaching that the controller generates a binaural signal based on the position signal received from the headtracking device, where the binaural signal uses a plurality of head-related transfer functions (HRTFs) to adjust the binaural signal for a listener to perceive a sound emanating from a specific virtual spatial point (see Oswald, ¶ 0077-0079 and figure 5, units 504, 506, 110, 114L-114R, h1, e1, b1, and SP1), and “wherein the controller is configured to control the spatialized audio output to the first user with the first set of NF speakers with consideration of isolation to a second listening location” by teaching that the first listening position receives different audio from a second listening position and that the two or more listening positions are isolated such that the listening positions hear different spatialized audio (see
Oswald, ¶ 0035, 0037, 0061, 0064-0065, 0067, and 0074, figures 1A and 5, units P1 and P2).

Regarding claim 2, see the preceding rejection with respect to claim 1 above. Oswald anticipates the “system of claim 1, wherein maintaining spatialization of the audio output includes maintaining independent control of an acoustic signal received at a left ear of the first user and an acoustic signal received at a right ear of the first user” by teaching that the binaural signal uses a plurality of HRTFs to adjust the binaural signal for each ear, so that a listener perceives a sound emanating from a specific virtual spatial point (see Oswald, ¶ 0077-0079 and figure 5, units 504, 506, 110, 114L-114R, h1, e1, b1, and SP1).

Regarding claim 3, see the preceding rejection with respect to claim 2 above. Oswald anticipates the “system of claim 2, wherein the audio output is spatialized such that the acoustic signals received at the left ear of the first user and the right ear of the first user create a perceived acoustic source from a virtual location” by teaching the left and right HRTFs used to control the virtual spatial point (Oswald, ¶ 0077-0079, and figure 5, units 504, 110, 114L-114R, 118L-118R, b1, and SP1), “wherein the virtual location is not associated with a location of the first set of NF speakers, or is associated with a location of one of the first set of NF speakers” by teaching the virtual spatial point that is a location other than the actual location of the speakers (see Oswald, ¶ 0079 and figure 5, units 102, 118L-118R, 120L-120R, and SP1), and “wherein the virtual location is perceived as originating from a location that is separated by at least 5 degrees from any speaker used in producing the audio output” by teaching that the plural HRTFs simulate sounds specific to various locations around the user with respect to an azimuth angle and elevation, and illustrating a virtual location that is more than 5 degrees separated from the actual loudspeakers (see Oswald, ¶ 0079 and
figure 5, units 102, 118L-118R, and SP1).

Regarding claim 8, see the preceding rejection with respect to claim 1 above. Oswald anticipates the “system of claim 1, wherein the audio output produces a sound stage that is perceived as being in front of the first user, wherein the sound stage perceived as being in front of the user is perceived as being located approximately forward of the first user's ears” by teaching virtual sound sources that are perceived in front of the user, such as simulated left, right, and/or center channels in front of the user (see Oswald, ¶ 0079-0080 and figure 5, unit SP1).

Regarding claim 10, see the preceding rejection with respect to claim 1 above. Oswald anticipates the “system of claim 1, wherein the audio output includes full bandwidth audio output, and wherein a set of additional speakers outside of the near field provide a low frequency portion of the audio output” by teaching the binaural output device for the upper frequency range and the perimeter speakers for the bass, or lower frequency range (see Oswald, ¶ 0037-0038, 0043, and 0074 and figure 5, units 102 and 110).

Regarding claim 13, see the preceding rejection with respect to claim 1 above. Oswald anticipates the “system of claim 1, wherein the controller includes a dynamic array module configured to control a tradeoff between isolation of the audio output to the first user and audio performance of the audio output to the first user” by teaching that the isolation is unnecessary when only one person is detected (see Oswald, ¶ 0073).

Regarding claim 14, see the preceding rejection with respect to claim 1 above.
Oswald anticipates the “system of claim 1, wherein the first listening location includes a first seating location and wherein the second listening location includes one of a second seating location or a standing location” by teaching the vehicle audio system with at least two listening positions corresponding to different seats in the vehicle (see Oswald, ¶ 0038 and figures 1A-1B and 5, units 106, 108, P1, and P2).

Regarding claim 15, see the preceding rejection with respect to claim 1 above. Oswald anticipates the “system of claim 1, wherein the first set of NF speakers includes at least two NF speaker elements” by teaching left and right headrest speakers (see Oswald, ¶ 0037 and 0043, and figures 1A-1B and 5, units 110 and 114L-114R).

Regarding claim 17, see the preceding rejection with respect to claim 1 above. Oswald anticipates the “system of claim 1, wherein the set of sensors includes one or both of: i) at least two sensors, or ii) at least two optical sensors” by teaching headtracking devices, such as multiple 2D cameras or a combination of various sensors (see Oswald, ¶ 0075 and 0087, and figure 5, units 506 and 508).

Regarding claim 19, see the preceding rejection with respect to claim 1 above.
Oswald anticipates the “system of claim 1, wherein the controller is configured to provide the audio output from a first audio source and provide a second audio output to a second user at the second listening location from a second audio source, wherein the controller is further configured to maintain isolation of the second audio output during a position and/or orientation change of the second user” by teaching the vehicle audio system with at least two listening positions corresponding to different seats in the vehicle (see Oswald, ¶ 0038 and figures 1A-1B and 5, units 106, 108, P1, and P2), and by teaching that the second listening position receives different audio from the first listening position and that the two or more listening positions are isolated such that the listening positions hear different spatialized audio (see Oswald, ¶ 0035, 0037-0038, 0061, 0064-0065, 0067, and 0074, figures 1A and 5, units 106, 108, P1, and P2).

Regarding claim 22, see the preceding rejection with respect to claim 1 above. Oswald anticipates the “system of claim 1, wherein the first user is located in a first listening location, and wherein the set of sensors are further configured to detect a position of a second user in the second listening location” by teaching two or more listening positions, where there is a second set of sensors, or headtracking devices, such as multiple 2D cameras or a combination of various sensors, to detect the second user in the second listening position (see Oswald, ¶ 0075 and 0087, and figure 5, units 504, 506, 508, P1, and P2).

Regarding claim 23, see the preceding rejection with respect to claim 1 above.
Oswald anticipates “A vehicle comprising the system of claim 1, wherein the first listening location includes a first seating location in the vehicle, wherein the first seating location includes a vehicle seat having a headrest portion, wherein a portion of the first set of NF speakers are located in the headrest portion” by teaching a binaural device with speakers disposed in a headrest (see Oswald, ¶ 0043 and 0074, figures 1A and 5, units P1, 110, 118L, and 118R), and “wherein the second listening location includes a second seating location for a second user in the vehicle” by teaching a second listening position corresponding to a different seat in the vehicle (see Oswald, ¶ 0038 and figures 1A-1B and 5, units 108 and P2).

Regarding claim 26, Oswald anticipates: “A vehicle audio system” (see Oswald, ¶ 0074 and figure 5), “comprising: a first set of near-field (NF) speakers for providing a first audio output to a first listening location” by teaching a binaural device with speakers disposed in a headrest for first audio output to a first seating position (see Oswald, ¶ 0043 and 0074, figures 1A and 5, units 106, 110, 114L-114R, 118L-118R, and P1); “a set of sensors for detecting at least one of a position or an orientation of a first user in the first listening location” by teaching headtracking devices, such as multiple 2D cameras or a combination of various sensors, for a first seating position (see Oswald, ¶ 0075 and 0087, and figure 5, units 506 and P1); and “a controller coupled with the first set of NF speakers and the set of sensors and configured to adjust an audio output at the first set of NF speakers” by teaching a controller that receives the outputs from the headtracking devices and generates a binaural signal for the first set of NF speakers (see Oswald, ¶ 0077-0078 and figure 5, units 504, 506, 110, h1, e1, and b1), “wherein the controller is configured to maintain spatialization of the audio output to the first user based on detecting at least one of
a position change or an orientation change of the first user” by teaching that the controller generates a binaural signal based on the position signal received from the headtracking device, where the binaural signal uses a plurality of HRTFs to adjust the binaural signal for a listener to perceive a sound emanating from a specific virtual spatial point (see Oswald, ¶ 0077-0079 and figure 5, units 504, 506, 110, 114L-114R, h1, e1, b1, P1, and SP1), and “wherein the controller is configured to control the spatialized audio output to the first user with the first set of NF speakers with consideration of isolation to a second listening location” by teaching that the first listening position receives different audio from a second listening position and that the two or more listening positions are isolated such that the listening positions hear different spatialized audio (see Oswald, ¶ 0035, 0037, 0061, 0064-0065, 0067, and 0074, figures 1A and 5, units P1 and P2).

Regarding claim 27, see the preceding rejection with respect to claim 26 above.
Oswald anticipates the “vehicle audio system of claim 26, wherein maintaining spatialization of the audio output includes maintaining independent control of an acoustic signal received at a left ear of the first user and an acoustic signal received at a right ear of the first user” by teaching the use of left and right HRTFs for audio output from the left and right headrest speakers (see Oswald, ¶ 0043, 0074, and 0077-0079, and figure 5, units 504, 506, 110, 114L-114R, 118L-118R, h1, e1, b1, and SP1), “wherein the audio output is spatialized such that the acoustic signals received at the left ear of the first user and the right ear of the first user create a perceived acoustic source from a virtual location” by teaching the left and right HRTFs used to control the virtual spatial point (Oswald, ¶ 0077-0079, and figure 5, units 504, 110, 114L-114R, 118L-118R, b1, and SP1), “wherein the virtual location is not associated with a location of the first set of NF speakers, and wherein the virtual location is perceived as originating from a location that is separated by at least 5 degrees from any speaker used in producing the audio output” by teaching the virtual spatial point that is a location other than the actual location of the speakers, and by teaching that the plural HRTFs simulate sounds specific to various locations around the user with respect to an azimuth angle and elevation, and illustrating a virtual location that is more than 5 degrees separated from the actual loudspeakers (see Oswald, ¶ 0079 and figure 5, units 102, 118L-118R, 120L-120R, and SP1).

Regarding claim 33, see the preceding rejection with respect to claim 26 above.
Oswald anticipates the “vehicle audio system of claim 26, wherein the audio output produces a sound stage that is perceived as being in front of the first user, wherein the sound stage perceived as being in front of the user is perceived as being located approximately forward of the first user's ears” by teaching virtual sound sources that are perceived in front of the user, such as simulated left, right, and/or center channels in front of the user (see Oswald, ¶ 0079-0080 and figure 5, unit SP1).

Regarding claim 35, see the preceding rejection with respect to claim 26 above. Oswald anticipates the “vehicle audio system of claim 26, wherein the audio output includes full bandwidth audio output” by teaching the binaural output device for the upper frequency range and the perimeter speakers for the bass, or lower frequency range (see Oswald, ¶ 0037-0038, 0043, and 0074 and figure 5, units 102 and 110).

Regarding claim 38, see the preceding rejection with respect to claim 26 above. Oswald anticipates the “vehicle audio system of claim 26, wherein the controller includes a dynamic array module configured to control a tradeoff between: isolation of the audio output to the first user, and audio performance of the audio output to the first user” by teaching that the isolation is unnecessary when only one person is detected (see Oswald, ¶ 0073).
Regarding claim 45, Oswald anticipates: “An audio system comprising: a first set of near-field (NF) speakers proximate a first listening location” by teaching a first binaural device with speakers disposed in a headrest at a first listening position (see Oswald, ¶ 0043 and 0074, figures 1A and 5, units 106, 110, 114L-114R, 118L-118R, and P1); “a second set of NF speakers proximate a second listening location” by teaching a second binaural device with speakers disposed in a headrest at a second listening position (see Oswald, ¶ 0043 and 0074, figures 1A and 5, units 108, 112, 116L-116R, 120L-120R, and P2); “a set of sensors for detecting at least one of a position or an orientation of at least one user in the first listening location or the second listening location” by teaching headtracking devices, such as multiple 2D cameras or a combination of various sensors, for both seating positions (see Oswald, ¶ 0075 and 0087, and figure 5, units 506, 508, P1, and P2); and “a controller coupled with the first set of NF speakers, the second set of NF speakers, and the set of position sensors” by teaching a controller that receives the outputs from the headtracking devices and generates binaural signals for the first and second set of NF speakers (see Oswald, ¶ 0077-0078 and figure 5, units 504, 506, 508, 110, 112, h1-h2, e1-e2, and b1-b2), “wherein the controller is configured to control a spatialized audio output at one or both sets of the NF speakers, wherein the spatialized audio output is approximately consistently isolated during movement by a first user at the first listening location with consideration of isolation to the second listening location” by teaching that the controller generates a binaural signal based on the position signal received from the headtracking device, where the binaural signal uses a plurality of HRTFs to adjust the binaural signal for a listener to perceive a sound emanating from a specific virtual spatial point (see Oswald, ¶ 0077-0079 and 
figure 5, units 504, 506, 508, 110, 112, 114L-114R, 116L-116R, h1-h2, e1-e2, b1-b2, P1-P2, and SP1-SP2), and by teaching that the first listening position receives different audio from a second listening position and that the two or more listening positions are isolated such that the listening positions hear different spatialized audio (see Oswald, ¶ 0035, 0037, 0061, 0064-0065, 0067, and 0074, figures 1A and 5, units P1 and P2).

Regarding claim 48, see the preceding rejection with respect to claim 45 above. Oswald anticipates the “audio system of claim 45, wherein maintaining spatialization of the audio output includes maintaining independent control of an acoustic signal received at a left ear of the user in the first listening location and an acoustic signal received at a right ear of the user in the first listening location” by teaching the use of left and right HRTFs for audio output from the left and right headrest speakers (see Oswald, ¶ 0043, 0074, and 0077-0079, and figure 5, units 504, 506, 110, 114L-114R, 118L-118R, h1, e1, b1, and SP1), “wherein the audio output is spatialized such that the acoustic signals received at the left ear of the user and the right ear of the user create a perceived acoustic source from a virtual location” by teaching the left and right HRTFs used to control the virtual spatial point (Oswald, ¶ 0077-0079, and figure 5, units 504, 110, 114L-114R, 118L-118R, b1, and SP1), and “wherein the virtual location is not associated with a location of the first set of NF speakers” by teaching the virtual spatial point that is a location other than the actual location of the speakers (see Oswald, ¶ 0079 and figure 5, units 102, 118L-118R, 120L-120R, and SP1).

Regarding claim 51, see the preceding rejection with respect to claim 45 above.
Oswald anticipates “A vehicle comprising the audio system of claim 45, wherein the first listening location includes a first seat in the vehicle and wherein the second listening location includes a second seat in the vehicle” by teaching the vehicle audio system with at least two listening positions corresponding to different seats in the vehicle (see Oswald, ¶ 0038 and figures 1A-1B and 5, units 106, 108, P1, and P2).

Regarding claim 54, see the preceding rejection with respect to claim 45 above. Oswald anticipates “An entertainment system or a gaming system comprising the audio system of claim 45” by teaching the vehicle audio system, where Oswald teaches entertainment such as music playback (see Oswald, ¶ 0064).

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 6, 31, and 46 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oswald as applied to claims 1, 26, and 45 above, and further in view of Cai et al. (US 2022/0353614 A1, previously cited and hereafter Cai).

Regarding claim 6, see the preceding rejection with respect to claim 1 above.
Oswald anticipates the “system of claim 1, wherein the audio output is approximately consistently isolated during the position and/or orientation change of the first user” by teaching that the upper frequency range remains isolated during position and orientation changes of the first user through the use of the near-field speakers and the headtracking system (see Oswald, ¶ 0035, 0037, 0043, 0061, and 0077-0079, and figure 5, units 504, 506, 110, and 118L-118R). Oswald also teaches that the lower frequency range is isolated by at least 3 dB between listening positions using the perimeter speakers of the vehicle (see Oswald, ¶ 0038-0040 and figure 5, units 102, P1, and P2). However, Oswald does not clearly teach the isolation “by 5 decibels (dB) or more across the listening bandwidth during the position and/or orientation change of the first user”.

Cai teaches a loudspeaker system layout for generating low frequency audio outputs in individual sound zones (see Cai, abstract). Herein, Cai teaches cross talk cancellation to deliver full-bandwidth separation in acoustic responses between different zones in a vehicle cabin (see Cai, ¶ 0023), and/or teaches placing multiple proximity woofers closer to each listening zone (see Cai, ¶ 0029, figures 2-3, ¶ 0036-0037, and figures 8-9B).

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify Oswald with the teachings of Cai for the purpose of improving the acoustic isolation across the listening, or full, bandwidth (see Oswald, ¶ 0038-0040 in view of Cai, ¶ 0023, 0038, 0041, 0043, and figures 10-12).
Therefore, the combination of Oswald and Cai makes obvious the “system of claim 1, wherein the audio output is approximately consistently isolated during the position and/or orientation change of the first user, wherein the consistently isolated audio output is characterized by a difference in perception of the audio output at the second listening location, relative to a perception of the audio output at the first listening location, by 5 decibels (dB) or more across the listening bandwidth during the position and/or orientation change of the first user” by making it obvious to use crosstalk cancellation for the low frequency and high frequency bands to isolate the audio output based on the user’s position and/or orientation changing during output (see Oswald, ¶ 0035, 0037, 0043, 0061, and 0077-0079, and figure 5, units 504, 506, 110, and 118L-118R, where the upper frequency range remains isolated during position and orientation changes of the first user through the use of the near-field speakers, the headtracking system, and crosstalk cancellation, and see Cai, ¶ 0023, 0029, 0038, 0041, and 0043, and figures 2-3 and 10-12).

Regarding claim 31, see the preceding rejection with respect to claims 6 and 26 above.
Oswald anticipates the “vehicle audio system of claim 26”, and for the same reasons as stated above with respect to claim 6, the combination of Oswald and Cai makes obvious: “The vehicle audio system of claim 26, wherein the audio output is approximately consistently isolated during the position and/or orientation change of the first user, wherein the consistently isolated audio output is characterized by a difference in perception of the audio output at the second listening location, relative to a perception of the audio output at the first listening location, by 5 decibels (dB) or more across the listening bandwidth during the position and/or orientation change of the first user” by making it obvious to use crosstalk cancellation for the low frequency and high frequency bands to isolate the audio output based on the user’s position and/or orientation changing during output (see Oswald, ¶ 0035, 0037, 0043, 0061, and 0077-0079, and figure 5, units 504, 506, 110, and 118L-118R, where the upper frequency range remains isolated during position and orientation changes of the first user through the use of the near-field speakers, the headtracking system, and crosstalk cancellation, and see Cai, ¶ 0023, 0029, 0038, 0041, and 0043, and figures 2-3 and 10-12).

Regarding claim 46, see the preceding rejection with respect to claims 6 and 45 above.
Oswald anticipates the “audio system of claim 45”, and for the same reasons as stated above with respect to claim 6, the combination of Oswald and Cai makes obvious: “The audio system of claim 45, wherein the consistently isolated audio output is characterized by a difference in perception of the audio output at the second listening location, relative to a perception of the audio output at the first listening location, by 5 decibels (dB) or more across the listening bandwidth during the position and/or orientation change of the user in the first listening location” by making it obvious to use crosstalk cancellation for the low frequency and high frequency bands to isolate the audio output based on the user’s position and/or orientation changing during output (see Oswald, ¶ 0035, 0037, 0043, 0061, and 0077-0079, and figure 5, units 504, 506, 110, and 118L-118R, where the upper frequency range remains isolated during position and orientation changes of the first user through the use of the near-field speakers, the headtracking system, and crosstalk cancellation, and see Cai, ¶ 0023, 0029, 0038, 0041, and 0043, and figures 2-3 and 10-12), and “wherein the controller is configured to maintain spatialization of the audio output based on detecting at least one of a position change or an orientation change of the user in the first listening location” by teaching that the upper frequency range remains isolated during position and orientation changes of the first user through the use of the near-field speakers and the headtracking system (see Oswald, ¶ 0035, 0037, 0043, 0061, and 0077-0079, and figure 5, units 504, 506, 110, and 118L-118R, and see Cai, ¶ 0023 and 0029 and figures 2-3).

Claim(s) 12 and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oswald as applied to claims 1 and 26 above.

Regarding claim 12, see the preceding rejection with respect to claim 1 above.
Oswald anticipates the “system of claim 1, wherein the position and/or orientation change of the first user includes at least one of: head movement or rotation, body movement, or seat movement …” because Oswald teaches the headtracking device is used to update azimuth angles for selecting appropriate HRTFs for localizing the virtual sound points (see Oswald, ¶ 0079-0080). Oswald makes obvious the additional feature “wherein the controller maintains the spatialization of the audio output while the head rotation of the first user deviates from center by up to approximately 40 degrees” because the first user is sitting in a driver’s seat within a vehicle, such that it is obvious that the first user will be using the system while turning their head to look left and right when operating the vehicle.

Regarding claim 36, see the preceding rejection with respect to claim 26 above. Oswald anticipates the “vehicle audio system of claim 26, wherein a set of additional speakers outside of the near field provide a low frequency portion of the audio output, wherein the position and/or orientation change of the first user includes at least one of: head movement or rotation, body movement, or seat movement, and wherein the controller maintains the spatialization of the audio output while the head rotation of the first user deviates from center by up to approximately 40 degrees” because Oswald teaches the headtracking device is used to update azimuth angles for selecting appropriate HRTFs for localizing the virtual sound points (see Oswald, ¶ 0079-0080), and makes this feature obvious because the first user is sitting in a driver’s seat within a vehicle, such that it is obvious that the first user will be using the system while turning their head to look left and right when operating the vehicle.
Allowable Subject Matter

Claims 21 and 43 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art of record does not appear to teach or reasonably suggest the features “to control a stability of the audio output such that a perceived acoustic source from a virtual location is fixed relative to the system throughout the position change and/or orientation change of the user”.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel R Sellers whose telephone number is (571)272-7528. The examiner can normally be reached Mon - Fri 10:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan S Tsang, can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Daniel R Sellers/Primary Examiner, Art Unit 2694
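The anticipation mapping above turns on a head-tracked binaural pipeline: a tracker reports head orientation, the controller re-selects a left/right HRTF pair so the virtual source stays put, and each ear gets its own filtered signal. A toy sketch of that idea follows; the HRIR table, function names, and angles are invented for illustration and are not taken from Oswald, Cai, or the application itself:

```python
# Illustrative sketch only: head-tracked HRTF selection and per-ear
# rendering, as the rejection describes in general terms. The tiny
# two-tap "HRIR" filters below are made up; real systems interpolate
# dense measured HRTF sets per frequency.
import numpy as np

# Hypothetical table: azimuth (degrees) -> (left-ear, right-ear) impulse responses.
HRIR = {
    0:   (np.array([1.0, 0.5]), np.array([1.0, 0.5])),
    30:  (np.array([0.6, 0.3]), np.array([1.0, 0.7])),
    -30: (np.array([1.0, 0.7]), np.array([0.6, 0.3])),
}

def nearest_azimuth(azimuth_deg: float) -> int:
    """Pick the closest azimuth for which a measured pair exists."""
    return min(HRIR, key=lambda a: abs(a - azimuth_deg))

def render_binaural(mono: np.ndarray, head_azimuth_deg: float,
                    source_azimuth_deg: float = 0.0):
    """Spatialize a mono source at a fixed virtual point.

    As the head turns, the source's azimuth relative to the head shifts
    the opposite way, so the HRTF pair is re-selected on each tracker
    update to keep the perceived location stable.
    """
    relative = source_azimuth_deg - head_azimuth_deg
    h_left, h_right = HRIR[nearest_azimuth(relative)]
    return np.convolve(mono, h_left), np.convolve(mono, h_right)

# Head turned 30 degrees right: the source now sits 30 degrees to the left.
left, right = render_binaural(np.array([1.0, 0.0, 0.0]), head_azimuth_deg=30.0)
```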
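The §103 combination leans on Cai's crosstalk cancellation for inter-zone isolation. The core idea can be sketched at a single frequency as inverting the 2x2 acoustic transfer matrix between two speakers and two zones so each zone hears only its own program; the matrix values below are invented for illustration, and a real system does this per frequency bin with regularization:

```python
# Minimal single-frequency sketch of crosstalk cancellation between two
# listening zones. All numbers are illustrative, not from Cai.
import numpy as np

# H[i, j]: complex acoustic gain from speaker j to zone i at one frequency.
H = np.array([[1.0 + 0.0j, 0.4 + 0.1j],
              [0.4 - 0.1j, 1.0 + 0.0j]])

# Target: zone 1 receives the program at unit gain, zone 2 hears silence.
target = np.array([1.0 + 0.0j, 0.0 + 0.0j])

drive = np.linalg.solve(H, target)  # speaker drive signals that cancel crosstalk
received = H @ drive                # what each zone actually hears
```

Driving the speakers with `H`-inverted signals leaves zone 2 at (numerically) zero, which is the mechanism the rejection credits with widening the isolation across the full listening bandwidth.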

Prosecution Timeline

Apr 30, 2024
Application Filed
Nov 15, 2025
Non-Final Rejection — §102, §103
Dec 01, 2025
Interview Requested
Dec 08, 2025
Applicant Interview (Telephonic)
Dec 08, 2025
Examiner Interview Summary
Dec 19, 2025
Response Filed
Apr 04, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604151
COMPUTER SYSTEM FOR PROCESSING AUDIO CONTENT AND METHOD THEREOF
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12562144
ACOUSTIC ECHO CANCELLATION UNIT
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12556879
SHARED POINT OF VIEW
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12556190
Startup Calibration and Digital Temperature Compensation for an Open-Loop VCO Based ADC Architecture
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12532139
AUDIO SIGNAL PROCESSING METHOD AND AUDIO SIGNAL PROCESSING APPARATUS
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 67%
With Interview: 84% (+16.9%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
