Prosecution Insights
Last updated: April 19, 2026
Application No. 18/599,596

SYSTEM FOR OUTPUTTING AUDIO FOR A USER, AND A METHOD THEREOF

Final Rejection — §102, §103
Filed
Mar 08, 2024
Examiner
KRZYSTAN, ALEXANDER J
Art Unit
2694
Tech Center
2600 — Communications
Assignee
Sony Interactive Entertainment Inc.
OA Round
2 (Final)
81%
Grant Probability
Favorable
3-4
OA Rounds
3y 1m
To Grant
88%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
913 granted / 1121 resolved
+19.4% vs TC avg
Moderate +7% lift
+6.9%
Interview Lift
resolved cases with interview
Typical timeline
3y 1m
Avg Prosecution
38 currently pending
Career history
1159
Total Applications
across all art units

Statute-Specific Performance

§101
2.7%
-37.3% vs TC avg
§103
37.1%
-2.9% vs TC avg
§102
24.3%
-15.7% vs TC avg
§112
21.0%
-19.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 1121 resolved cases
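Because each statute's rate is shown together with its delta against the Tech Center average, the implied TC baseline can be recovered by simple subtraction. A minimal sketch of that arithmetic, restating only the figures displayed in the panel above:

```python
# Figures restated from the "Statute-Specific Performance" panel:
# examiner rejection rate (%) and delta vs the Tech Center average (points).
rates = {"101": 2.7, "103": 37.1, "102": 24.3, "112": 21.0}
deltas = {"101": -37.3, "103": -2.9, "102": -15.7, "112": -19.0}

# Implied TC average = examiner rate minus the displayed delta.
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute implies the same 40.0% TC baseline
```

Note that all four deltas resolve to the same 40.0% baseline, which is consistent with a single per-Tech-Center average being used across statutes.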

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-9 and 11-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Goo et al. (US 2020/0366990 A1).

As per claim 1, Goo discloses a system for outputting audio for a user, comprising: processing circuitry (110, fig. 2) configured to generate, based at least in part on an audio file (the audio data within the processor from the source data device, para. 64), a plurality of audio channels (the separate signals to each speaker 140, 150 when the device is playing audio); and three or more transducers (140, 150, 180) comprised within one or more audio output apparatuses wearable by the user (the headset/headphones are wearable), wherein the three or more transducers comprise at least one earphone/headphone (the non-adjacent speakers) and at least one bone conduction headphone (part of the adjacent speakers, para. 141), wherein the processing circuitry is further configured to dynamically allocate the plurality of audio channels among the three or more transducers in real time during operation (para. 223; the channel-based EQ based on head tracking is dynamic allocation in real time during operation), wherein each earphone/headphone is operable to emit one or more sound signals in dependence upon a first subset of the audio channels (each speaker operates/emits audio based on respective audio channels carrying respective audio signals for each speaker, fig. 2), and wherein each bone conduction headphone is operable to vibrate responsive to a second subset of the audio channels (each speaker operates/emits audio/vibrates based on respective audio channels carrying respective audio signals for each signal, fig. 2).

As per claim 2, the system of claim 1, wherein the processing circuitry is configured to generate the plurality of audio channels in dependence upon one or more audio rendering parameters associated with the audio file (any of the parameters in para. 223).

As per claim 3, the system of claim 2, wherein the one or more audio rendering parameters comprise one or more selected from the list consisting of: one or more HRTFs, mixing EQ parameters, channel indicators (the correction parameters include EQ per para. 223).

As per claim 4, the system of claim 1, wherein the processing circuitry is configured to synchronize the plurality of audio channels (the channels are synchronized per para. 228; also, the audio plays stereophonic sound per para. 32, which requires synchronization between each audio channel; further, the signals must all be synchronized in order to interface with the digital processor in fig. 2).

As per claim 5, the system of claim 1, wherein at least one earphone/headphone is noise cancelling (the device uses noise cancelling, para. 76).

As per claim 6, the system of claim 1, wherein each audio output apparatus comprises at least one earphone/headphone and at least one bone conduction headphone (per the claim 1 rejection, there are at least one earphone and one bone conduction headphone).

As per claim 7, the system of claim 1, wherein the processing circuitry is comprised within one or more of the audio output apparatuses (the elements of fig. 2, including the processor, are in the device/part of the same apparatus, such as shown in fig. 1).

As per claim 8, the system of claim 7, comprising an entertainment device configured to transmit the audio data to the processing circuitry (MP3 player, para. 64).

As per claim 9, the system of claim 1, wherein the processing circuitry is comprised within an entertainment device, wherein the entertainment device is configured to transmit the plurality of audio channels to the one or more audio output apparatuses (para. 64, source data).

As per claim 11, the system of claim 1, wherein at least one of the audio output apparatuses is comprised within a head mounted display or a pair of glasses (glasses, para. 120).

As per claim 12, a method of outputting audio for a user, comprising the steps of: generating, based at least in part on an audio file, a plurality of audio channels; distributing the plurality of audio channels among three or more transducers comprised within one or more audio output apparatuses wearable by the user, wherein the three or more transducers comprise at least one earphone/headphone and at least one bone conduction headphone, wherein the processing circuitry is further configured to dynamically allocate the plurality of audio channels among the three or more transducers in real time during operation (para. 223; the channel-based EQ based on head tracking is dynamic allocation in real time during operation); emitting, in dependence upon a first subset of the audio channels, one or more sound signals from each earphone/headphone; and vibrating, responsive to a second subset of the audio channels, each bone conduction headphone (per the claim 1 rejection).

As per claim 13, the method of claim 12, wherein the generating step comprises generating the plurality of audio channels in dependence upon one or more audio rendering parameters associated with the audio file (per the claim 2 rejection).

As per claim 14, a non-transitory machine-readable storage medium which stores computer software which, when executed by a computer, causes the computer to perform a method of outputting audio for a user, comprising the steps of: generating, based at least in part on an audio file, a plurality of audio channels; distributing the plurality of audio channels among three or more transducers comprised within one or more audio output apparatuses wearable by the user, wherein the three or more transducers comprise at least one earphone/headphone and at least one bone conduction headphone, wherein the processing circuitry is further configured to dynamically allocate the plurality of audio channels among the three or more transducers in real time during operation (para. 223; the channel-based EQ based on head tracking is dynamic allocation in real time during operation); emitting, in dependence upon a first subset of the audio channels, one or more sound signals from each earphone/headphone; and vibrating, responsive to a second subset of the audio channels, each bone conduction headphone (per the claim 1 rejection).

As per claim 15, the method of claim 12, wherein at least one of the audio output apparatuses is comprised within a head mounted display or a pair of glasses (glasses, para. 60).

As per claim 16, the method of claim 12, wherein at least one earphone/headphone is noise cancelling (via the function in para. 76).

As per claim 17, the method of claim 12, wherein each audio output apparatus comprises at least one earphone/headphone and at least one bone conduction headphone (as cited in the above claim rejections, as implemented in headphones with a left and right apparatus each comprising multiple speakers/headphones).

As per claim 18, the non-transitory machine-readable storage medium of claim 14, wherein at least one of the audio output apparatuses is comprised within a head mounted display or a pair of glasses (per the claim 15 rejection).

As per claim 19, the non-transitory machine-readable storage medium of claim 14, wherein at least one earphone/headphone is noise cancelling (per the claim 16 rejection).

As per claim 20, the non-transitory machine-readable storage medium of claim 14, wherein each audio output apparatus comprises at least one earphone/headphone and at least one bone conduction headphone (per the claim 17 rejection).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Goo et al. (US 2020/0366990 A1).

As per claim 10, the system of claim 8, wherein the entertainment device is one selected from the list consisting of: mobile phone (Goo: smartphone, para. 64). Goo does not disclose the other items on the list; the examiner takes official notice that it is well known that all of those devices could be used in a network to implement the cited functions.

Response to Arguments

The submitted arguments have been considered but are moot in view of the new grounds of rejection. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER KRZYSTAN, whose telephone number is 571-272-7498 and whose email address is alexander.krzystan@uspto.gov. The examiner can usually be reached M-F, 7:30-4:00 EST. If attempts to reach the examiner by telephone or email are unsuccessful, the examiner's supervisor, Fan Tsang, can be reached at (571) 272-7547. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications.

/ALEXANDER KRZYSTAN/
Primary Examiner, Art Unit 2653

Examiner: Alexander Krzystan
April 4, 2026

Prosecution Timeline

Mar 08, 2024
Application Filed
Nov 13, 2025
Non-Final Rejection — §102, §103
Jan 26, 2026
Interview Requested
Feb 18, 2026
Examiner Interview Summary
Feb 18, 2026
Applicant Interview (Telephonic)
Feb 20, 2026
Response Filed
Apr 04, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598440
RENDERING OF OCCLUDED AUDIO ELEMENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12593170
SWITCHING METHOD FOR AUDIO OUTPUT CHANNEL, AND DISPLAY DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12573410
DECODER, ENCODER, AND METHOD FOR INFORMED LOUDNESS ESTIMATION IN OBJECT-BASED AUDIO CODING SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Patent 12574675
Acoustic Device and Method
2y 5m to grant Granted Mar 10, 2026
Patent 12541554
TRANSCRIPT AGGREGATON FOR NON-LINEAR EDITORS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
88%
With Interview (+6.9%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 1121 resolved cases by this examiner. Grant probability derived from career allow rate.
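The headline projections follow directly from the career counts shown in the Examiner Intelligence panel (913 granted of 1121 resolved) plus the stated +6.9-point interview lift. A minimal sketch of that arithmetic:

```python
# Career counts from the Examiner Intelligence panel above.
granted, resolved = 913, 1121
interview_lift = 6.9  # percentage points, as stated in the panel

allow_rate = 100 * granted / resolved      # career allow rate, in %
with_interview = allow_rate + interview_lift

print(round(allow_rate))      # 81  (matches "Grant Probability")
print(round(with_interview))  # 88  (matches "With Interview")
```

This confirms the panel's figures are internally consistent: the 81% grant probability is simply the rounded career allow rate, and the 88% figure is that rate plus the interview lift.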
