Prosecution Insights
Last updated: April 19, 2026
Application No. 18/395,901

DISTRIBUTION SYSTEM, SOUND OUTPUTTING METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

Status: Non-Final OA (§102)
Filed: Dec 26, 2023
Examiner: HUBER, PAUL W
Art Unit: 2691
Tech Center: 2600 (Communications)
Assignee: Yamaha Corporation
OA Round: 1 (Non-Final)

Outlook: Favorable
Grant Probability: 85%
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 85% (929 granted / 1091 resolved), +23.2% vs TC avg; above average.
Interview Lift: +9.5% among resolved cases with an interview (moderate, roughly +10%).
Avg Prosecution: 2y 1m (fast prosecutor); 36 applications currently pending.
Total Applications: 1127 across all art units (career history).

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 23.3% (-16.7% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 1091 resolved cases.
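The per-statute figures above are simple differences between this examiner's rejection rates and a Tech Center baseline. A minimal sketch of that arithmetic (the 40% baseline is back-solved from the displayed deltas, not taken from USPTO data, so treat it as an assumption):

```python
# Examiner's share of office actions citing each statute (from the panel above).
examiner_rates = {"101": 0.035, "103": 0.441, "102": 0.233, "112": 0.090}

# Hypothetical Tech Center baseline: each displayed delta equals rate - 0.400,
# so a flat 40% baseline reproduces all four "vs TC avg" figures.
TC_AVG = 0.400

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```

Running this reproduces the four lines of the panel, e.g. `§101: 3.5% (-36.5% vs TC avg)`.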

Office Action

§102
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-9, and 13-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Silverstein (US 12,108,121, as supported by U.S. Provisional Application No. 63/195,577).

Regarding claims 1, 7, and 13, Silverstein discloses a distribution system (see figs. 1, 3, 5, and 6, for example), a sound outputting method performed by a computer used in a distribution system, and a non-transitory computer-readable recording medium storing a program that, when executed by at least one computer used in a distribution system, cause the at least one computer to perform the method (see fig. 7, and col. 19-28), comprising: receiving a first sound signal (e.g., Guitar Track) and a second sound signal (e.g., Drums Track) that are related to a performance sound (e.g., "Facebook Live concerts") to be distributed (see col.
9, lines 55-61, regarding “the component audio files … can be streamed from an online service. For example, audio player 110 can be a Bluetooth receiver receiving streaming data from an associated mobile phone, and can process the streaming audio files in real time. The audio files can be streamed recordings (for example, from Spotify) or live streams (for example, from Facebook Live concerts)”); receive meta-data indicating a type of the first sound signal and a type of the second sound signal (see col. 7, lines 32-41, regarding “time series data represents important temporal characteristics of the recording, such as whether a drummer is playing ahead of, on, or behind the beat. Both the audio data and time series data can be modified during execution by metadata included with the component audio file, … For example, metadata instructions could involve playing a WAV guitar track in the style of a certain guitarist, which can change the timbre, effects, rhythmic placement, or other elements of music or sound in the audio file”); receive sound environment data indicating a sound characteristic of a sound appliance (see col. 7, lines 44-47, regarding “audio player 110 can determine audio profiles of the plurality of the component audio files and frequency profiles of the audio generators… A frequency profile of sound and audio associated with an audio generator can contain information about which frequencies the audio generator can generate, as well as information about amplitude and distribution of frequencies”); and based on a combination of the type of the first sound signal and the sound characteristic or a combination of the type of the second sound signal and the sound characteristic, control the first sound signal or the second sound signal to be output to the sound appliance (see col. 
8, lines 5-10, regarding "by creating a routing scheme that takes into account the frequency profiles of the audio generators and the audio profiles of the component audio files, the sound coming out of the audio generators will better approximate whatever sound the routing scheme is trying to create, such as the sound of a live band").

Regarding claims 2, 8, and 14, the meta-data comprises: first meta-data assigned to the first sound signal and indicating whether the first sound signal is a main sound (e.g., high/mid frequency sounds such as vocals) or a secondary sound (e.g., bass sounds such as drums) in the performance; and second meta-data assigned to the second sound signal and indicating whether the second sound signal is the main sound or the secondary sound in the performance sound. The sound appliance comprises a first speaker (e.g., mid and/or high frequency range speaker) and a second speaker (e.g., low range speaker such as a subwoofer) that are provided at a viewing place to which the performance sound is to be distributed. The sound environment data comprise information indicating a sound range of the first speaker and information indicating a sound range of the second speaker. A device control circuit is configured to refer to the meta-data to: cause one of the first speaker and the second speaker that is wider in the sound range to output the first sound signal when the first meta-data indicates that the first sound signal is the main sound; and cause the other of the first speaker and the second speaker to output the second sound signal when the second meta-data indicates that the second sound signal is the secondary sound. See col. 9, lines 2-26, regarding "the output can be ported in accordance with the determined routing scheme to a surround sound system with different audio generators possessing different audio capabilities and frequency profiles.
The routing scheme can be modified by a routing scheme modifier contained with the audio player 110, audio interface 120 or another device. These modifications can be automated, manual or a combination of both. For example, consider a routing scheme originally routing a full orchestra to 2 stereo speakers, but subsequently being connected to a surround sound system. The routing scheme can re-route the various component audio files (violin, viola, trumpet, trombone, clarinet, flute timpani, etc.) to the new surround system based on the audio profiles of the component audio files and of the surround sound system. The routing scheme can dynamically and in real time be modified to account for different instruments in the mix of component audio files, functionality of audio generators, or other factors." Note that when routing a full orchestra to 2 stereo speakers and a subwoofer speaker, the mid/high frequency audio tracks will be routed to each of the 2 stereo speakers and the low frequency audio tracks will be routed to the subwoofer speaker in accordance with the teachings of the invention of Silverstein. See also, col. 12, lines 28-31, regarding "if only a guitar amplifier and an all-purpose speaker are available, the routing scheme can route the guitar and bass tracks to the guitar amplifier and the drums and vocal tracks to the all-purpose speaker".

Regarding claims 3, 9, and 15, the meta-data comprises information indicating a viewer-registered priority of the type of the first sound signal and the type of the second sound signal. The sound appliance comprises a first speaker provided at a viewing place to which the performance sound is to be distributed. A device control circuit is further configured to cause the first speaker to output the first sound signal or the second sound signal based on the viewer-registered priority. See col. 14, lines 19-29, regarding "metadata associated with the component audio files can be used to alter them.
For example, imagine the guitar solo in Led Zeppelin's 'Stairway To Heaven,' but user 170 wants to hear what it would sound like live as played by David Gilmore of Pink Floyd. Metadata associated with the guitar solo file can state 'play in the style of David Gilmore,' which can be interpreted by audio player 110 and the guitar solo file can be altered accordingly. Various musical aspects, such as guitar tone, picking intensity, vibrato, tremolo, pitch, duration, timing, and even note selection can be altered. These alterations can be made by user 170…"

Claims 1, 7, and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fincham (US 2015/0042881).

Fincham discloses a distribution system (see fig. 10, for example), a sound outputting method performed by a computer used in a distribution system, and a non-transitory computer-readable recording medium storing a program that, when executed by at least one computer used in a distribution system, cause the at least one computer to perform the method (see para. 0074), comprising: receiving a first sound signal (e.g., rock-type of music) and a second sound signal (e.g., country-type of music) that are related to a performance sound (e.g., music broadcast signal) to be distributed (see para. 0027, regarding "the input source device 105 … may receive a broadcast or other 'real-time' signal such as a cable or satellite TV signal, … or a live feed such as from a microphone, camera, musical instrument, and the like"; see also, para. 0059, regarding "different frequency response profiles may be generated for different types of music (e.g., rock, pop, classical, hip-hop, country, etc.), and a modification profile may be created for each type of music to enhance its quality"); receive meta-data indicating a type of the first sound signal and a type of the second sound signal (see para.
0061, regarding "the audio (or video) content may also include embedded meta-data to facilitate identification of the … type of data (e.g., … type of music, etc.)"); receive sound environment data indicating a sound characteristic of a sound appliance (see para. 0075, regarding "a streaming audio or video transmission, for example, may be sent from a broadcast station to the mobile device 1000 that is particularly tailored to the individual properties of the mobile device 1000, improving the quality of the material upon playback. The type of information that may be conveyed to the broadcast control center or station may include, for instance, information about the type or brand of earphones, … the type of audio speaker 1046, the audio profile characteristics of the unit, and so on"); and based on a combination of the type of the first sound signal and the sound characteristic or a combination of the type of the second sound signal and the sound characteristic, control the first sound signal or the second sound signal to be output to the sound appliance (see para. 0059, regarding "different frequency response profiles may be generated for different types of music …, and a modification profile may be created for each type of music to enhance its quality…"; see also, para. 0048, regarding "selection logic 425 upon recognizing a particular brand of input device and particular brand of output device, may select a modification profile that enhances the sound for that particular combination of devices. …The selection logic may select a certain modification profile for one type of earphone headset, but a different modification profile for a different type of earphone headset with different characteristics").

Claims 4-6, 10-12, and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL W HUBER whose telephone number is (571) 272-7588. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.

/PAUL W HUBER/
Primary Examiner, Art Unit 2691
pwh, October 10, 2025
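Both references anticipate the claims through the same basic idea: route each component audio track to the output device whose supported frequency range best matches the track. A toy sketch of that matching logic (illustrative only, not code from Silverstein, Fincham, or the application; all names and frequency ranges are invented for the example):

```python
# Toy frequency-profile routing: send each track to the speaker whose
# supported range has the greatest overlap with the track's range.
# Ranges are (low_hz, high_hz) tuples; all values here are made up.
def route_tracks(tracks, speakers):
    """tracks / speakers: {name: (low_hz, high_hz)}.
    Returns {track_name: speaker_name} chosen by maximum range overlap."""
    def overlap(a, b):
        # Width of the intersection of two intervals (0 if disjoint).
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return {
        name: max(speakers, key=lambda s: overlap(rng, speakers[s]))
        for name, rng in tracks.items()
    }

tracks = {"vocals": (200, 8000), "guitar": (100, 5000), "drums": (30, 120)}
speakers = {"stereo_main": (80, 18000), "subwoofer": (20, 150)}
# The low-frequency drums track lands on the subwoofer; vocals and
# guitar land on the main speakers.
print(route_tracks(tracks, speakers))
```

This mirrors the cited passages (mid/high tracks to the stereo speakers, low tracks to the subwoofer) but deliberately ignores the meta-data and priority limitations that distinguish the dependent claims.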

Prosecution Timeline

Dec 26, 2023: Application Filed
Oct 10, 2025: Non-Final Rejection under §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604150: Method and System For Spatial Audio Processing Using Multiple Orders Of Ambisonics (2y 5m to grant; granted Apr 14, 2026)
Patent 12593189: METHOD OF GENERATING VIBRATION FEEDBACK SIGNAL, ELECTRONIC DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Mar 31, 2026)
Patent 12593159: MAGNETIC EARPHONES HOLDER (2y 5m to grant; granted Mar 31, 2026)
Patent 12587803: INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD (2y 5m to grant; granted Mar 24, 2026)
Patent 12587804: LOCATION-AWARE NEURAL AUDIO PROCESSING IN CONTENT GENERATION SYSTEMS AND APPLICATIONS (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 95% (+9.5%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 1091 resolved cases by this examiner. Grant probability derived from career allow rate.
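A minimal sketch of how these headline figures follow from the career record. The base rate and lift come from this page; the additive combination for the interview-adjusted figure is an assumption that happens to reproduce the displayed 95%:

```python
# Figures from the examiner's career record shown on this page.
granted, resolved = 929, 1091
interview_lift = 0.095  # +9.5% among resolved cases with an interview

allow_rate = granted / resolved  # ~0.8515, displayed as "85%"
# Assumed combination rule: simple addition, capped at 100%.
with_interview = min(allow_rate + interview_lift, 1.0)  # ~0.9465, displayed as "95%"

print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
```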
