Prosecution Insights
Last updated: April 19, 2026
Application No. 18/678,022

PORTABLE SPEAKER WITH AUDIO MONITOR

Status: Non-Final OA (§103)
Filed: May 30, 2024
Examiner: LEE, SHIN
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: BOSE CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 4 total applications across all art units, 4 currently pending

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 55.6% (+15.6% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 0 resolved cases
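As a consistency check on the figures above, every per-statute delta matches a single Tech Center average estimate of 40.0%. The sketch below assumes that methodology (statute allow rate minus a flat TC average); the page itself does not state how the deltas are computed, so the `TC_AVG` constant is an inferred assumption.

```python
# Assumed methodology: each "vs TC avg" delta is the statute allow rate
# minus a single flat Tech Center average estimate (inferred to be 40.0%).
TC_AVG = 40.0  # percent; assumption, not stated on the page

statute_allow_rates = {
    "§101": 11.1,
    "§103": 55.6,
    "§102": 11.1,
    "§112": 22.2,
}

for statute, rate in statute_allow_rates.items():
    delta = rate - TC_AVG
    # Reproduces the dashboard's delta lines, e.g. "§103: 55.6% (+15.6% vs TC avg)"
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

All four deltas shown on the page fall out of the same 40.0% estimate, which suggests the tool applies one TC-wide baseline rather than per-statute baselines.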

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements filed on 05/30/2024 and 10/30/2024 are in compliance with 37 CFR 1.97 and have been considered by the Examiner.

Specification

The title of the invention is not descriptive. Applicant is suggested to amend to a more descriptive title, such as "PORTABLE SPEAKER WITH AUDIO MONITOR AND OUTPUT NETWORK LINK". There is no description in the specification of elements C' and C" shown in FIG. 1A, element C1 shown in FIG. 2A(1) and FIG. 2A(2), element C2 shown in FIG. 2B(1) and FIG. 2B(2), and element C3 shown in FIG. 2C(1) and FIG. 2C(2). Applicant is suggested to amend the specification to include a corresponding description of these elements.

Claim Objections

Claims 10 and 12-15 are objected to because of the following informalities: In line 3 of claim 10, the term "at the interface" is recited. Applicant is suggested to amend "at the interface" to "at the user interface" for clarity and consistency with the antecedent basis established in line 1, "a user interface". In line 1 of claim 12 and lines 2-3 of claim 13, the term "the second set of audio signals" is recited. In view of line 12 of claim 1, Applicant clearly intends to mean "the second set of audio output signals". It is better to be clear and consistent, to distinguish between input and output signals, and to amend to "the second set of audio output signals". Dependent claims 14 and 15 inherit the same problem from claims 12 and 13 and are objected to for the same reason.

Claim 9 is objected to as being dependent upon a rejected base claim 1, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 10 and 13-15 are further objected to as being dependent upon a rejected base claim 1, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the issues set forth in the claim objections above are overcome.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 11, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Silfvast et al. (US Pub. No. 2014/0064519 A1) in view of Bose ("S1 Pro Multi-position PA System Owner's Guide").

Regarding claim 1, Silfvast teaches an audio system comprising:

at least one electro-acoustic transducer for providing an acoustic output (speakers are disclosed as elements 338 and 340 in Fig. 3 for providing an acoustic output; also see [0039]: "…loudspeakers 338 and 340 may be implemented as separate DSSN mixer end nodes, each producing a single mix output to drive its local amplifier and transducer elements, …");

an audio input for receiving one or more audio input signals (source audio signals are audio input signals, see "an audio input module for receiving one or more source audio signals", pg. 12, claim 1; also see element 324, Fig. 3 for an audio input receiving "one" audio input signal; also see the additional users in User Group C for receiving "more audio input signals", element 306, Fig. 3);

an audio output for providing one or more audio output signals (see [0011]: "…an audio output module for outputting one or more audio mixes;…");

a communication module for providing a network communication link (see [0011]: "…a network connection module configured to send and receive audio signals over a network in substantially real-time; …");

a processor configured to receive the audio input signals (see element 602, Fig. 6, and [0063]: "…Node 600 includes CPU 602…The CPU exchanges control commands and data with input/output 604, …");

(a processor configured to) process the audio input signals to provide the audio output signals (a submix processing unit 610 is disclosed to process the audio input signals, thereby providing the audio output signals, as disclosed in the pathways from Input Channels, element 608, to Output Channels, element 612, in Fig. 6; also see [0063]: "…The figure also illustrates control pathways from CPU 602 to input channel processing 608, submix processing 610, and output channel processing 612."; Silfvast further discloses that the CPU 602, the submix processing unit 610, and other components can be combined into a single processor, so that a single processor can process the audio input signals to provide the audio output signals, see [0079]: "…The various components of a DSSN architecture-based end node may be implemented using various special purpose and/or customized processors, or by using general purpose processors or by using a combination of these… In some embodiments, all of the input processing, output processing, and mixing may be implemented on a single device having a single processor…"),

wherein the processor is configured, from a common set of audio input signals, to provide (the CPU 602 controls the configuration of the audio processing of the end node, and thereby controls "a common set" of audio signals from input channels 608, see [0063]: "…Host CPU 602 may also issue commands for configuring the audio processing distributed system. The figure also illustrates control pathways from CPU 602 to input channel processing 608, submix processing 610, and output channel processing 612."),

a first set of audio output signals to the electro-acoustic transducer (audio output signal F' in Node 308, Fig. 3, or audio output signals C', D', E' in Node 306, Fig. 3, is passed to an electro-acoustic transducer, e.g. speaker 338 or 340, Fig. 3, via a local mixer, see [0038]: "End node 308 serves a front of house mix operator as well as the audience to which he delivers the main "house mix" via loudspeakers 338 and 340…"),

such that the first set of audio output signals act as a monitor of the one or more audio input signals (the output signals serve as a monitor, see [0037]: "…Thus each end node, using its local mixer, is capable of producing independent mixes for local delivery to its user or users for listening. In addition, the self-monitoring path(s) for each end node can be optimized for lowest possible latency since the signal chain from audio Source to input process, to local mixer, to output process, to monitor output signal, is contained within the local end node and does not traverse the network."), and

a second set of audio output signals via the network communication link (audio output signal F' in Node 308, Fig. 3, or audio output signals C', D', E' in Node 306, Fig. 3, is passed to a network interface in Fig. 3; also see the same pathway in node 302, [0036]: "…The end node performs input processing at input processing module 324, and passes the processed signal (A') to local mixer 326 and network interface 328…").

The embodiments shown in Fig. 3 of Silfvast do not teach that the components of the audio system set forth above, i.e. the at least one electro-acoustic transducer (e.g. speaker 338 or 340 in Fig. 3), the audio input, the audio output, the communication module, and the processor, are all integrated in one portable speaker housing. However, paragraph [0039] of Silfvast explicitly suggests different embodiments, wherein the components of one embodiment, i.e. end node 308 as shown in Fig. 3, can be integrated into speaker 334 or 380 (see [0039]: "…the large mixer control panel 342 could communicate with designated end nodes, such as those embedded in loudspeakers…a configuration having a DSSN mixer end node inside each loudspeaker allows each loudspeaker to generate its own unique mix based on its location, acoustical environment, and proximity to certain listeners..."), based on the needs of certain listeners in a large concert venue (see [0039]: "…to cover a large concert venue, a configuration having a DSSN mixer end node inside each loudspeaker allows each loudspeaker to generate its own unique mix based on its location, acoustical environment, and proximity to certain listeners.."). Clearly such a speaker can be portable, since it needs to be transported/moved to different locations, e.g. a concert venue (see [0039]: "…to cover a large concert venue…"), or based on the needs of "certain listeners" (see [0039]).

Bose also teaches that such practice is well known in the art, i.e. integrating many components into one portable housing (see "…with a 3-channel mixer, reverb, Bluetooth® streaming and ToneMatch® processing onboard, it's always ready to be your go-anywhere music system for nearly any occasion.", pg. 1, section "Product Overview"; also see pg. 7, section "Connections and Controls" and the only figure).

At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to have further improved the speaker as taught by Silfvast and Bose to yield a "lightweight" and "rugged" portable speaker (see "Lightweight & Portable: …the rugged S1 Pro is designed to transport effortlessly from the car to the event.", pg. 4, section "Features and Benefits"). One of ordinary skill in the art would have been motivated to do so for transporting "effortlessly from the car to the event" (see "…the rugged S1 Pro is designed to transport effortlessly from the car to the event.", pg. 4, section "Features and Benefits").
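The end-node signal flow the claim 1 analysis attributes to Silfvast Fig. 3 (one common set of processed inputs fanned out to a local monitor mix and, in parallel, to the network interface) can be sketched as follows. This is an illustrative Python sketch only; the function names and the unity-gain "processing" are assumptions, not drawn from either reference.

```python
def process_input(samples):
    """Stand-in for per-channel input processing (cf. module 324, Fig. 3).

    Unity gain here is a placeholder assumption; the reference describes
    configurable per-channel processing.
    """
    return [s * 1.0 for s in samples]

def end_node(common_inputs):
    """Fan one common set of processed inputs out to both output sets."""
    processed = [process_input(ch) for ch in common_inputs]
    # First set: a local monitor mix, summed per frame and destined for
    # the electro-acoustic transducer (the low-latency self-monitor path).
    monitor_mix = [sum(frame) for frame in zip(*processed)]
    # Second set: the same processed channel signals, offered to the
    # network communication link unmixed.
    network_feed = processed
    return monitor_mix, network_feed

# Two channels of two samples each, as a toy "common set" of inputs.
mix, feed = end_node([[0.1, 0.2], [0.3, 0.4]])
```

The point the rejection leans on is visible in the sketch: both output sets derive from the same `processed` signals, and the monitor path never touches the network.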
Regarding claim 2, Silfvast in view of Bose teaches the portable loudspeaker and the processor, as previously described in claim 1's §103 rejection. Silfvast further teaches that using a digital audio workstation (DAW) in the audio system is well known in the art for playback/audio recording (see [0072]: "…Digital audio workstation (DAW) 716 may contribute recorded tracks for playback during a live performance…The DAW may also be used to record all the tracks so that a concert can be archived or mixed down to a CD, DVD, or file made available for purchase after the show…").

Silfvast does not explicitly mention that a DAW is inside the processor for controlling the second set of output signals. However, Silfvast does teach that the processor in end node 308, Fig. 3, can be integrated as a single processor (see [0079]: "The various components of a DSSN architecture-based end node may be implemented using various special purpose and/or customized processors, or by using general purpose processors or by using a combination of these… In some embodiments, all of the input processing, output processing, and mixing may be implemented on a single device having a single processor…"). Furthermore, the processor can perform reverb effects and equalization in the output mix (see [0014]: "Rendering the output mix includes at least one of adding reverb effects and equalization."). Official notice is taken that reverb effects and equalization are well-known functions in a DAW. Thus, at the time the invention was effectively filed, it would have been obvious to use a DAW to perform these functions, i.e. reverb effects and equalization, in the output mix in the processor for controlling the second set of output signals (the output mix is transmitted via the network communication link, i.e. the second set of output signals, see [0012]: "…The one or more processed output mixes are provided to the network connection module for transmission over the network…"). It would have yielded an improved speaker capable of improving "the sound quality of signals …" ([0047]). One of ordinary skill in the art would have been motivated to do so "…to enhance-or deliberately modify-the sonic character of the audio source, to make it more pleasing in the context of the overall mix or output signal delivered to an audience or user…" ([0047]).

Regarding claim 3, Silfvast in view of Bose teaches the portable loudspeaker, the second set of audio output signals, and the network communication link, as previously described in claim 1's §103 rejection. Silfvast also teaches that audio output signals are sent (output signals are delivered to the network with a range of end nodes connected, see Fig. 7; also see [0071]: "…The digitized, unprocessed signals are made available on the network for processing at another device, and a mix suitable for such an end node can be delivered to the network for output on the end node.") to at least one of a digital audio workstation, a live stream, or a network-connected recording device (the output signals are sent to a network-connected digital audio workstation (DAW) 716 in Fig. 7, and to a live stream, i.e. "tracks for playback during a live performance", wherein the DAW can also record audio, see [0072]: "…Digital audio workstation (DAW) 716 may contribute recorded tracks for playback during a live performance…The DAW may also be used to record all the tracks so that a concert can be archived or mixed down to a CD, DVD, or file made available for purchase after the show…"), via the network communication link (see the network interface in Node 308, Fig. 3; also see the same pathway in node 302, [0036]: "…The end node performs input processing at input processing module 324, and passes the processed signal (A') to local mixer 326 and network interface 328…").
Regarding claim 4, Silfvast in view of Bose teaches the portable loudspeaker, as previously described in claim 1's §103 rejection. Silfvast also teaches an amplifier configured to provide an amplified audio signal from at least one of the audio input signals (a preamplifier, one type of amplifier for amplifying input signal(s), may be included for audio input processing/amplifying in each input channel, see [0047]: "In each input channel (e.g., channel 404), the audio input is processed in the analog domain (via analog front end 408), which may in some instances include a preamplifier…"; each channel can receive multiple input signals, see [0047]: "Each of the received audio signals is fed to a designated input channel of one or more input channels 404, 406 of end node 400…"), and an amplifier configured to provide an amplified audio signal from at least one of …the audio output signals, wherein the at least one electro-acoustic transducer is configured to provide an acoustic output based on the amplified audio signal (an amplifier is configured for output signal processing/amplifying, and the processed/amplified audio signal is then delivered to an electro-acoustic transducer to provide an acoustical output, see [0066]: "…An example of an output-only end node would be a network-connected loudspeaker. By having an internal (local) mixer and output processing chain, this device can create its own custom mix for direct outputting to the device's amplifier and transducer elements… each of which is configured to receive one signal and deliver that signal to its acoustical output…").

Regarding claim 5, Silfvast in view of Bose teaches the portable loudspeaker, the first set of audio output signals, and the second set of audio output signals, as previously described in claim 1's §103 rejection. Silfvast also teaches "approximately simultaneously" (the first set of audio output signals can be delivered with zero latency, see [0034]: "…the monitoring could be done entirely in the analog domain, thus also avoiding the latency associated with analog to digital conversion and digital to analog conversion, resulting in zero latency monitoring…"; an artificial delay can also be inserted in the same node/device for time aligning other signal paths, e.g. the second set of audio output signals, so as to provide the two sets of audio output signals "approximately simultaneously", see [0034]: "…If some level of delay is desired for time aligning source signals with signal paths used to monitor other end nodes, an artificial delay can be inserted in the local end node's DSP path...").

Regarding claim 11, Silfvast in view of Bose teaches the portable loudspeaker, as previously described in claim 1's §103 rejection. Silfvast also teaches a mixer coupled with the audio input, wherein the audio input includes at least two inputs (a digital/analog mixer is coupled with the audio input via input channels to process at least two source/input signals, see elements 418 and 420, Fig. 4; see [0011]: "…a digital mixer for generating one or more output mixes by mixing the processed source audio signals received from the one or more channel strips with audio signals received via the network connection module…"; also see [0012]: "…An analog mixer for receiving one or more of the source audio signals in analog form and for mixing the one or more received audio signals in analog form…"; User Group C in Node 306, Fig. 3 further shows at least two inputs included in the audio input).

Regarding claim 16, Silfvast in view of Bose teaches the portable loudspeaker, with processing on a single processor, as previously described in claim 1's §103 rejection.
Silfvast further teaches that processing the audio input signals includes adjusting…a relative signal level… of any of one or more of the audio input signals (the relative volume of all the sources/audio input signals, i.e. "a relative signal level", can be adjusted as a form of audio input signal processing, see [0077]: "…From the mix control panel, the user can make relative volume adjustments for all the sources in the selected mix using "touch sliders" 1004…"), an equalization…of any of one or more of the audio input signals (signal processing includes equalization of received audio input signals, see "The audio processing unit of claim 1, wherein the channel strip processing of the received audio signals includes equalization.", pg. 13, claim 14; also see [0014]: "…Conditioning the first audio signals includes at least one of rumble filtering, equalization, delaying, and insert processing…"), and a reverb of any of one or more of the audio input signals (reverberation can be included to enhance the audio source/input signals, see [0052]: "…This Zero-latency analog path can be supplemented with reverberation to enhance the audio source signals feeding into the analog mixer…").

Regarding claim 17, Silfvast in view of Bose teaches the portable loudspeaker, the processor, the monitor, and the audio output via the network communication link, as previously described in claim 1's §103 rejection.
Silfvast also teaches that the processor enables a user (a processor provides a user interface for a user/operator to control parameters, see [0012]: "…The audio processing unit further includes a processor for hosting a user interface, and the user interface enables an operator to control parameters of the one or more output mixes…") to provide audio output locally with the monitor (see [0036]: "…A local monitor mix is output from local mixer 326, undergoes output processing at output processing module 330 before being sent via an audio output module (not shown) via pathway 332 to an audio output device, such as headphones 334..."), while streaming or recording the audio output via the network communication link (meanwhile, the audio output is also being passed/streamed via the network communication link, see [0012]: "…The end node performs input processing at input processing module 324, and passes the processed signal (A') to local mixer 326 and network interface 328…"), synchronously (an artificial delay can be inserted to align local monitoring and streaming via the network communication link, thereby providing them "synchronously", see [0034]: "…If some level of delay is desired for time aligning source signals with signal paths used to monitor other end nodes, an artificial delay can be inserted in the local end node's DSP path..."; this function can be performed by a user via the User Interface (UI), see [0078]: "…The described UI illustrates a simple UI example; many other functions that users may wish to control within a DSSN architecture system may also be included. However, the example serves to show that the networked nature of a DSSN system allows multiple users to control any or all of the many functions within a multi-node DSSN mixer system, from any end node (DSSN or other) on the network."; on the same processor that implements various components including the User Interface, see [0079]: "The various components of a DSSN architecture-based end node may be implemented using various special purpose and/or customized processors, or by using general purpose processors or by using a combination of these… In some embodiments, all of the input processing, output processing, and mixing may be implemented on a single device having a single processor…").

Regarding claim 18, since the claimed method comprises the same operations conducted by the apparatus in claim 1, claim 18 is rejected as being unpatentable over Silfvast in view of Bose for the reasons mentioned in claim 1's §103 rejection.

Regarding claim 19, since the claimed method comprises the same operations conducted by the apparatus in claim 5, claim 19 is rejected as being unpatentable over Silfvast in view of Bose for the reasons mentioned in claim 5's §103 rejection.

Claims 6-8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Silfvast et al. (US Pub. No. 2014/0064519 A1) in view of Bose ("S1 Pro Multi-position PA System Owner's Guide"), and further in view of Lambourne et al. (US Patent No. 7,571,014 B1).

Regarding claim 6, Silfvast in view of Bose teaches the portable loudspeaker, the processor, the first set of audio output signals, and the second set of audio output signals, as described in claim 1's §103 rejection. Silfvast in view of Bose does not teach independent adjustment. Lambourne teaches independent adjustment (output audio signal characteristics on different players can be individually/distinctly controlled and adjusted, see "The audio characteristics include, but are not limited to, audio volume, audio bass, and audio treble", col. 4, ln. 14-15; also see "the present invention pertains to control of audio characteristics of a plurality of multimedia players, or simply players, from a controller. …In particular, the present invention enables the user to remotely control the audio characteristics of the players either as a group or as an individual player.", col. 2, ln. 24-31; wherein players refer to audio players/devices receiving different output signals, see "Each of the audio players has its own amplifier(s) and a set of speakers and typically installed in one place (e.g., a room).", col. 1, ln. 36-38). At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to have further improved the portable speaker as taught by Silfvast in view of Bose to independently adjust the two sets of output signals as taught by Lambourne. It would have yielded an improved speaker capable of "controlling Zone group and Zone group characteristics" (Abstract). One of ordinary skill in the art would have been motivated to do so for a convenient and homogeneous audio environment (col. 2, ln. 9).

Regarding claim 7, Silfvast in view of Bose teaches the portable loudspeaker, the processor, the first set of audio output signals, and the second set of audio output signals, as described in claim 1's §103 rejection. Silfvast in view of Bose does not teach distinct volume control. Lambourne teaches distinct volume control (distinct volumes can be applied to different players/outputs, see "the audio Volume control of a Zone group can be performed individually or synchronously as a group", Abstract, ln. 13-14; also see the Dining Room/Living Room volume in Fig. 7D). At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to have further improved the portable speaker as taught by Silfvast in view of Bose by integrating distinct volume control as taught by Lambourne.
It would have yielded an improved speaker capable of "controlling Zone group and Zone group characteristics" (Abstract, ln. 1). One of ordinary skill in the art would have been motivated to do so for a convenient and homogeneous audio environment (col. 2, ln. 9).

Regarding claim 8, Silfvast in view of Bose teaches the portable loudspeaker, the processor, the first set of audio output signals, and the second set of audio output signals, as described in claim 1's §103 rejection. Silfvast in view of Bose does not teach "at least one of distinct equalization settings or distinct mix settings of one or both of…". Lambourne teaches at least one of distinct equalization settings or distinct mix settings (the audio characteristics include distinct equalization settings or distinct mix settings, e.g. volume, bass, treble, loudness, and balance in Fig. 7D, see "The audio characteristics include, but are not limited to, audio volume, audio bass, and audio treble", col. 4, ln. 14-15; these settings are distinct for different outputs, see Fig. 7D), of "one or both of… (the first set of audio output signals and the second set of audio output signals)" (volume, treble, bass, loudness, and balance can be applied to different zone players/outputs, see Fig. 7D). At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to have further improved the portable speaker as taught by Silfvast in view of Bose to enable at least one of distinct equalization settings or distinct mix settings of one or both of the audio output signals as taught by Lambourne. It would have yielded an improved speaker capable of "controlling Zone group and Zone group characteristics" (Abstract, ln. 1). One of ordinary skill in the art would have been motivated to do so for a convenient and homogeneous audio environment (col. 2, ln. 9).
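The independent adjustment Lambourne is cited for in claims 6-8 (distinct volume, equalization, and mix settings per output path) can be sketched as a per-output settings table applied to one shared frame. The names, the settings dictionary, and the whole-frame "EQ" scaling below are illustrative assumptions, not taken from the patent.

```python
def apply_settings(frame, volume, bass_gain=1.0, treble_gain=1.0):
    """Toy per-output adjustment: scale the whole frame.

    A real equalizer would filter per frequency band; uniform scaling is
    an assumption made to keep the sketch short.
    """
    return [s * volume * bass_gain * treble_gain for s in frame]

# Hypothetical distinct settings for the two claimed output sets.
settings = {
    "monitor": {"volume": 1.0},   # first set: full level locally
    "network": {"volume": 0.5},   # second set: attenuated upstream
}

frame = [0.2, 0.4]  # one shared audio frame feeding both output paths
outputs = {name: apply_settings(frame, **cfg) for name, cfg in settings.items()}
```

Because each output path reads its own entry in `settings`, changing the network volume leaves the monitor path untouched, which is the "individual player" control Lambourne describes.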
Regarding claim 20, since the claimed method comprises the same operations conducted by the apparatus in claim 6, claim 20 is rejected as being unpatentable over Silfvast in view of Bose and further in view of Lambourne for the reasons mentioned in claim 6's §103 rejection.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Silfvast et al. (US Pub. No. 2014/0064519 A1) in view of Bose ("S1 Pro Multi-position PA System Owner's Guide"), and further in view of Klayman (US Patent No. 5,970,152).

Silfvast in view of Bose teaches the portable loudspeaker and the second set of audio signals, as previously described in claim 1's §103 rejection. Silfvast in view of Bose does not teach a dual mono mix of multiple input channels. Klayman teaches multiple input channels (input signals are from multiple channels, see "The audio enhancement System 10 operates in connection with a Stereo Signal decoder 12 having multi-channel audio source signals.", col. 2, ln. 55-57) and a "…dual mono mix" (the input signals of the multiple input channels can be from a monophonic source, see "input signals S1 and S2 in the equations above are typically Stereo Source Signals, but may also be Synthetically generated from a monophonic Source.", col. 8, ln. 56-58; thereby the downstream mixing amplifiers can be called a dual mono mix, see "The amplifiers 276 and 282 operate as mixing amplifier which combine the processed difference signal with the sum signal and either the left or right input signal", col. 12, ln. 29-31), i.e. "a…(dual mono mix)" (in addition, an individual dual mono mixer can be included, see "Individual component audio signals generated from different pairs of original audio signals are then selectively combined to create a composite audio output signal. The composite audio output signal is then fed directly to a speaker for acoustic reproduction. The remaining audio output Signals are generated in a similar fashion by combining selected component audio signals.", col. 2, ln. 11-17).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to have further improved the portable speaker as taught by Silfvast in view of Bose to include a dual mono mix of multiple input channels as taught by Klayman. It would have yielded an improved speaker with an "audio enhancement" feature (col. 1, ln. 5). One of ordinary skill in the art would have been motivated to do so for "improving the realism and dramatic effects" (col. 1, ln. 2-3).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Redmann (US 20070140510 A1): teaches a similar apparatus comprising an audio output transducer (element 106, Fig. 1); an audio input (elements 110, 110', Fig. 1); an audio output (element 142, Fig. 1); a communication module/interface (elements 120, 120', Fig. 1); a monitor ([0066]); and a processor (element 108, Fig. 2) configured to receive the audio input signal and to process the audio input signals to provide two sets of audio output signals (elements 142, 120, Fig. 1).

Jung (US 6148085 A): teaches two audio signals output simultaneously from a switch (element 30, Fig. 2) to different channels/outputs (elements 7 and 9, Fig. 2), with distinct volume control (elements 40 and 50, Fig. 2) of the different output signals.

Trammell (US 2011/0317841 A1): teaches a dual mono mix (elements 125, 127, Fig. 1) of multiple input channels (elements 112, 114, Fig. 1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIN LEE, whose telephone number is (571) 272-1460. The examiner can normally be reached Monday through Friday, 8 am - 5 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIN LEE/
Examiner, Art Unit 2695

/VIVIAN C CHIN/
Supervisory Patent Examiner, Art Unit 2695

Prosecution Timeline

May 30, 2024: Application Filed
Jan 14, 2026: Non-Final Rejection (§103)
Feb 02, 2026: Interview Requested
Feb 19, 2026: Applicant Interview (Telephonic)
Feb 19, 2026: Examiner Interview Summary


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Note: Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
