Prosecution Insights
Last updated: April 19, 2026
Application No. 18/789,337

SMART HUB
Final Rejection under §§ 102, 103, and 112

Filed: Jul 30, 2024
Examiner: SAUNDERS JR, JOSEPH
Art Unit: 2692
Tech Center: 2600 (Communications)
Assignee: Proctor Consulting LLC
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 73% (538 granted / 740 resolved), +10.7% vs Tech Center average (above average)
Interview Lift: +20.6% for resolved cases with an interview
Avg Prosecution: 2y 9m (27 applications currently pending)
Total Applications: 767 across all art units
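The headline figures above are simple ratios over the examiner's resolved cases. As a minimal sketch (hypothetical variable names; the underlying per-case data is not part of this report), the career allow rate and the implied Tech Center baseline follow directly from the counts shown:

```python
# Reproduce the headline examiner stats from the counts shown above.
granted = 538   # applications allowed
resolved = 740  # total resolved (allowed + abandoned)
allow_rate = granted / resolved * 100

# The report lists the examiner at +10.7 points vs the Tech Center average,
# which implies a TC baseline of roughly allow_rate - 10.7.
tc_delta = 10.7
implied_tc_avg = allow_rate - tc_delta

print(f"Career allow rate: {allow_rate:.0f}%")       # 73%
print(f"Implied TC average: {implied_tc_avg:.0f}%")  # 62%
```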

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 29.6% (-10.4% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)

Based on career data from 740 resolved cases; comparisons are against Tech Center average estimates.

Office Action

Rejections under §§ 102, 103, and 112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is based on the communications filed January 28, 2026. Claims 21 – 46 are currently pending and considered below.

Response to Arguments

Applicant’s arguments, see page 7 of the Remarks, filed January 28, 2026, with respect to claim rejections under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejection has been withdrawn. However, the introduction of “a first wireless speaker device” creates new clarity issues addressed below.

Applicant's arguments, see page 8 of the Remarks, filed January 28, 2026, with respect to claim rejections under 35 U.S.C. 102 have been fully considered but they are not persuasive.

Under section A. 1. Applicant argues, “Korhonen does not disclose the claimed plurality of transceivers "including at least each of a Wi-Fi radio, and a Bluetooth radio." The Examiner respectfully disagrees; Korhonen was cited to teach both in stating, “Exemplary short distance wireless systems include Bluetooth, IrDa, and Wifi… Other short distance wireless communication protocols, such as… wireless LAN (IEEE 802.11) specification, or the like may alternatively or additionally be provided for direct communication between the participating devices,” Korhonen [0019], emphasis added by the Examiner.

Under section A. 2. Applicant argues, “Korhonen does not disclose "acoustic equalization of the audio information signals" in the claimed arrangement.” The Examiner respectfully disagrees.
Korhonen enables acoustic equalization when assigning channels in the form of frequency bands: “the participating mobile devices have different frequency responses, the master device may allocate channels to the participating devices based on their frequency responses (e.g., a channel with lower frequencies to a device with good response in the low frequency range, and so forth),” Korhonen [0031]; “Alternatively, the assigned channels may be in the form of frequency bands, each frequency band representing only a portion of the bandwidth that a mobile device can reproduce, with participating devices being assigned a different frequency band. Thus, for example, one participating device may be assigned a high frequency band (such as 1000-20,000 Hz), a second mobile device may be assigned a mid frequency band (such as 150-999 Hz), and a third mobile device may be assigned a lower frequency band (such as 20-149 Hz), and so forth. In another embodiment, all the participant devices may be assigned to play the entire file, e.g., where the audio file is a single channel (mono) audio file,” Korhonen [0032].

Applicant further argues, “At minimum, Korhonen fails to disclose the required combination and arrangement of (i) acoustic channel state information determination, and (ii) acoustic equalization of audio information signals based on acoustic transmission/reception and wireless messaging association, as recited in claim 21.” Again the Examiner respectfully disagrees: as stated in the rejection, (i) determination of acoustic channel state information is enabled when “the master device 12 may interrogate the other selected participant devices 14, 16, 18, 20 to determine a frequency response range of the respective device's loudspeaker,” [0031].
Further, the enabling of acoustic equalization discussed above is at least in part (ii) based on acoustic transmission/reception and wireless messaging association, as shown by the cited transmission/reception of test pulses, Korhonen [0034], and communication of wireless messages, Korhonen [0014], [0020], [0029], as required to “interrogate the other selected participant devices 14, 16, 18, 20 to determine a frequency response range of the respective device's loudspeaker,” [0031].

In regards to section B, claim 22, “device processing” as argued is not required by the claim language.

In regards to section B, claim 23, Korhonen discloses “the user is queried as to which device is to be assigned a particular channel in order to provide surround sound, which may depend on the positions of the participating devices in a room,” Korhonen [0032], and “the MDPC assigns the channels by notifying each mobile device which channel it will play,” Korhonen [0032], and the assignment is therefore based, at least in part, upon the acoustic channel state information.

In regards to section B, claim 39, Korhonen discloses “television transmission, to the mobile device,” Korhonen [0014]; since the mobile device receives a television transmission, a television is recited.

In regards to section B, claim 45, according to Applicant’s specification the Wi-Fi Direct protocol is a direct Wi-Fi communication link, which is taught by “direct wireless link” and “Wi-Fi radio”.

In regards to section D, claim 42, Korhonen, not Backman, was cited as teaching the argued automation features.

Applicant’s arguments, see Section B claims 31 and 32, Section C claim 34, and Section E claim 46/Section F as it pertains to claim 46, have been fully considered and are persuasive. The rejection of claims 31 – 34 and 46 has been withdrawn.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21 – 46 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 21 is amended to recite “a first wireless speaker device” on line 5; however, it previously recited “A wireless speaker device” on line 1, and it is therefore unclear whether “a first wireless speaker device” is the same as or different from “A wireless speaker device”. Claims 22 – 46 further recite “The device of” on line 1 of each of the respective claims. It is therefore further unclear whether “The device of” on line 1 of each of the respective claims refers to “a first wireless speaker device” or “A wireless speaker device”. Appropriate correction and/or clarification is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 21 – 30, 35, 39, 41, and 45 is/are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Korhonen (US 2008/0045140 A1), hereinafter Korhonen.

Claim 21: Korhonen discloses a wireless speaker device comprising: a plurality of wireless radio transceivers for communicating wireless messages, including at least each of a Wi-Fi radio, and a Bluetooth radio (see at least, “The mobile device 12 includes a transceiver 64, associated with the antenna 22, which serves as an interface for short distance and long distance wireless communication and is in communication with the CPU 50. Short distance wireless communication here denotes wireless communication over a short distance, for example, about 100 meters or less, typically, about 10 meters (i.e., too short to reach the communication station 26, but long enough to reach the other participating mobile devices). Exemplary short distance wireless systems include Bluetooth, IrDa, and Wifi. In the Bluetooth interface protocol, a maximum communication speed is 1 Megabit per second, and a frequency of 2.4 GHz is used, the frequency being equal to that used in the IEEE 802.11b system wireless LAN.
Other short distance wireless communication protocols, such as a UWB specification, an infrared-ray communication specification, or a wireless LAN (IEEE 802.11) specification, or the like may alternatively or additionally be provided for direct communication between the participating devices,” Korhonen [0019], “As will be appreciated, separate transceivers and/or antennae may be provided for the short and longer distance transmissions, respectively,” Korhonen [0019]); one or more microphones (see at least, “The mobile device 12 also includes a conventional earphone 34, a loudspeaker 36, a microphone 38, a user interface 40 in the form of a keypad, and a rechargeable battery 42,” Korhonen [0015]); a first wireless speaker device including at least one loudspeaker (see at least, “The mobile device 12 also includes a conventional earphone 34, a loudspeaker 36, a microphone 38, a user interface 40 in the form of a keypad, and a rechargeable battery 42,” Korhonen [0015]); an audio information interface for providing one or more audio information signals for use in generation of respective acoustic signals by respective loudspeakers (see at least, “In playing a music file in a multi-device mode, such as a multi-channel mode, one of the mobile devices (device 12 in the exemplary embodiment) serves as a master device while the other participating mobile devices (14, 16, 18, 20) are slaves. In general, the master device 12 is used for selection of the audio file, communicating with the other mobile devices selected as participants, optionally, transferring a copy of the audio file or selected channel thereof between the master device and one or more of the other participant devices, and prompting the other mobile devices to begin play, thereby achieving synchronizing playing of the file or of respective channels of the audio file by the selected participant devices,” Korhonen [0013], “At step S102, a user selects a multi-channel audio file to be played.
For example, using the menu button, the user scrolls through a list of audio files, such as music files and selects, e.g., by clicking on, one or more of the files to be played. Where the selected file is not already resident in the memory of the user's device 12, the mobile phone may communicate wirelessly with the server 28 which transfers the file to the master device's memory 56 (Step S104). If the file is already resident on the device 12, this step can be omitted,” Korhonen [0027], “The server, in turn, provides audio data, such as music files, audio/video files, and/or streamed radio/television transmission, to the mobile device 12 via the network,” [0014], “Once the user has selected the audio file(s) and participants and requested play (e.g., by pressing a button on the user interface), the remaining steps may proceed automatically under the control of the CPU/MDPC, except as otherwise noted,” Korhonen [0028]); and one or more processors configured to execute program code to (see at least, “The various components communicate via suitable data/control buses. Memory 52 stores instructions for the processor 50, including instructions for performing the exemplary method outlined below and illustrated in FIG. 4,” Korhonen [0017], “The mobile device 12 also includes a multi-device processing component (MDPC) 70 for initiating and playing an audio file in a multi device, e.g., multi-channel mode. The MDPC 70 may be a part of the CPU or an add-on component in communication therewith, as shown.
The MDPC 70 is configured for executing instructions, stored in memory 52, for selection of a multi-channel audio file, selection of participating mobile devices, allocating channels of the selected file to the participating mobile devices (where multi-channel play is selected), communicating with the other participating mobile devices, and prompting them to play the audio file in a synchronized fashion,” [0022]): enable determination of acoustic channel state information relevant to a second wireless speaker device wherein the second wireless speaker device also includes at least one loudspeaker, and wherein the at least one loudspeaker of the first wireless speaker device and the at least one loudspeaker of the second wireless speaker device comprise the respective loudspeakers; and enable acoustic equalization of the one or more audio information signals based, at least in part, upon each of (see at least, “At step S114, the master device 12 may interrogate the other selected participant devices 14, 16, 18, 20 to determine a frequency response range of the respective device's loudspeaker. Where the participating mobile devices have different frequency responses, the master device may allocate channels to the participating devices based on their frequency responses (e.g., a channel with lower frequencies to a device with good response in the low frequency range, and so forth),” Korhonen [0031], “Compare Frequency Responses of Participating Devices,” S114, Korhonen FIG. 4, “At step S116, the participating devices may each be assigned a channel of the multi-channel file (and/or a frequency range) for multi-channel play. Where there are more devices than channels in an audio file, two or more devices may be assigned the same channel. Where there are fewer devices than channels, a mobile device may be assigned two or more channels. In one embodiment, the MDPC assigns the channels by notifying each mobile device which channel it will play.
In another embodiment, the user is queried as to which device is to be assigned a particular channel in order to provide surround sound, which may depend on the positions of the participating devices in a room. For example, the user may assign audio channels to mobile phones in front left and right locations to serve as front speakers, and assign corresponding rear speaker audio channels of the audio file to mobile phones in rear left and right locations, relative to the user, as well as a fifth mobile phone as a central channel, to resemble surround sound speakers. For stereo, two channels may be used, representing left and right speakers. Alternatively, the assigned channels may be in the form of frequency bands, each frequency band representing only a portion of the bandwidth that a mobile device can reproduce, with participating devices being assigned a different frequency band. Thus, for example, one participating device may be assigned a high frequency band (such as 1000-20,000 Hz), a second mobile device may be assigned a mid frequency band (such as 150-999 Hz), and a third mobile device may be assigned a lower frequency band (such as 20-149 Hz), and so forth. In another embodiment, all the participant devices may be assigned to play the entire file, e.g., where the audio file is a single channel (mono) audio file,” Korhonen [0032]): transmission of at least one of the respective acoustic signals by one of the respective loudspeakers (see at least, “In another embodiment, the master device may compute the times at which the play initiation pulses should be sent out from response times to a test signal. In this embodiment, the master device sends a test signal, such as a pulse, to each of the devices,” Korhonen [0034], “At step S122, the master device initiates the replay of the audio file or selected play list. For example, the master device may send a signal, such as synchronization pulse, to the slave mobile devices to start the joint replay.
The signal may be sent as an audible signal via the air which is sufficiently loud to be picked up by the microphone of the mobile devices,” Korhonen [0038]); and reception of one or more received acoustic signals, as transmitted from the respective loudspeaker devices using the one or more microphones (see at least, “The slave devices are instructed, prior to sending the test pulse, to initiate play of a test sound when they receive the test pulse. The master device then records a time at which the test sound is received by the master device after the test pulse has been sent and computes the delay,” Korhonen [0034], “The mobile device 12 also includes a conventional earphone 34, a loudspeaker 36, a microphone 38, a user interface 40 in the form of a keypad, and a rechargeable battery 42,” Korhonen [0015]); and communication of a wireless message associated with the second wireless speaker device (see at least, “The mobile device 12 may communicate wirelessly with other voice communication devices, such as devices 14, 16, 18, 20, via a wireless network 24 which includes one or more communication stations 26,” Korhonen [0014], “Instructions may be sent automatically from the master device 12 to the slave devices 14, 16, 18, 20. For example, the Short Message Service (SMS), which provides mobile devices with the ability to send text messages from one mobile device to one or more other mobile devices may be utilized. Another service which may be utilized is MMS (multimedia message service), which enables a message with a combination of sound, text and pictures to be sent between MMS-compatible mobile devices,” Korhonen [0020], “At step S110, a communication link is established whereby the MDPC 70 communicates with the selected mobile devices, for example using the Bluetooth interface. Alternatively, the MDPC may communicate wirelessly, via the base station 26 or by a cable link between the master and slave mobile devices,” Korhonen [0029]).
Claim 22: Korhonen discloses the device of claim 21 wherein the processor is further configured to enable determination of a relative location of the second wireless speaker device with respect to the first wireless speaker device (see at least, “In one embodiment, the MDPC assigns the channels by notifying each mobile device which channel it will play. In another embodiment, the user is queried as to which device is to be assigned a particular channel in order to provide surround sound, which may depend on the positions of the participating devices in a room. For example, the user may assign audio channels to mobile phones in front left and right locations to serve as front speakers, and assign corresponding rear speaker audio channels of the audio file to mobile phones in rear left and right locations, relative to the user, as well as a fifth mobile phone as a central channel, to resemble surround sound speakers. For stereo, two channels may be used, representing left and right speakers,” Korhonen [0032]). Claim 23: Korhonen discloses the device of claim 22 wherein the determination of a relative location of the second wireless speaker device with respect to the first wireless speaker device is based, at least in part upon the acoustic channel state information (see at least, “At step S116, the participating devices may each be assigned a channel of the multi-channel file (and/or a frequency range) for multi-channel play. Where there are more devices than channels in an audio file, two or more devices may be assigned the same channel. Where there are fewer devices than channels, a mobile device may be assigned two or more channels. In one embodiment, the MDPC assigns the channels by notifying each mobile device which channel it will play.
In another embodiment, the user is queried as to which device is to be assigned a particular channel in order to provide surround sound, which may depend on the positions of the participating devices in a room,” Korhonen [0032]). Claim 24: Korhonen discloses the device of claim 23 wherein the determination of acoustic channel state information relevant to the second wireless speaker device is performed, at least in part, by a processor on another device (see at least, “In another embodiment, the slave devices are instructed to each compute their own start time such that the loudspeakers all begin play a specific time after they receive the play initiation pulse, such as a one second delay or a five second delay. The instructions as to when to begin after a play initiation pulse may be part of the software, and thus resident on the slave devices,” Korhonen [0035], “The various components communicate via suitable data/control buses. Memory 52 stores instructions for the processor 50, including instructions for performing the exemplary method outlined below and illustrated in FIG. 4,” Korhonen [0017]). Claim 25: Korhonen discloses the device of claim 22 wherein the audio information signals include at least a first audio information signal provided to the at least one loudspeaker of the first wireless speaker device, and a second audio information signal provided to the at least one loudspeaker of the second wireless speaker device (see at least, “At step S118, the participating devices 12, 14, 16, 18, 20 are synchronized to play their respective channels. Synchronization ensures that the mobile devices all play the audio file contemporaneously (at least to the user's ear). 
Synchronization may take into account the standard delay of the speaker system of each device and/or any other delays occurring between a device receiving a play command and the actual start of play,” Korhonen [0033], “At step S120, the participating mobile devices may extract the assigned channel from the audio file. For example, the MDPC of each participating device uses available embedded audio signal processing (e.g., an RF bandwidth limiting filter) to filter the correct part of the audio file for replay. As will be appreciated, the extraction of the assigned channel may begin as soon as the channel has been assigned,” Korhonen [0037], “At step S122, the master device initiates the replay of the audio file or selected play list. For example, the master device may send a signal, such as synchronization pulse, to the slave mobile devices to start the joint replay. The signal may be sent as an audible signal via the air which is sufficiently loud to be picked up by the microphone of the mobile devices,” Korhonen [0038]). Claim 26: Korhonen discloses the device of claim 25 wherein the acoustic equalization is applied to provide at least the second audio information signal (see at least, “At step S114, the master device 12 may interrogate the other selected participant devices 14, 16, 18, 20 to determine a frequency response range of the respective device's loudspeaker. Where the participating mobile devices have different frequency responses, the master device may allocate channels to the participating devices based on their frequency responses (e.g., a channel with lower frequencies to a device with good response in the low frequency range, and so forth),” Korhonen [0031], “Compare Frequency Responses of Participating Devices,” S114, Korhonen FIG. 
4, “Alternatively, the assigned channels may be in the form of frequency bands, each frequency band representing only a portion of the bandwidth that a mobile device can reproduce, with participating devices being assigned a different frequency band. Thus, for example, one participating device may be assigned a high frequency band (such as 1000-20,000 Hz), a second mobile device may be assigned a mid frequency band (such as 150-999 Hz), and a third mobile device may be assigned a lower frequency band (such as 20-149 Hz), and so forth. In another embodiment, all the participant devices may be assigned to play the entire file, e.g., where the audio file is a single channel (mono) audio file,” Korhonen [0032]). Claim 27: Korhonen discloses the device of claim 22 wherein the audio equalization includes spatial equalization related to at least the determined relative location of the second wireless speaker device (see at least, “In one embodiment, the MDPC assigns the channels by notifying each mobile device which channel it will play. In another embodiment, the user is queried as to which device is to be assigned a particular channel in order to provide surround sound, which may depend on the positions of the participating devices in a room. For example, the user may assign audio channels to mobile phones in front left and right locations to serve as front speakers, and assign corresponding rear speaker audio channels of the audio file to mobile phones in rear left and right locations, relative to the user, as well as a fifth mobile phone as a central channel, to resemble surround sound speakers. For stereo, two channels may be used, representing left and right speakers,” Korhonen [0032], “At step S118, the participating devices 12, 14, 16, 18, 20 are synchronized to play their respective channels. Synchronization ensures that the mobile devices all play the audio file contemporaneously (at least to the user's ear).
Synchronization may take into account the standard delay of the speaker system of each device and/or any other delays occurring between a device receiving a play command and the actual start of play,” Korhonen [0033]). Claim 28: Korhonen discloses the device of claim 27 wherein the spatial equalization includes controlling which known speaker source information is included within an audio information signal sent to the second wireless speaker device (see at least, “In one embodiment, the MDPC assigns the channels by notifying each mobile device which channel it will play. In another embodiment, the user is queried as to which device is to be assigned a particular channel in order to provide surround sound, which may depend on the positions of the participating devices in a room. For example, the user may assign audio channels to mobile phones in front left and right locations to serve as front speakers, and assign corresponding rear speaker audio channels of the audio file to mobile phones in rear left and right locations, relative to the user, as well as a fifth mobile phone as a central channel, to resemble surround sound speakers. For stereo, two channels may be used, representing left and right speakers. Alternatively, the assigned channels may be in the form of frequency bands, each frequency band representing only a portion of the bandwidth that a mobile device can reproduce, with participating devices being assigned a different frequency band. Thus, for example, one participating device may be assigned a high frequency band (such as 1000-20,000 Hz), a second mobile device may be assigned a mid frequency band (such as 150-999 Hz), and a third mobile device may be assigned a lower frequency band (such as 20-149 Hz), and so forth. In another embodiment, all the participant devices may be assigned to play the entire file, e.g., where the audio file is a single channel (mono) audio file,” Korhonen [0032]).
Claim 29: Korhonen discloses the device of claim 21 wherein the respective acoustic signals comprise actual media audio content provided to the first wireless speaker device without the use of stand alone training signals (see at least, “In yet another embodiment, all the devices are simply instructed to begin play as soon as they receive the synchronization pulse (or at a preselected time thereafter). This embodiment ignores the delays of the individual mobile phones and differences in time for the sound to travel, but may nonetheless provide satisfactory synchronization when the standard delay times are relatively similar and the spacing of the devices is fairly close, such as within about 5 meters of the master device,” Korhonen [0036]). Claim 30: Korhonen discloses the device of claim 21 wherein the respective acoustic signals comprise training signals provided to the wireless speaker device (see at least, “In another embodiment, the master device may compute the times at which the play initiation pulses should be sent out from response times to a test signal. In this embodiment, the master device sends a test signal, such as a pulse, to each of the devices,” Korhonen [0034]). Claim 35: Korhonen discloses the device of claim 21 wherein the processor is further configured to: communicate with a second wireless speaker device utilizing at least one of the wireless radio transceivers, to provide simultaneous audio playback of streaming content utilizing both the at least one loudspeaker located within the wireless speaker device and the at least one loudspeaker local to the second wireless speaker device (see at least, “With reference to FIG. 1, an exemplary audio reproduction system 10 for sound reproduction of digital audio data is shown. The system 10 includes a group of mobile devices 12, 14, 16, 18, 20, each having wireless data transmitting and receiving capability and audio output capability. 
The mobile devices are used in concert to reproduce an audio file. Each of the mobile devices 12, 14, 16, 18, 20 may be assigned an audio channel of the file whereby multi-channel sound reproduction is achieved,” Korhonen [0011], “At step S122, the master device initiates the replay of the audio file or selected play list. For example, the master device may send a signal, such as synchronization pulse, to the slave mobile devices to start the joint replay. The signal may be sent as an audible signal via the air which is sufficiently loud to be picked up by the microphone of the mobile devices,” Korhonen [0038]). Claim 39: Korhonen discloses the device of claim 21 additionally comprising a display device that is part of a television (see at least, “The server, in turn, provides audio data, such as music files, audio/video files, and/or streamed radio/television transmission, to the mobile device 12 via the network,” Korhonen [0014], “Under general command of the CPU, audio output based on the audio files stored in memory 56, 52 is provided through the loudspeaker 36 and optionally video output based on the audio/video files is provided through the display 32,” Korhonen [0021]). Claim 41: Korhonen discloses the device of claim 21 wherein the equalization includes synchronizing the audio between the first wireless speaker device and the second wireless speaker device (see at least, “At step S118, the participating devices 12, 14, 16, 18, 20 are synchronized to play their respective channels. Synchronization ensures that the mobile devices all play the audio file contemporaneously (at least to the user's ear). Synchronization may take into account the standard delay of the speaker system of each device and/or any other delays occurring between a device receiving a play command and the actual start of play,” Korhonen [0033]).
Claim 45: Korhonen discloses the device of claim 21 wherein the communications between the second wireless speaker device and the first wireless speaker device utilize Wi-Fi direct protocol (see at least, “Wireless transfer may be effected through a direct wireless link,” Korhonen [0011], “The mobile device 12 includes a transceiver 64, associated with the antenna 22, which serves as an interface for short distance and long distance wireless communication and is in communication with the CPU 50. Short distance wireless communication here denotes wireless communication over a short distance, for example, about 100 meters or less, typically, about 10 meters (i.e., too short to reach the communication station 26, but long enough to reach the other participating mobile devices). Exemplary short distance wireless systems include Bluetooth, IrDa, and Wifi. In the Bluetooth interface protocol, a maximum communication speed is 1 Megabit per second, and a frequency of 2.4 GHz is used, the frequency being equal to that used in the IEEE 802.11b system wireless LAN. Other short distance wireless communication protocols, such as a UWB specification, an infrared-ray communication specification, or a wireless LAN (IEEE 802.11) specification, or the like may alternatively or additionally be provided for direct communication between the participating devices,” Korhonen [0019]). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made. Claim(s) 38, 40, and 42 is/are rejected under 35 U.S.C. 103 as being unpatentable over Korhonen in view of Backman et al. (US 2014/0362995 A1), hereinafter Backman. Claim 38: Korhonen discloses the device of claim 21 but does not disclose wherein the wireless speaker device includes a microphone array. However, Backman discloses a similar method and apparatus for location based loudspeaker system configuration. Backman further discloses wherein the wireless speaker device includes a microphone array (see at least, “(such as a speaker having active multiple microphones and required DSP capabilities [for beamforming]), and any "dummy" object can be located,” Backman [0050], “The master audio device 510 additionally may have one or more microphones 510H and in some embodiments also a camera 5101. All of these are powered by a portable power supply such as the illustrated galvanic battery,” Backman [0057]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned “speaker having active multiple microphones and required DSP capabilities [for beamforming]” as disclosed by Backman in the invention of Korhonen thereby allowing for the advantage of taking “into account the acoustics of the loudspeakers and the listening room, so further corrections may be used,” Backman [0048]. Claim 40: Korhonen discloses the device of claim 21 wherein the second wireless speaker device is part of a display device (see at least, “In general, all of the devices 12, 14, 16, 18, 20 include the software instructions for prompting other devices to play the same audio file and for playing an audio file when prompted by another mobile device to do so. Accordingly, any of the devices may serve as the master device, depending on the user's preference,” Korhonen [0013], “As illustrated in FIG. 
2, the mobile device 12 also includes a display 32, such as a liquid crystal display (LCD) screen, which serves as a graphical user interface for selection of audio data files from a list and selection of other mobile device participants in the audio system,” Korhonen [0015]) but does not disclose the one or more microphones comprise a microphone array. However, Backman discloses a similar method and apparatus for location based loudspeaker system configuration. Backman further discloses wherein the one or more microphones comprise a microphone array (see at least, “(such as a speaker having active multiple microphones and required DSP capabilities [for beamforming]), and any "dummy" object can be located,” Backman [0050], “The master audio device 510 additionally may have one or more microphones 510H and in some embodiments also a camera 5101. All of these are powered by a portable power supply such as the illustrated galvanic battery,” Backman [0057]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned “speaker having active multiple microphones and required DSP capabilities [for beamforming]” as disclosed by Backman in the invention of Korhonen thereby allowing for the advantage of taking “into account the acoustics of the loudspeakers and the listening room, so further corrections may be used,” Backman [0048]. 
Claim 42: Korhonen discloses the device of claim 21 wherein the wireless speaker device comprises a television (see at least, “The server, in turn, provides audio data, such as music files, audio/video files, and/or streamed radio/television transmission, to the mobile device 12 via the network,” Korhonen [0014], “Under general command of the CPU, audio output based on the audio files stored in memory 56, 52 is provided through the loudspeaker 36 and optionally video output based on the audio/video files is provided through the display 32,” Korhonen [0021]), and control and automation of other devices is attached to the same local network (see at least, “The mobile device 12 may communicate wirelessly with other voice communication devices, such as devices 14, 16, 18, 20, via a wireless network 24 which includes one or more communication stations 26,” Korhonen [0014]), allowing for time and event based triggering of control of the other devices (see at least, “At step S118, the participating devices 12, 14, 16, 18, 20 are synchronized to play their respective channels. Synchronization ensures that the mobile devices all play the audio file contemporaneously (at least to the user's ear). Synchronization may take into account the standard delay of the speaker system of each device and/or any other delays occurring between a device receiving a play command and the actual start of play,” Korhonen [0033]) but does not disclose a smart hub. However, Backman discloses a similar method and apparatus for location based loudspeaker system configuration. Backman further discloses a smart hub (see at least, “Also the listening location can be determined by the portable device, which can used for playback or which acts as a connection hub for the loudspeaker system,” Backman [0031]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned “hub” as disclosed by Backman in the invention of Korhonen thereby allowing for the advantage of taking “into account the acoustics of the loudspeakers and the listening room, so further corrections may be used,” Backman [0048]. Claim(s) 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Korhonen in view of Bates et al. (US 2013/0317635 A1), hereinafter Bates. Claim 36: Korhonen discloses the device of claim 21 but does not disclose wherein the processor is further configured to: perform voice command recognition on acoustic signals from the one or more microphones, to determine at least one connected speaker command; and in response to the at least one connected speaker command, select particular content from one of a plurality of content sources as streaming content for playback to both of the wireless speaker devices simultaneously. However, Bates discloses similar systems and methods for playback of audio content and further discloses wherein the processor is further configured to: perform voice command recognition on acoustic signals from the one or more microphones, to determine at least one connected speaker command; and in response to the at least one connected speaker command, select particular content from one of a plurality of content sources as streaming content for playback to both of the wireless speaker devices simultaneously (see at least, “Controller 500 is provided with a screen 502 and an input interface 514 that allows a user to interact with the controller 500, for example, to navigate a playlist of many multimedia items and to control operations of one or more zone players. 
The input interface 514 may be coupled to a microphone 516 for capturing audio signals, such as audio content or voice commands as control inputs,” Bates [0057], “to play the music in multiple listening zones simultaneously, such that the music in each listening zone may be synchronized, without audible echoes or glitches,” Bates [0017], “The system may further be configured to operate in an "audition mode" such that a user may preview tracks or songs, radio stations, and streaming content,” Bates [0020]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned voice control features of Bates in the invention of Korhonen thereby allowing people to easily, “For example, in a household,… play music out loud at parties and other social gatherings,” Bates [0017]. Claim(s) 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Korhonen in view of Mertens (US 2007/0177743 A1), hereinafter Mertens. Claim 37: Korhonen discloses the device of claim 21 but does not disclose wherein the processor is further configured to modify audio content settings based upon current video content. However, Mertens discloses, in regard to audio volume control, wherein the processor is further configured to modify audio content settings based upon current video content (see at least, “The level control unit 18 schematically illustrated in FIG. 
3 may comprise suitable processing means for processing the control signal and the channel signal so as to produce a suitable amplifier control signal for the amplifier 17,” Mertens [0053], “In a particularly advantageous embodiment, the levels depend on the channel content and/or on the channel signal characteristics,” Mertens [0057], “Other ways of determining content could be based upon the analysis of video or still images associated with the audio content, for example in the case of television,” Mertens [0059], “In the above discussion it has been assumed that the sound level adjustment of the various channels involved gain adjustment, that is, the signal of the channel is multiplied with a suitable gain factor (typically smaller than 1), resulting in the desired sound level,” Mertens [0061]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned features of Mertens in the invention of Korhonen thereby allowing for the advantage of automatically modifying audio content settings “during a commercial break in a television program, in which case so-called commercial detection by other means may help in automatically selecting the lowest of a set of user ratio preferences,” Mertens [0066]. Claim(s) 43 and 44 is/are rejected under 35 U.S.C. 103 as being unpatentable over Korhonen and Backman in view of Bates. 
Claim 43: Korhonen and Backman disclose the device of claim 42 wherein one or more settings of the smart hub are configurable (see at least, “Also the listening location can be determined by the portable device, which can used for playback or which acts as a connection hub for the loudspeaker system,” Backman [0031], “According to various exemplary embodiments of the invention, one common use case where determining the location of devices with respect to the user's location is in multi-channel sound reproduction with loudspeakers, where at least the propagation delay, and possibly also sound level and/or equalization settings are adjusted to correspond to the distance between the user and the sound sources, and the channel selection generally conforms to the physical arrangement of the loudspeakers in the listening space (for example, left and right or front and back may not be reversed),” Backman [0030]) but does not disclose by the television screen, and a remote control. However, Bates discloses similar systems and methods for playback of audio content and further discloses by the television screen, and a remote control (see at least, “For example, the zone player 400 could be constructed as part of a television, lighting, or some other device for indoor or outdoor use,” Bates [0049], “FIG. 3 illustrates an example wireless controller 300 in docking station 302. By way of illustration, controller 300 can correspond to controlling device 130 of FIG. 1. Docking station 302, if provided, may be used to charge a battery of controller 300. In some embodiments, controller 300 is provided with a touch screen 304 that allows a user to interact through touch with the controller 300, for example, to retrieve and navigate a playlist of audio items, control operations of one or more zone players, and provide overall control of the system configuration 100,” Bates [0034]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned remote control features of Bates in the invention of Korhonen and Backman thereby allowing for the advantage of remote “overall control of the system configuration,” Bates [0034]. Claim 44: Korhonen and Backman disclose the device of claim 42 wherein one or more settings of the smart hub are configurable (see at least, “Also the listening location can be determined by the portable device, which can used for playback or which acts as a connection hub for the loudspeaker system,” Backman [0031], “According to various exemplary embodiments of the invention, one common use case where determining the location of devices with respect to the user's location is in multi-channel sound reproduction with loudspeakers, where at least the propagation delay, and possibly also sound level and/or equalization settings are adjusted to correspond to the distance between the user and the sound sources, and the channel selection generally conforms to the physical arrangement of the loudspeakers in the listening space (for example, left and right or front and back may not be reversed),” Backman [0030]) but does not disclose by the television screen, or by a smart phone. However, Bates discloses similar systems and methods for playback of audio content and further discloses by the television screen, or by a smart phone (see at least, “For example, the zone player 400 could be constructed as part of a television, lighting, or some other device for indoor or outdoor use,” Bates [0049], “FIG. 3 illustrates an example wireless controller 300 in docking station 302. By way of illustration, controller 300 can correspond to controlling device 130 of FIG. 1. Docking station 302, if provided, may be used to charge a battery of controller 300. 
In some embodiments, controller 300 is provided with a touch screen 304 that allows a user to interact through touch with the controller 300, for example, to retrieve and navigate a playlist of audio items, control operations of one or more zone players, and provide overall control of the system configuration 100,” Bates [0034], “In addition, an application running on any network-enabled portable device, such as an iPhone™, iPad™, Android™ powered phone, or any other smart phone or network-enabled device can be used as controller 130,” Bates [0036]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned remote control features of Bates in the invention of Korhonen and Backman thereby allowing for the advantage of remote “overall control of the system configuration,” Bates [0034]. Allowable Subject Matter Claims 31 – 34 and 46 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH SAUNDERS whose telephone number is (571)270-1063. The examiner can normally be reached Monday-Thursday, 9:00 a.m. - 4 p.m., EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn R Edwards can be reached at (571)270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JOSEPH SAUNDERS JR/Primary Examiner, Art Unit 2692

Prosecution Timeline

Jul 30, 2024: Application Filed
Aug 09, 2025: Non-Final Rejection, §102, §103, §112
Jan 28, 2026: Response Filed
Mar 07, 2026: Final Rejection, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596883: Audio Analysis for Text Generation (2y 5m to grant; granted Apr 07, 2026)
Patent 12598420: AUDIO DEVICE WITH ELECTROSTATIC DISCHARGE PROTECTION (2y 5m to grant; granted Apr 07, 2026)
Patent 12593190: User Experience Localizing Binaural Sound During a Telephone Call (2y 5m to grant; granted Mar 31, 2026)
Patent 12585425: Light-function audio parameters (2y 5m to grant; granted Mar 24, 2026)
Patent 12585422: DATA PROCESSING METHOD OF PROCESSING MULTITRACK AUDIO DATA AND DATA PROCESSING APPARATUS (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview (+20.6%): 93%
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 740 resolved cases by this examiner. Grant probability derived from career allow rate.
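The headline projections follow directly from the career statistics reported for this examiner. A minimal sketch of the arithmetic, assuming the grant probability is simply the career allow rate (538 granted of 740 resolved) rounded to a whole percent, and the "with interview" figure adds the stated +20.6 point lift:

```python
# Hypothetical reconstruction of the dashboard's projection figures.
# Assumptions: grant probability = career allow rate, and the interview
# figure adds the reported +20.6 percentage-point lift.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

base = allow_rate(538, 740)      # ~72.7%
with_interview = base + 20.6     # add interview lift, in points

print(round(base))            # 73
print(round(with_interview))  # 93
```

Both rounded values match the figures shown above, which suggests the dashboard applies the lift additively to the raw allow rate rather than recomputing it from interview-only cases.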
