DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the claim amendments and remarks filed by Applicant’s representative with a Request for Continued Examination (RCE) on January 5, 2026.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's RCE submission filed on January 5, 2026 has been entered.
Response to Amendments and Remarks
With regard to Applicant’s latest claim amendments and associated remarks filed on January 5, 2026, the Office notes that Applicant’s remarks are generally directed to the newly amended claim features (amended independent claims 1, 9, and 10). The Office has fully considered the latest claim amendments and corresponding remarks, but does not find them persuasive to overcome the current rejection of the claims over the applied prior art or prior art combination, as the amended and/or argued claim feature(s) remain taught or disclosed by one or more of the applied prior art references, as previously cited in the last Office action.
With respect to amended independent claims 1, 9, and 10, and claim 1 in particular, Applicant notes that the claims have been amended to now recite:
“A method performed by a first electronic device, the method comprising:
displaying, on a display of a first electronic device, identification information of one or more candidate electronic devices;
determining at least one second electronic device, from among the one or more candidate electronic devices, based on a selection by a user;
capturing first audio data by the first electronic device;
receiving second audio data captured by the at least one second electronic device;
acquiring audio data by mixing the first audio data captured by the first electronic device and second audio data captured by the at least one second electronic device;
generating target audio data based on the acquired audio data; and
providing the target audio data to a target application”.
With respect to the independent claims, and amended independent claim 1 in particular, Applicant first remarks that Yu and/or Seo fails to teach or disclose the newly amended features of “capturing first audio data by the first electronic device; receiving second audio data captured by the at least one second electronic device; acquiring audio data by mixing the first audio data captured by the first electronic device and second audio data captured by the at least one second electronic device …”, as currently recited by the amended claim, and that the independent claims are thus distinguishable over Yu. Applicant additionally remarks that the respective dependent claims are also distinguishable over the applied prior art by virtue of their dependency upon their respective parent independent claims.
However, in response to Applicant’s amendments and corresponding remarks, the Office notes that the amended features/limitations of the independent claims remain expressly taught in view of additional disclosures and teachings by Zhang, as further cited and discussed in the ground of rejection set forth below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (hereinafter Yu), US Patent Application Publication 2023/0297324 A1 (filed July 2021), in view of Zhang et al. (hereinafter Zhang), US Patent Application Publication 2023/0049548 A1 (published February 2023).
As per claims 1, 9, and 10, Yu discloses a method performed by a first electronic device, the method comprising:
displaying, on a display of a first electronic device (Yu: e.g., Mobile Phone / Master Device_101) [0457, 0459-0460; Fig. 1], identification information of one or more candidate electronic devices (Yu: e.g., Optional {candidate} Audio Input Devices_2804) [Fig. 28(c)] (e.g., as shown in FIG. 28(c), the mobile phone may display a Device list 2804 including ‘candidate devices’ each having an audio input function, where a current audio input device of the mobile phone is also the sound box. The user may switch an audio input device of the mobile phone in the device list 2804. In this way, the user may respectively switch the audio output capability and the audio input capability of the mobile phone to corresponding devices according to a user requirement, to respectively control the audio output function and the audio input function of the mobile phone. This improves user audio experience) [0621; Fig. 28(c)];
determining at least one second electronic device, from among the one or more candidate electronic devices, based on a selection by a user (Yu: e.g., expressly discloses / illustrates in one aspect wherein the ‘Sound box’ device is ‘selected’ from among the candidate list of ‘Optional Audio Input devices’ displayed by the Mobile Phone / Master Device) [0621; Fig. 28(c)];
acquiring audio data (Yu: e.g., expressly discloses / illustrates in one aspect wherein the Mobile Phone receives ‘Audio Data 4’ captured / recorded by the Microphone of the Sound Box device) [Fig. 26] (e.g., After AudioPolicy in the Mobile phone determines the current audio recording policy, still as shown in FIG. 26, AudioPolicy may output the audio recording policy to AudioFlinger, and AudioFlinger may invoke the DMSDP audio HAL 2601 to receive the ‘audio data’ recorded by the Sound Box. For example, still as shown in FIG. 26, the Sound box may record Audio data 4 by using a hardware device such as a Microphone at the hardware layer, and output, by using an audio HAL, the recorded audio data 4 to AudioFlinger in the sound box for processing, for example, processing the audio data 4 by using the 3A algorithm. Further, AudioFlinger may output ‘processed Audio data 4’ to an audio recorder (AudioRecord) in the Sound box, AudioRecord reports the recorded Audio data 4 to a proxy app installed in the sound box, and finally the Proxy App sends the ‘recorded Audio data 4’ to the Mobile Phone. After receiving the Audio data 4 sent by the Sound box, the Mobile phone may output the Audio data 4 to AudioFlinger in the Mobile phone by using the DMSDP audio HAL 2601) [0623; Fig. 26] by mixing first audio data captured by the first electronic device and second audio data captured by the at least one second electronic device (Yu: e.g., When the second device plays the first audio data, the first device obtains to-be-played second audio data (the second audio data is of a second type), that is, to-be-played audio data includes both the first audio data and the second audio data. Further, the first device may determine, based on the second type and the device selection policy, whether the second audio data is allowed to be played by the second device. If the second audio data is allowed to be played by the second device, the first device may mix the first audio data and the second audio data, and then send mixed audio data to the second device for play) [0100; Figs. 1, 28a-c & 47] (e.g., According to an eighth aspect, this application provides an audio control method, including: After a first device establishes a network connection to a second device, the first device may obtain a mixing policy corresponding to the second device by using the second device as a slave device. Subsequently, when the first device obtains to-be-played first audio data (the first audio data is of a first type) and second audio data (the second audio data is of a second type), it indicates that a plurality of channels of audio data currently need to be simultaneously played. In this case, the first device may determine, based on the first type, the second type, and the ‘mixing policy’, whether ‘the first audio data and the second audio data need to be mixed’ … Correspondingly, if the first audio data and the second audio data need to be mixed, the first device may ‘mix’ the first audio data and the second audio data into ‘third audio data’. Further, the first device may send the ‘mixed third audio data’ to the second device for play) [0103, 0114, 0129-0130; Figs. 1, 28a-c & 47];
generating target audio data based on the acquired audio data (Yu: e.g., After receiving the audio data 4 sent by the sound box, the mobile phone may output the audio data 4 to AudioFlinger in the mobile phone by using the DMSDP audio HAL 2601. In this case, AudioFlinger may ‘process’ the Audio data_4 according to the audio recording policy that is output by AudioPolicy, so that ‘processed Audio data_4’ {target audio data} matches an audio recording capability of the sound box. Subsequently, AudioFlinger may ‘output’ the ‘Processed Audio Data_4’ to the Communication app {of the Mobile phone} by using the Audio recorder (AudioRecord). In addition, {target} ‘Audio data generated’ when the Communication app runs may also be switched to the sound box according to the audio switching method in the foregoing embodiment for play. In this way, the mobile phone may flexibly switch the audio input/output function of the mobile phone to a slave device based on an audio capability of another slave device, so as to implement a cross-device distributed audio architecture) [0623; Fig. 26]; and providing the target audio data to a target application (Yu: e.g., After receiving the audio data 4 sent by the sound box, the mobile phone may output the audio data 4 to AudioFlinger in the mobile phone by using the DMSDP audio HAL 2601. In this case, AudioFlinger may ‘process’ the Audio data_4 according to the audio recording policy that is output by AudioPolicy, so that ‘processed Audio data_4’ {target audio data} matches an audio recording capability of the sound box. Subsequently, AudioFlinger may ‘output’ the ‘Processed Audio Data_4’ to the ‘Communication app’ {Target application} by using the Audio recorder (AudioRecord). In addition, {target} ‘Audio data generated’ when the Communication app runs may also be switched to the sound box according to the audio switching method in the foregoing embodiment for play. In this way, the mobile phone may flexibly switch the audio input/output function of the mobile phone to a slave device based on an audio capability of another slave device, so as to implement a cross-device distributed audio architecture) [0623; Fig. 26].
However, while Yu discloses substantial features of the claimed invention as noted above, Yu does not expressly disclose the additionally recited features of capturing first audio data by the first electronic device; receiving second audio data captured by the at least one second electronic device; and acquiring audio data by mixing the first audio data captured by the first electronic device and the second audio data captured by the at least one second electronic device.
However, in a related endeavor, Zhang particularly discloses the additionally recited features of the method further comprising capturing first audio data by the first electronic device; receiving second audio data captured by the at least one second electronic device; and acquiring audio data by mixing the first audio data captured by the first electronic device and the second audio data captured by the at least one second electronic device (Zhang: e.g., Referring to FIGS. 7a-7b, Zhang discloses an exemplary embodiment wherein a User owns or has a 1st Terminal device {i.e., Tablet 101} coupled to a ‘wired audio output device’ {i.e., earphones} and a 2nd Terminal device {i.e., Mobile phone 102}, and the Tablet device 101 is currently outputting an ‘audio signal’ of a video service {i.e., video, music, radio} that the User is listening to {first audio data captured by the first electronic device}. With reference to Fig. 7B, Zhang discloses / teaches that Mobile phone 102 receives an ‘incoming call request’ from a Contact / person {‘George’} and outputs an ‘audio output request’ / signaling corresponding to the ‘call request’ to the Tablet device 101 over the network {second audio data captured by the second electronic device}, and the received signaling / ‘call request’ is converted into a ‘second audio signal’ / ‘voice content’ {i.e., “George is calling, and do you Answer?”} by Tablet 101, which then ‘mixes’ the second audio signal {converted second audio data from Mobile phone 102} into the audio signal of the currently output video service {first audio data / video service data of the Tablet 101}) [0031-014; Figs. 7a-b].
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify and/or combine Yu’s invention with the above-noted additional features, as expressly disclosed by Zhang, for the motivation of providing an audio output method and a terminal device that help to reduce operational complexity for the user, improve the degree of intelligence of the terminal device, and ultimately improve the user experience [Zhang: Abstract, 0003-0005; Figs. 1, 6 & 7a-b].
Claims 9 and 10 recite substantially the same limitations/features as claim 1, are distinguishable only by their statutory categories (apparatus and computer-readable storage medium, respectively), and are accordingly rejected on the same basis.
As per claim 4, Yu discloses the method further comprising: sending audio playback data of the target application to the at least one second electronic device, and controlling an audio playback apparatus of the at least one second electronic device to play the audio playback data (Yu: e.g., After receiving the Audio data 4 sent by the Sound box, the mobile phone may output the audio data 4 to AudioFlinger in the mobile phone by using the DMSDP audio HAL 2601. In this case, AudioFlinger may ‘process’ the Audio data_4 according to the audio recording policy that is output by AudioPolicy, so that ‘processed Audio data_4’ {target audio data} matches an audio recording capability of the sound box. Subsequently, AudioFlinger may ‘output’ the ‘Processed Audio Data_4’ to the ‘Communication app’ {Target application} by using the Audio recorder (AudioRecord). In addition, {target} ‘Audio data generated’ when the Communication app runs may also be ‘switched’ to the Sound box according to the ‘audio switching’ method in the foregoing embodiment for play. In this way, the Mobile phone may ‘flexibly switch’ the ‘audio input / output’ function of the Mobile phone to a ‘Slave device’ {i.e., the Sound box} based on an audio capability of another slave device, so as to implement a ‘Cross-device Distributed Audio architecture’) [0623; Fig. 26].
As per claim 7, Yu discloses the method wherein, after providing the target audio data to the target application, the data processing method further comprises: sending the target audio data external to the first electronic device via the target application, or saving the target audio data in the first electronic device via the target application (Yu: e.g., After receiving the Audio data 4 sent by the Sound box, the Mobile phone may output the Audio data 4 to AudioFlinger in the mobile phone by using the DMSDP audio HAL 2601. In this case, AudioFlinger may ‘process’ the Audio data_4 according to the audio recording policy that is output by AudioPolicy, so that ‘processed Audio data_4’ {target audio data} matches an audio recording capability of the sound box. Subsequently, AudioFlinger may ‘output’ the ‘Processed Audio Data_4’ to the ‘Communication app’ {Target application} by using the Audio recorder (AudioRecord). In addition, {target} ‘Audio data generated’ when the Communication app runs may also be ‘switched’ to the Sound box according to the ‘audio switching’ method in the foregoing embodiment for play. In this way, the Mobile phone may ‘flexibly switch’ the ‘audio input / output’ function of the Mobile phone to a ‘Slave device’ {i.e., the Sound box} based on an audio capability of another slave device, so as to implement a ‘Cross-device Distributed Audio architecture’) [0623; Fig. 26].
As per claim 8, Yu discloses the method further comprising:
establishing a communication connection with the at least one second electronic device, or establishing a communication connection with the one or more candidate electronic devices, and wherein the acquiring of the audio data captured by the at least one second electronic device comprises: acquiring, via the communication connection, the audio data captured by the at least one second electronic device (Yu: e.g., In addition, still as shown in FIG. 26, the Mobile phone may establish a network connection to the Sound box and obtain an audio capability parameter of the sound box, where the audio capability parameter may include the play parameter and the recording parameter shown in Table 2. Further, the DV app may create a DMSDP audio HAL 2601 at the HAL based on the audio capability parameter. The created DMSDP audio HAL 2601 may send audio data to the Sound box, and may also ‘receive’ audio data recorded by the Sound box) [0619; Fig. 26],
(e.g., After AudioPolicy in the Mobile phone determines the current audio recording policy, still as shown in FIG. 26, AudioPolicy may output the audio recording policy to AudioFlinger, and AudioFlinger may invoke the DMSDP audio HAL 2601 to receive the ‘audio data’ recorded by the Sound Box. For example, still as shown in FIG. 26, the Sound box may record Audio data 4 by using a hardware device such as a Microphone at the hardware layer, and output, by using an audio HAL, the recorded audio data 4 to AudioFlinger in the sound box for processing, for example, processing the audio data 4 by using the 3A algorithm. Further, AudioFlinger may output ‘processed Audio data 4’ to an audio recorder (AudioRecord) in the Sound box, AudioRecord reports the recorded Audio data 4 to a proxy app installed in the sound box, and finally the Proxy App sends the ‘recorded Audio data 4’ to the Mobile Phone. After receiving the audio data 4 sent by the sound box, the mobile phone may output the audio data 4 to AudioFlinger in the mobile phone by using the DMSDP audio HAL 2601) [0623; Fig. 26].
Claims 3, 5, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Yu in view of Zhang, and further in view of Seo et al. (hereinafter Seo), US Patent Application Publication 2017/0147282 A1 (published May 2017).
As per claim 3, Yu in view of Zhang discloses substantial features of the invention as noted above, including the recited feature of transmitting the target audio data to a data transferring hardware abstraction layer of the first electronic device (Yu: e.g., In a possible implementation, a HAL of the first device may include a ‘first hardware abstraction module’ configured to transmit audio data, for example, a Wi-Fi HAL) [0196], but does not expressly disclose the additionally recited feature of controlling the target application to read the target audio data from the data transferring hardware abstraction layer.
However, in a related endeavor, Seo particularly discloses the additionally recited feature of controlling the target application to read the target audio data from the data transferring hardware abstraction layer (Seo: e.g., Referring to FIG. 10A, the audio processing engine 220c may receive the source audio data D_SRC. For example, an audio player that is executed in the host CPU 100c is an application program and may ‘extract information’ about the location where the Source audio data D_SRC has been ‘stored’ in the memory subsystem 400, in response to a user's input received from the peripherals 300 of FIG. 1. The audio processing engine 220c may receive information about the location of the source audio data D_SRC from the host CPU 100c, and thus may ‘read’ the Source Audio data D_SRC and receive the ‘read Source Audio data D_SRC’) [0092; Fig. 10a].
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yu and Zhang with the above-noted additional feature, as expressly disclosed by Seo, for the motivation of providing an audio processing method and system that determines whether an audio processing engine can perform a first process for first audio data, based on a runtime of the first process, and either performs the first process or requests that the host CPU perform it based on the result of the determination [Seo: Abstract, 0002; Fig. 1].
As per claim 5, Yu in view of Zhang and further in view of Seo discloses the method wherein the sending of the audio playback data of the target application to the at least one second electronic device comprises:
controlling the target application to transmit the audio playback data to a data transferring hardware abstraction layer of the first electronic device (Yu: e.g., In a possible implementation, a HAL {hardware abstraction layer} of the first device may include a ‘first hardware abstraction module’ configured to transmit audio data, for example, a Wi-Fi HAL) [0196];
reading the audio playback data from the data transferring hardware abstraction layer (Seo: e.g., Referring to FIG. 10A, the audio processing engine 220c may receive the source audio data D_SRC. For example, an audio player that is executed in the host CPU 100c is an application program and may ‘extract information’ about the location where the Source audio data D_SRC has been ‘stored’ in the memory subsystem 400, in response to a user's input received from the peripherals 300 of FIG. 1. The audio processing engine 220c may receive information about the location of the source audio data D_SRC from the host CPU 100c, and thus may ‘read’ the Source Audio data D_SRC and receive the ‘read Source Audio data D_SRC’) [0092; Fig. 10a]; and
sending the read audio playback data to the at least one second electronic device (Yu: e.g., After receiving the Audio data 4 sent by the Sound box, the Mobile phone may output the Audio data 4 to AudioFlinger in the mobile phone by using the DMSDP audio HAL 2601. In this case, AudioFlinger may ‘process’ the Audio data_4 according to the audio recording policy that is output by AudioPolicy, so that ‘processed Audio data_4’ {target audio data} matches an audio recording capability of the sound box. Subsequently, AudioFlinger may ‘output’ the ‘Processed Audio Data_4’ to the ‘Communication app’ {Target application} by using the Audio recorder (AudioRecord). In addition, {target} ‘Audio data generated’ when the Communication app runs may also be ‘switched’ to the Sound box according to the ‘audio switching’ method in the foregoing embodiment for play. In this way, the Mobile phone may ‘flexibly switch’ the ‘audio input / output’ function of the Mobile phone to a ‘Slave device’ {i.e., the Sound box} based on an audio capability of another slave device, so as to implement a ‘Cross-device Distributed Audio architecture’) [0623; Fig. 26].
As per claim 6, Yu in view of Zhang and further in view of Seo, and Seo in particular, discloses the method further comprising transmitting the read audio playback data to an audio hardware abstraction layer of the first electronic device to play the audio playback data through an audio playback apparatus of the first electronic device (Seo: e.g., Referring to FIG. 10A, the audio processing engine 220c may receive the source audio data D_SRC. For example, an audio player that is executed in the host CPU 100c is an application program and may ‘extract information’ about the location where the Source audio data D_SRC has been ‘stored’ in the memory subsystem 400, in response to a user's input received from the peripherals 300 of FIG. 1. The audio processing engine 220c may receive information about the location of the source audio data D_SRC from the host CPU 100c, and thus may ‘read’ the Source Audio data D_SRC and receive the ‘read Source Audio data D_SRC’) [0092; Fig. 10a] (e.g., The request 19 for performance {output / playback} of the sound effect process 14 may be transmitted from the Audio processing engine 220c to the Host CPU 100c by using various methods. For example, as shown in FIG. 10A, the audio processing engine 220c may generate an interrupt for the host CPU 100c. Alternatively, after the host CPU 100c directs the Audio processing engine 220c to ‘play back a sound’ from the Source audio data D_SRC, the host CPU 100c may check through polling whether a request 19 for the performing of a process occurs from the Audio processing engine 220c… The request of the audio processing engine 220c, transmitted to the user space by the audio processing engine driver 41, may be processed by an audio Hardware Abstraction Layer (HAL) 31. The audio HAL 31 may be provided such that an application program (for example, a sound effect program 32) does not directly process a call and a response to the hardware of the kernel 40, that is, the audio processing engine 220c, so that the Application program (e.g., the sound effect program in this example) may be designed independently of the hardware and is efficiently designed) [0094-0097; Fig. 10a] (e.g., When the sound effect process 14 is completed by the sound effect program 32, the Sound effect program 32 may transmit information about a result obtained by performing the sound effect process 14 to the Audio processing engine (APE) driver 41 of the kernel 40 through the audio HAL 31. For example, as shown in FIG. 10A, the audio HAL 31 may transmit the information about the result of performing the sound effect process 14 to the Audio processing engine driver 41) [0098; Fig. 10a].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GLENFORD J MADAMBA, whose telephone number is (571) 272-7989. The examiner can normally be reached Monday through Friday, 9:00 am to 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry, can be reached at telephone number 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available to the public through Patent Center; status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/GLENFORD J MADAMBA/Primary Examiner, Art Unit 2451