DETAILED ACTION
1. Applicant's amendments and remarks submitted on October 15, 2025 have been entered. Claims 1, 5-6, 9, 12-14 and 17 have been amended. Claims 1-20 remain pending in this application and stand rejected. All new grounds of rejection were necessitated by the amendments to claims 1, 9 and 17. Accordingly, this action is made final.
2. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claim Rejections - 35 USC § 103
3. Claims 1-3, 5, 8-11, 13 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Pub. No. 2013/0279724 A1 to Stafford et al. (“Stafford”) in view of US Patent Pub. No. 2021/0377648 A1 to Tome et al. (“Tome”).
As to claim 1, Stafford discloses an electronic device, comprising: at least one memory; and at least one processor coupled with the at least one memory (see figure 3A; pg. 3, ¶ 0029 - ¶ 0030) configured to cause the electronic device to: identify a first earphone of a pair of earphones connected to the electronic device, the pair of earphones including the first earphone and a second earphone (see figures 1-2A; pg. 1, ¶ 0015; pg. 3, ¶ 0030).
Stafford discloses automatic detection to convert and provide stereo or mono audio content to the earphones based on their wearing state or on whether different users are currently listening (see figures 1-2A; Abstract; pg. 2, ¶ 0018 - ¶ 0019), and further wherein the orientation detector can be implemented without sensors by using existing standard components of the headset, or in conjunction with a user input device, e.g. a microphone or by tapping the earpiece (see pg. 2, ¶ 0023 - ¶ 0024). However, it does not expressly disclose transmitting stereo audio content to the first earphone and the second earphone while monitoring for a voice command or a gesture indicating that the first earphone is used by a first user and that the second earphone is not used by the first user; converting the stereo audio content to mono audio content responsive to detecting the voice command or the gesture; and transmitting the mono audio content to the first earphone.
Tome discloses a similar system, and further discloses the earbuds configured to operate in first and second modes corresponding to stereophonic and monophonic audio output (see pgs. 2-3, ¶ 0030 - ¶ 0031; pg. 11, ¶ 0102), wherein the earbuds are configured to receive voice commands and/or gesture inputs via microphones or touch sensors while operating in the first mode corresponding to stereo output to control earbud functions (see pg. 2, ¶ 0027 - ¶ 0028; pg. 4, ¶ 0044; pg. 7, ¶ 0067 - ¶ 0069; pg. 8, ¶ 0079).
Stafford and Tome are analogous art because they are both drawn to earphone devices.
It would have been obvious before the effective filing date of the claimed invention to incorporate the use of gesture inputs and/or voice commands when in a first stereo mode, as taught by Tome, in the device as taught by Stafford. The motivation being to provide information on the use state of the earpieces via standard user input components in the headset, as already taught by Stafford, and further as user inputs such as gestures and voice commands are known in the art, and their usage can provide different input settings based on the wearing states of the earpieces (Tome pg. 2, ¶ 0028; pg. 7, ¶ 0067; pg. 8, ¶ 0079).
As to claim 2, Stafford in view of Tome further discloses wherein the at least one processor is configured to cause the electronic device to: detect that the second earphone is used by a second user and the first earphone is used by the first user; and transmit the mono audio content to the second earphone (Stafford figure 2A; pg. 2, ¶ 0019; Tome pg. 11, ¶ 0102).
As to claim 3, Stafford in view of Tome further discloses wherein the detection that the second earphone is used by the second user and the first earphone is used by the first user is based on the at least one processor causing the electronic device to: analyze a signal from an inertial measurement sensor of the first earphone and a signal from an inertial measurement sensor of the second earphone; and determine, based on the analysis, that the first earphone and the second earphone are worn by different users (Stafford pg. 5, ¶ 0046; Tome figures 1A-2; pg. 3, ¶ 0036; pg. 4, ¶ 0046; pg. 6, ¶ 0060).
As to claim 5, Stafford in view of Tome further discloses wherein the at least one processor is configured to cause the electronic device to: receive a proximity indication from a proximity sensor of the first earphone; and detect the gesture based on the proximity indication (Tome figures 6A-6B; pg. 7, ¶ 0068).
As to claim 8, Stafford in view of Tome further discloses wherein the pair of earphones include at least one of an in-ear wireless headphone, a wireless earbud, or a true wireless earbud (Stafford pg. 1, ¶ 0002; pg. 5, ¶ 0046).
As to claim 9, Stafford discloses a method, comprising: identifying a first earphone of a pair of earphones connected to an electronic device, the pair of earphones including the first earphone and a second earphone (see figures 1-2A and 3A; pg. 1, ¶ 0015; pg. 3, ¶ 0029 - ¶ 0030); converting stereo audio content to mono audio content in response to a detection that the first earphone is used by a first user and that the second earphone is not used by the first user; and transmitting the mono audio content to the first earphone (see figures 1-2A; pg. 2, ¶ 0018 - ¶ 0019).
Stafford discloses automatic detection to convert and provide stereo or mono audio content to the earphones based on their wearing state or on whether different users are currently listening (see figures 1-2A; Abstract; pg. 2, ¶ 0018 - ¶ 0019), and further wherein the orientation detector can be implemented without sensors by using existing standard components of the headset, or in conjunction with a user input device, e.g. a microphone or by tapping the earpiece (see pg. 2, ¶ 0023 - ¶ 0024). However, it does not expressly disclose converting stereo audio content to mono audio content in response to a detection of a voice command or a gesture indicating that the first earphone is used by a first user and that the second earphone is not used by the first user while transmitting the stereo audio content to the first earphone and the second earphone.
Tome discloses a similar system, and further discloses the earbuds configured to operate in first and second modes corresponding to stereophonic and monophonic audio output (see pgs. 2-3, ¶ 0030 - ¶ 0031; pg. 11, ¶ 0102), wherein the earbuds are configured to receive voice commands and/or gesture inputs via microphones or touch sensors while operating in the first mode corresponding to stereo output to control earbud functions (see pg. 2, ¶ 0027 - ¶ 0028; pg. 4, ¶ 0044; pg. 7, ¶ 0067 - ¶ 0069; pg. 8, ¶ 0079).
It would have been obvious before the effective filing date of the claimed invention to incorporate the use of gesture inputs and/or voice commands when in a first stereo mode, as taught by Tome, in the method as taught by Stafford. The motivation being to provide information on the use state of the earpieces via standard user input components in the headset, as already taught by Stafford, and further as user inputs such as gestures and voice commands are known in the art, and their usage can provide different input settings based on the wearing states of the earpieces (Tome pg. 2, ¶ 0028; pg. 7, ¶ 0067; pg. 8, ¶ 0079).
As to claim 10, Stafford in view of Tome further discloses the method further comprising: detecting that the second earphone is used by a second user and the first earphone is used by the first user; and transmitting the mono audio content to the second earphone (Stafford figure 2A; pg. 2, ¶ 0019).
As to claim 11, Stafford in view of Tome further discloses wherein the detection that the second earphone is used by the second user and the first earphone is used by the first user is based on: analyzing a signal from an inertial measurement sensor of the first earphone and a signal from an inertial measurement sensor of the second earphone; and determining, based on the analysis, that the first earphone and the second earphone are worn by different users (Stafford pg. 5, ¶ 0046; Tome figures 1A-2; pg. 3, ¶ 0036; pg. 4, ¶ 0046; pg. 6, ¶ 0060).
As to claim 13, Stafford in view of Tome further discloses the method further comprising: receiving a proximity indication from a proximity sensor of the first earphone; and detecting the gesture based on the proximity indication (Tome figures 6A-6B; pg. 7, ¶ 0068).
As to claim 16, Stafford in view of Tome further discloses wherein the pair of earphones include at least one of an in-ear wireless headphone, a wireless earbud, or a true wireless earbud (Stafford pg. 1, ¶ 0002; pg. 5, ¶ 0046).
As to claim 17, Stafford discloses a system, comprising: a communication interface to wirelessly link an electronic device to earphones that include a first earphone and a second earphone (see figure 3A; pg. 1, ¶ 0002; pg. 3, ¶ 0029 - ¶ 0030, ¶ 0034); and an audio controller configured to convert stereo audio content to mono audio content, the audio controller implemented at least partially in computer hardware (see pg. 4, ¶ 0037) to: identify the first earphone connected to the electronic device (see figures 1-2A; pg. 1, ¶ 0015; pg. 3, ¶ 0030); convert the stereo audio content to the mono audio content in response to a detection that the first earphone is used by a first user and that the second earphone is not used by the first user; and transmit the mono audio content to the first earphone (see figures 1-2A; pg. 2, ¶ 0018 - ¶ 0019).
Stafford discloses automatic detection to convert and provide stereo or mono audio content to the earphones based on their wearing state or on whether different users are currently listening (see figures 1-2A; Abstract; pg. 2, ¶ 0018 - ¶ 0019), and further wherein the orientation detector can be implemented without sensors by using existing standard components of the headset, or in conjunction with a user input device, e.g. a microphone or by tapping the earpiece (see pg. 2, ¶ 0023 - ¶ 0024). However, it does not expressly disclose converting the stereo audio content to the mono audio content in response to a detection of a voice command or a gesture indicating that the first earphone is used by a first user and that the second earphone is not used by the first user while transmitting the stereo audio content to the first earphone and the second earphone.
Tome discloses a similar system, and further discloses the earbuds configured to operate in first and second modes corresponding to stereophonic and monophonic audio output (see pgs. 2-3, ¶ 0030 - ¶ 0031; pg. 11, ¶ 0102), wherein the earbuds are configured to receive voice commands and/or gesture inputs via microphones or touch sensors while operating in the first mode corresponding to stereo output to control earbud functions (see pg. 2, ¶ 0027 - ¶ 0028; pg. 4, ¶ 0044; pg. 7, ¶ 0067 - ¶ 0069; pg. 8, ¶ 0079).
It would have been obvious before the effective filing date of the claimed invention to incorporate the use of gesture inputs and/or voice commands when in a first stereo mode, as taught by Tome, in the system as taught by Stafford. The motivation being to provide information on the use state of the earpieces via standard user input components in the headset, as already taught by Stafford, and further as user inputs such as gestures and voice commands are known in the art, and their usage can provide different input settings based on the wearing states of the earpieces (Tome pg. 2, ¶ 0028; pg. 7, ¶ 0067; pg. 8, ¶ 0079).
As to claim 18, Stafford in view of Tome further discloses wherein the audio controller causes the computer hardware to: detect that the second earphone is used by a second user and the first earphone is used by the first user; and transmit the mono audio content to the second earphone (Stafford figure 2A; pg. 2, ¶ 0019).
As to claim 19, Stafford in view of Tome further discloses wherein the detection that the second earphone is used by the second user and the first earphone is used by the first user is based on the audio controller causing the computer hardware to: analyze a signal from an inertial measurement sensor of the first earphone and a signal from an inertial measurement sensor of the second earphone; and determine, based on the analysis, that the first earphone and the second earphone are worn by different users (Stafford pg. 5, ¶ 0046; Tome figures 1A-2; pg. 3, ¶ 0036; pg. 4, ¶ 0046; pg. 6, ¶ 0060).
4. Claims 4, 6, 12, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Stafford in view of Tome, and further in view of US Patent Pub. No. 2010/0310087 A1 to Ishida.
As to claim 4, Stafford in view of Tome discloses the electronic device of claim 1.
Stafford in view of Tome does not expressly disclose wherein the at least one processor is configured to cause the electronic device to: detect that the first earphone and the second earphone are used by the first user; stop the conversion of the stereo audio content to the mono audio content; and transmit a first channel of the stereo audio content to the first earphone and a second channel of the stereo audio content to the second earphone. However such a configuration is known in the art, as taught by Ishida, which discloses a similar system, and further discloses the process including a determination of both earpieces being worn after the initial determination that only one earpiece was being worn, and once detected, switching from the synthesized monaural signal to a stereo signal that can be output through both earpieces (see figure 9; pgs. 7-8, ¶ 0128 - ¶ 0134). Stopping the conversion of stereo audio content to mono audio content when both earpieces are in use by the first user is therefore considered obvious before the effective filing date of the claimed invention. The motivation being to provide a smooth transition for the user in instances where one earpiece is dropped or misplaced and later repositioned in place, allowing the user to hear the entire audio data even if one of the earpieces is accidentally displaced (Ishida pg. 7, ¶ 0125; pg. 8, ¶ 0136).
As to claim 6, Stafford in view of Tome does not expressly disclose wherein the conversion of the stereo audio content to the mono audio content is based on the at least one processor being configured to cause the electronic device to: detect content of a first channel of the stereo audio content that is absent on a second channel of the stereo audio content; and add the content of the first channel to the second channel. However, Ishida discloses the entire audio data being presented to the user through one ear when the audio is synthesized to be heard as monaural sound (Ishida pg. 3, ¶ 0051; pg. 7, ¶ 0128). Detecting missing stereo audio content and adding it to a channel is therefore considered obvious before the effective filing date of the claimed invention, the motivation being to synthesize a monaural audio signal that includes the entire stereo audio data and enables said audio data to be heard through one ear, as taught by Ishida (Ishida pg. 3, ¶ 0051; pg. 7, ¶ 0128).
As to claim 12, Stafford in view of Tome and Ishida further discloses the method further comprising: detecting that the first earphone and the second earphone are used by the first user; stopping the conversion of the stereo audio content to the mono audio content; and transmitting a first channel of the stereo audio content to the first earphone and a second channel of the stereo audio content to the second earphone (Ishida figure 9; pgs. 7-8, ¶ 0128 - ¶ 0134).
As to claim 14, Stafford in view of Tome does not expressly disclose wherein the conversion of the stereo audio content to the mono audio content is based on: detecting content of a first channel of the stereo audio content that is absent on a second channel of the stereo audio content; and adding the content of the first channel to the second channel. However, Ishida discloses the entire audio data being presented to the user through one ear when the audio is synthesized to be heard as monaural sound (Ishida pg. 3, ¶ 0051; pg. 7, ¶ 0128). Detecting missing stereo audio content and adding it to a channel is therefore considered obvious before the effective filing date of the claimed invention, the motivation being to synthesize a monaural audio signal that includes the entire stereo audio data and enables said audio data to be heard through one ear, as taught by Ishida (Ishida pg. 3, ¶ 0051; pg. 7, ¶ 0128).
As to claim 20, Stafford in view of Tome and Ishida further discloses wherein the audio controller causes the computer hardware to: detect that the first earphone and the second earphone are used by the first user; stop the conversion of the stereo audio content to the mono audio content; and transmit a first channel of the stereo audio content to the first earphone and a second channel of the stereo audio content to the second earphone (Ishida figure 9; pgs. 7-8, ¶ 0128 - ¶ 0134).
5. Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Stafford in view of Tome, and further in view of US Patent Pub. No. 2005/0129248 A1 to Kraemer et al. (“Kraemer”).
As to claim 7, Stafford in view of Tome discloses the electronic device of claim 1.
Stafford in view of Tome does not expressly disclose wherein the at least one processor is configured to cause the electronic device to: detect destructive interference based on a first channel of the stereo audio content being combined with a second channel of the stereo audio content; and before conversion of the stereo audio content to the mono audio content, perform signal processing on at least one of the first channel or the second channel to remove the destructive interference. However, removing destructive interference before conversion to mono content is known in the art, as taught by Kraemer, which discloses a similar system for converting stereo to mono content, and further discloses that the stereo input signals can be processed or adjusted prior to mixing to prevent cancellation when producing a single monophonic output (see pg. 1, ¶ 0007; pg. 2, ¶ 0041; pg. 5, ¶ 0078; pg. 8, ¶ 0115). The proposed modification is therefore considered obvious before the effective filing date of the claimed invention, the motivation being to prevent loss of information when combining stereophonic information and to produce an enhanced monophonic output that preserves original audio fidelity (Kraemer pg. 1, ¶ 0007; pg. 2, ¶ 0041; pg. 5, ¶ 0078).
As to claim 15, Stafford in view of Tome and Kraemer further discloses the method further comprising: detecting destructive interference based on a first channel of the stereo audio content being combined with a second channel of the stereo audio content; and before conversion of the stereo audio content to the mono audio content, performing signal processing on at least one of the first channel or the second channel to remove the destructive interference (Kraemer pg. 1, ¶ 0007; pg. 2, ¶ 0041; pg. 5, ¶ 0078; pg. 8, ¶ 0115).
Response to Arguments
6. Applicant’s arguments with respect to claims 1, 9 and 17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
7. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SABRINA DIAZ whose telephone number is (571)272-1621. The examiner can normally be reached Monday-Friday 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar, can be reached at (571) 272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SABRINA DIAZ/Examiner, Art Unit 2693
/AHMAD F. MATAR/Supervisory Patent Examiner, Art Unit 2693