DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 11/28/2025 with respect to claims 1-15 and 19-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al (US 2015/0172593 A1) in view of Jun et al (WO 2019/056341 A1).
Regarding claim 1, Kim et al disclose a method for audio output mode switching, performed by a mobile phone (Kim et al; Para [0094]), wherein the mobile phone is in a wireless connection with a wireless earphone (Kim et al; Para [0007]), wherein the wireless earphone is external to the mobile phone (Kim et al; Para [0013]), the method comprising: displaying audio information on the mobile phone (Kim et al; Para [0024]; display video content for playback), and outputting audio corresponding to the audio information in a first audio output mode (Kim et al; Para [0054]-[0055], [0025]; communication mode; output audio at external device; display headset), wherein the first audio output mode is a mode in which the audio is outputted through the wireless earphone (Kim et al; Para [0013], [0068]; output audio through Bluetooth headset when Bluetooth mode selected); wherein the second audio output mode is a mode in which the audio is outputted through the mobile phone (Kim et al; Para [0025]; basic mode in which audio playback occurs at the mobile device); but do not expressly disclose receiving a target input for the audio information on the wireless earphone; and switching an output mode of the audio into a second audio output mode in response to the target input. However, in the same field of endeavor, Jun et al disclose a method comprising receiving a target input for the audio information on the wireless earphone (Jun et al; Page 5, lines 20-40; click on earphone); and switching an output mode of the audio into a second audio output mode in response to the target input (Jun et al; Page 5, lines 20-40; click on earphone to trigger audio playback at smartphone). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the computing device audio playback control taught by Jun et al as audio output control in the method taught by Kim.
The motivation to do so would have been to provide a simpler device (Jun et al; Page 1, lines 15-40).
Regarding claim 6, Kim et al disclose an electronic device, wherein the electronic device is a mobile phone in a wireless connection with a wireless earphone (Kim et al; Para [0007]), wherein the wireless earphone is external to the mobile phone (Kim et al; Para [0013]), the electronic device comprising: a memory storing a computer program; and a processor coupled to the memory and configured to execute the computer program to perform operations comprising (Kim et al; Para [0043]-[0045]): displaying audio information on the mobile phone (Kim et al; Para [0024]; display video content for playback), and outputting audio corresponding to the audio information in a first audio output mode (Kim et al; Para [0054]-[0055], [0025]; communication mode; output audio at external device; display headset), wherein the first audio output mode is a mode in which the audio is outputted through the wireless earphone (Kim et al; Para [0013], [0068]; output audio through Bluetooth headset when Bluetooth mode selected); wherein the second audio output mode is a mode in which the audio is outputted through the mobile phone (Kim et al; Para [0025]; basic mode in which audio playback occurs at the mobile device), but do not expressly disclose receiving a target input on the wireless earphone; and switching an output mode of the audio into a second audio output mode in response to the target input. However, in the same field of endeavor, Jun et al disclose a method comprising receiving a target input for the audio information on the wireless earphone (Jun et al; Page 5, lines 20-40; click on earphone); and switching an output mode of the audio into a second audio output mode in response to the target input (Jun et al; Page 5, lines 20-40; click on earphone to trigger audio playback at smartphone).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the computing device audio playback control taught by Jun et al as audio output control in the method taught by Kim. The motivation to do so would have been to provide a simpler device (Jun et al; Page 1, lines 15-40).
Regarding claim 11, Kim et al disclose a non-transitory computer-readable storage medium, storing a computer program that, when executed by a processor (Kim et al; Para [0043]-[0045]) of a mobile phone connected with a wireless earphone (Kim et al; Para [0007]), causes the processor to perform operations comprising: displaying audio information on the mobile phone (Kim et al; Para [0024]; display video content for playback), and outputting audio corresponding to the audio information in a first audio output mode (Kim et al; Para [0054]-[0055], [0025]; communication mode; output audio at external device; display headset), wherein the first audio output mode is a mode in which the audio is outputted through the wireless earphone (Kim et al; Para [0013], [0068]; output audio through Bluetooth headset when Bluetooth mode selected); wherein the second audio output mode is a mode in which the audio is outputted through the mobile phone (Kim et al; Para [0025]; basic mode in which audio playback occurs at the mobile device); but do not expressly disclose receiving a target input on the wireless earphone; and switching an output mode of the audio into a second audio output mode in response to the target input. However, in the same field of endeavor, Jun et al disclose a method comprising receiving a target input for the audio information on the wireless earphone (Jun et al; Page 5, lines 20-40; click on earphone); and switching an output mode of the audio into a second audio output mode in response to the target input (Jun et al; Page 5, lines 20-40; click on earphone to trigger audio playback at smartphone). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the computing device audio playback control taught by Jun et al as audio output control in the method taught by Kim. The motivation to do so would have been to provide a simpler device (Jun et al; Page 1, lines 15-40).
Claims 2, 7, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al (US 2015/0172593 A1) in view of Jun et al (WO 2019/056341 A1) and further in view of Qian et al (US 2016/0370983 A1).
Regarding claim 2, Kim et al in view of Jun et al disclose the method according to claim 1, but do not expressly disclose wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching the output mode of the audio into the second audio output mode in response to the target input; and displaying a first target control on the mobile phone after receiving the target input; wherein the first target control is used for intuitively indicating that the output mode of the audio is being switched from the first audio output mode into the second audio output mode. However, in the same field of endeavor, Qian et al disclose a method wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching the output mode of the audio into the second audio output mode in response to the target input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; selecting audio output based on dragging the content logo from one output mode to another); and displaying a first target control on the mobile phone after receiving the target input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input); wherein the first target control is used for intuitively indicating that the output mode of the audio is being switched from the first audio output mode into the second audio output mode (Qian et al; Fig 2; Para [0042], [0063]-[0064]; audio output mode is switched based on dragging the content logo from one output mode to another). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Qian as audio output control in the method taught by Kim. The motivation to do so would have been to rapidly select or switch the output mode so as to output the target content (Qian et al; Para [0005]).
Regarding claim 7, Kim et al in view of Jun et al disclose the electronic device according to claim 6, but do not expressly disclose wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching the output mode of the audio into the second audio output mode in response to the target input; and displaying a first target control after receiving the target input on the mobile phone, wherein the first target control is used for intuitively indicating that the output mode of the audio is being switched from the first audio output mode into the second audio output mode. However, in the same field of endeavor, Qian et al disclose a device wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching the output mode of the audio into the second audio output mode in response to the target input on the mobile phone (Qian et al; Fig 2; Para [0042], [0063]-[0064]; selecting audio output based on dragging the content logo from one output mode to another); and displaying a first target control after receiving the target input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input); wherein the first target control is used for intuitively indicating that the output mode of the audio is being switched from the first audio output mode into the second audio output mode (Qian et al; Fig 2; Para [0042], [0063]-[0064]; audio output mode is switched based on dragging the content logo from one output mode to another). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Qian as audio output control in the method taught by Kim. The motivation to do so would have been to rapidly select or switch the output mode so as to output the target content (Qian et al; Para [0005]).
Regarding claim 12, Kim et al in view of Jun et al disclose the non-transitory computer-readable storage medium according to claim 11, but do not expressly disclose wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching the output mode of the audio into the second audio output mode in response to the target input; and displaying a first target control on the mobile phone after receiving the target input, wherein the first target control is used for intuitively indicating that the output mode of the audio is being switched from the first audio output mode into the second audio output mode. However, in the same field of endeavor, Qian et al disclose a device wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching the output mode of the audio into the second audio output mode in response to the target input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; selecting audio output based on dragging the content logo from one output mode to another); and displaying a first target control on the mobile phone after receiving the target input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input); wherein the first target control is used for intuitively indicating that the output mode of the audio is being switched from the first audio output mode into the second audio output mode (Qian et al; Fig 2; Para [0042], [0063]-[0064]; audio output mode is switched based on dragging the content logo from one output mode to another). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Qian as audio output control in the method taught by Kim.
The motivation to do so would have been to rapidly select or switch the output mode so as to output the target content (Qian et al; Para [0005]).
Regarding claim 20, Kim et al in view of Jun et al and further in view of Qian et al disclose the method according to claim 2, but do not expressly disclose wherein the target control comprises a first icon intuitively indicating the first audio output mode, a second icon intuitively indicating the second audio output mode, and a graphic object indicating the first audio output mode is being switched into the second audio output mode. However, in the same field of endeavor, Qian et al disclose a method wherein the target control comprises a first icon intuitively indicating the first audio output mode (Qian et al; Fig 2; Para [0042], [0063]-[0064]; speaker logo interpreted as an icon indicating an audio output mode), a second icon intuitively indicating the second audio output mode (Qian et al; Fig 2; Para [0042], [0063]-[0064]; earphone logo interpreted as an icon indicating an audio output mode), and a graphic object indicating the first audio output mode is being switched into the second audio output mode (Qian et al; Fig 2; Para [0042], [0063]-[0064]; music audio content icon interpreted as the graphic object). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Qian as audio output control in the method taught by Kim. The motivation to do so would have been to rapidly select or switch the output mode so as to output the target content (Qian et al; Para [0005]).
Claims 3-4, 8-9, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al (US 2015/0172593 A1) in view of Jun et al (WO 2019/056341 A1) and further in view of Qian et al (US 2016/0370983 A1) and further in view of Chakirov (US 2013/0286035 A1).
Regarding claim 3, Kim et al in view of Jun et al disclose the method according to claim 1, but do not expressly disclose wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching, when the target input is a first swipe input, the output mode of the audio into the second audio output mode in response to the first swipe input; and after the receiving a target input for the audio information, the method further comprises: controlling, when the target input is a second swipe input, the output mode of the audio to be the first audio output mode in response to the second swipe input, wherein a swipe trajectory of the second swipe input at least partially overlaps a swipe trajectory of the first swipe input. However, in the same field of endeavor, Qian et al disclose a method wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching, when the target input is a first swipe input, the output mode of the audio into the second audio output mode in response to the first swipe input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input; icon representing switching audio to headphone mode interpreted as the first target control); and after the receiving a target input for the audio information, the method further comprises: controlling, when the target input is a second swipe input, the output mode of the audio to be the first audio output mode in response to the second swipe input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input; icon representing switching audio to earphone mode in response to the second swipe).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Qian as audio output control in the method taught by Kim. The motivation to do so would have been to rapidly select or switch the output mode so as to output the target content (Qian et al; Para [0005]). Moreover, in the same field of endeavor, Chakirov discloses a method wherein a swipe trajectory of the second swipe input at least partially overlaps a swipe trajectory of the first swipe input (Chakirov; Para [0136]-[0138]; second swipe opposite the first swipe to reverse the action of the first swipe). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the reverse swipe taught by Chakirov as the second swipe in the method taught by Kim. The motivation to do so would have been to acquire additional information concerning the user's action (Chakirov; Para [0024]).
Regarding claim 4, Kim et al in view of Jun et al and further in view of Qian et al and further in view of Chakirov disclose the method according to claim 3, but do not expressly disclose wherein the target input is the second swipe input, and the receiving a target input on the wireless earphone comprises: receiving the second swipe input on the wireless earphone; and displaying a second target control on the mobile phone, wherein the second target control is used for indicating that the output mode of the audio is the first audio output mode. However, in the same field of endeavor, Chakirov discloses a method wherein the target input is the second swipe input, and the receiving a target input on the wireless earphone comprises: receiving the second swipe input on the wireless earphone (Chakirov; Para [0136]-[0138]; second swipe opposite the first swipe to reverse the action of the first swipe); and displaying a second target control on the mobile phone, wherein the second target control is used for indicating that the output mode of the audio is the first audio output mode (Chakirov; Para [0136]-[0138]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the reverse swipe taught by Chakirov as the second swipe in the method taught by Kim. The motivation to do so would have been to acquire additional information concerning the user's action (Chakirov; Para [0024]).
Regarding claim 8, Kim et al in view of Jun et al disclose the electronic device according to claim 6, but do not expressly disclose wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching, when the target input is a first swipe input, the output mode of the audio into the second audio output mode in response to the first swipe input; and after the receiving a target input for the audio information, the method further comprises: controlling, when the target input is a second swipe input, the output mode of the audio to be the first audio output mode in response to the second swipe input, wherein a swipe trajectory of the second swipe input at least partially overlaps a swipe trajectory of the first swipe input. However, in the same field of endeavor, Qian et al disclose a device wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching, when the target input is a first swipe input, the output mode of the audio into the second audio output mode in response to the first swipe input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input; icon representing switching audio to headphone mode interpreted as the first target control); and after the receiving a target input for the audio information, the method further comprises: controlling, when the target input is a second swipe input, the output mode of the audio to be the first audio output mode in response to the second swipe input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input; icon representing switching audio to earphone mode in response to the second swipe).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Qian as audio output control in the method taught by Kim. The motivation to do so would have been to rapidly select or switch the output mode so as to output the target content (Qian et al; Para [0005]). Moreover, in the same field of endeavor, Chakirov discloses a device wherein a swipe trajectory of the second swipe input at least partially overlaps a swipe trajectory of the first swipe input (Chakirov; Para [0136]-[0138]; second swipe opposite the first swipe to reverse the action of the first swipe). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the reverse swipe taught by Chakirov as the second swipe in the method taught by Kim. The motivation to do so would have been to acquire additional information concerning the user's action (Chakirov; Para [0024]).
Regarding claim 9, Kim et al in view of Jun et al and further in view of Qian et al and further in view of Chakirov disclose the electronic device according to claim 8, but do not expressly disclose wherein the target input is the second swipe input, and the receiving a target input on the wireless earphone comprises: receiving the second swipe input on the wireless earphone; and displaying a second target control on the mobile phone, wherein the second target control is used for indicating that the output mode of the audio is the first audio output mode. However, in the same field of endeavor, Chakirov discloses a device wherein the target input is the second swipe input, and the receiving a target input on the wireless earphone comprises: receiving the second swipe input on the wireless earphone (Chakirov; Para [0136]-[0138]; second swipe opposite the first swipe to reverse the action of the first swipe); and displaying a second target control on the mobile phone, wherein the second target control is used for indicating that the output mode of the audio is the first audio output mode (Chakirov; Para [0136]-[0138]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the reverse swipe taught by Chakirov as the second swipe in the method taught by Kim. The motivation to do so would have been to acquire additional information concerning the user's action (Chakirov; Para [0024]).
Regarding claim 13, Kim et al in view of Jun et al disclose the non-transitory computer-readable storage medium according to claim 11, but do not expressly disclose wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching, when the target input is a first swipe input, the output mode of the audio into the second audio output mode in response to the first swipe input; and after the receiving a target input for the audio information, the method further comprises: controlling, when the target input is a second swipe input, the output mode of the audio to be the first audio output mode in response to the second swipe input, wherein a swipe trajectory of the second swipe input at least partially overlaps a swipe trajectory of the first swipe input. However, in the same field of endeavor, Qian et al disclose a device wherein the switching an output mode of the audio into a second audio output mode in response to the target input comprises: switching, when the target input is a first swipe input, the output mode of the audio into the second audio output mode in response to the first swipe input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input; icon representing switching audio to headphone mode interpreted as the first target control); and after the receiving a target input for the audio information, the method further comprises: controlling, when the target input is a second swipe input, the output mode of the audio to be the first audio output mode in response to the second swipe input (Qian et al; Fig 2; Para [0042], [0063]-[0064]; content logo interpreted as the first target control receiving the dragging input; icon representing switching audio to earphone mode in response to the second swipe).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Qian as audio output control in the method taught by Kim. The motivation to do so would have been to rapidly select or switch the output mode so as to output the target content (Qian et al; Para [0005]). Moreover, in the same field of endeavor, Chakirov discloses a device wherein a swipe trajectory of the second swipe input at least partially overlaps a swipe trajectory of the first swipe input (Chakirov; Para [0136]-[0138]; second swipe opposite the first swipe to reverse the action of the first swipe). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the reverse swipe taught by Chakirov as the second swipe in the method taught by Kim. The motivation to do so would have been to acquire additional information concerning the user's action (Chakirov; Para [0024]).
Regarding claim 14, Kim et al in view of Jun et al and further in view of Qian et al and further in view of Chakirov disclose the non-transitory computer-readable storage medium according to claim 13, but do not expressly disclose wherein the target input is the second swipe input, and the receiving a target input on the wireless earphone comprises: receiving the second swipe input on the wireless earphone; and displaying a second target control on the mobile phone, wherein the second target control is used for indicating that the output mode of the audio is the first audio output mode. However, in the same field of endeavor, Chakirov discloses a method wherein the target input is the second swipe input, and the receiving a target input on the wireless earphone comprises: receiving the second swipe input on the wireless earphone (Chakirov; Para [0136]-[0138]; second swipe opposite the first swipe to reverse the action of the first swipe); and displaying a second target control on the mobile phone, wherein the second target control is used for indicating that the output mode of the audio is the first audio output mode (Chakirov; Para [0136]-[0138]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the reverse swipe taught by Chakirov as the second swipe in the method taught by Kim. The motivation to do so would have been to acquire additional information concerning the user's action (Chakirov; Para [0024]).
Claims 5, 10, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al (US 2015/0172593 A1) in view of Jun et al (WO 2019/056341 A1) and further in view of Yin et al (WO 2021/129521 A1).
Regarding claim 5, Kim et al in view of Jun et al disclose the method according to claim 1, but do not expressly disclose wherein the receiving a target input on the wireless earphone comprises: receiving a target input acting on either of a left earphone or a right earphone of the wireless earphone, wherein at least one of the left earphone or the right earphone is in an unworn state. However, in the same field of endeavor, Yin et al disclose a method wherein the receiving a target input on the wireless earphone comprises: receiving a target input acting on either of a left earphone or a right earphone of the wireless earphone, wherein at least one of the left earphone or the right earphone is in an unworn state (Yin et al; Para [0353], [0365]-[0366]; Fig 8; left or right earbud selected for audio rendering). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the left or right earphone selection taught by Yin as audio output control in the method taught by Kim. The motivation to do so would have been to make a call easily in various ways (Yin et al; Para [0004]).
Regarding claim 10, Kim et al in view of Jun et al disclose the electronic device according to claim 6, but do not expressly disclose wherein the receiving a target input on the wireless earphone comprises: receiving a target input acting on either of a left earphone or a right earphone of the wireless earphone, wherein at least one of the left earphone or the right earphone is in an unworn state. However, in the same field of endeavor, Yin et al disclose a device wherein the receiving a target input on the wireless earphone comprises: receiving a target input acting on either of a left earphone or a right earphone of the wireless earphone, wherein at least one of the left earphone or the right earphone is in an unworn state (Yin et al; Para [0353], [0365]-[0366]; Fig 8; left or right earbud selected for audio rendering). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the left or right earphone selection taught by Yin as audio output control in the method taught by Kim. The motivation to do so would have been to make a call easily in various ways (Yin et al; Para [0004]).
Regarding claim 15, Kim et al in view of Jun et al disclose the non-transitory computer-readable storage medium according to claim 11, but do not expressly disclose wherein the receiving a target input on the wireless earphone comprises: receiving a target input acting on either of a left earphone or a right earphone of the wireless earphone, wherein at least one of the left earphone or the right earphone is in an unworn state. However, in the same field of endeavor, Yin et al disclose a device wherein the receiving a target input on the wireless earphone comprises: receiving a target input acting on either of a left earphone or a right earphone of the wireless earphone, wherein at least one of the left earphone or the right earphone is in an unworn state (Yin et al; Para [0353], [0365]-[0366]; Fig 8; left or right earbud selected for audio rendering). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the left or right earphone selection taught by Yin as audio output control in the method taught by Kim. The motivation to do so would have been to make a call easily in various ways (Yin et al; Para [0004]).
Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al (US 2015/0172593 A1) in view of Jun et al (WO 2019/056341 A1) and further in view of Yin et al (WO 2021/129521 A1) and further in view of Yang et al (US 2007/0099568 A1).
Regarding claim 19, Kim et al in view of Jun et al and further in view of Yin et al disclose the method according to claim 5, but do not expressly disclose wherein the target input acting on either of a left earphone or right earphone of the wireless earphone is a double click on either of the left earphone or the right earphone. However, in the same field of endeavor, Yang et al disclose a device wherein the target input acting on either of a left earphone or right earphone of the wireless earphone is a double click on either of the left earphone or the right earphone (Yang et al; Para [0029]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio output selection taught by Yang as audio output control in the method taught by Kim. The motivation to do so would have been to make the method more convenient to use (Yang et al; Para [0007]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUASSI A GANMAVO whose telephone number is (571) 270-5761. The examiner can normally be reached M-F, 9 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn Edwards, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KUASSI A GANMAVO/Examiner, Art Unit 2692
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692