DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 30-88 have been cancelled.
Claim Objections
Claims 22-24 are not properly numbered. The limitations of claims 23 and 24 should be included in claim 22. Because claims 22-24 are incomplete, these claims have not been examined. Appropriate correction is required.
Claims 25-31 are objected to because of the following informalities: Claims 25-31 are not numbered properly. Claims 25-31 should be renumbered as 23-29. Appropriate correction is required.
Claims 30 and 31 are objected to because of the following informalities: claims 30-88 have been cancelled; accordingly, claims 30 and 31 should be renumbered. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 31 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because the broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine-readable medium and other such variations) typically covers both forms of non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer readable media, in particular when the specification is silent or open-ended (see applicant's own disclosure ¶0009: “Executable instructions for performing these functions are, optionally, included in a non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors”). See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. The USPTO recognizes that applicants may have claims directed to computer-readable media that cover signals per se. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach: a claim drawn to such a computer-readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim.
The specification or claims must be amended to limit the computer-readable storage medium to non-transitory embodiments and to state the exclusion of transitory signals (see Official Gazette Notice 1351 OG 212, dated February 23, 2010).
Claim 31 recites “A computer readable storage medium storing one or more programs …”. The claim is silent regarding “non-transitory,” although the applicant's own disclosure states that executable instructions are “optionally, included in a non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors” (¶0009). The Examiner suggests amending the claim to recite "A non-transitory computer-readable storage medium …" to overcome the rejection under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8, 10-11, 28, 30, and 31 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Voigt et al. (US Patent 9,949,021).
Regarding Claim 1, Voigt discloses a method at a wearable audio output device that includes a microphone (col. 2, lines 62-64: wearable audio device, microphones located on each side of the user’s head) and one or more input devices (col. 4, lines 42-51 discloses accelerometers located in the ears):
detecting an input via the one or more input devices (Voigt col. 4, lines 42-51 discloses accelerometers can be present for other features such as biometric sensing, user interface, or on/off head detection, can be used to detect relative movement of the user's head. Based on an assumption that the user is looking ahead most of the time, rotations to one side or the other can be interpreted as the user looking to the side. More pronounced head motions, such as tilting as well as rotating to the side, could also be used to increase confidence in the detection. Once the user's look direction is detected, all the same interactions as described above can be used);
in response to detecting the input, in accordance with a determination that the input is a first type of input, adjusting a mute state of the microphone for a first audio function that uses the microphone without adjusting the mute state of the microphone for a second audio function that uses the microphone (Voigt col. 3, lines 47-61 discloses a generalized logic flow is shown in Fig. 5. In this example, incoming audio is received 602, and the left and right microphone signals are compared 604 to determine where the user is facing when speaking. Based on which direction the user is facing, the system responds 606 in one of three ways. If the user is facing to the left, the mic output to the call [associated with the center channel] is muted 608, the call audio is ducked 610, meaning it is decreased in level or muted, and the microphone output is routed 612 to the user's VPA. If the user is facing to the center, the microphone signals are routed 614 to the first call. If the user is facing to the right, the mic output to the call is muted 616, the call audio is ducked 618, and the microphone signals are routed 620 to a second call); and
in response to detecting the input, in accordance with a determination that the input is not the first type of input, forgoing adjusting the mute state of the microphone for the first audio function that uses the microphone (Voigt col. 3, lines 53-58 discloses if the user is facing to the left, the mic output to the call [associated with the center channel] is muted 608, the call audio is ducked 610, meaning it is decreased in level or muted, and the microphone output is routed 612 to the user's VPA. If the user is facing to the center, the microphone signals are routed 614 to the first call. Refer to Fig. 5).
Claims 30 and 31 are rejected for the same reasons as set forth in Claim 1.
Regarding Claim 2, Voigt discloses the method of claim 1,
wherein the first audio function corresponds to detecting audio with one or more microphones at the wearable audio output device and enabling the detected audio to be used to provide input to a real-time communication session (Voigt col. 3, lines 57-58 discloses if the user is facing to the center, the microphone signals are routed 614 to the first call. Refer to Fig. 5).
Regarding Claim 3, Voigt discloses the method of claim 1,
wherein the second audio function corresponds to detecting audio with one or more microphones at the wearable audio output device and enabling the detected audio to be used to generate instructions for a digital assistant (Voigt col. 2, line 62 to col. 3, line 19 discloses by comparing the user's voice as received at microphones on each side, different product features may be activated based on whether the user is speaking while looking ahead or to one side or the other, or on changes to where the user is looking between utterances. For example, if the user is on a call, and speaking while looking ahead, but then turns to their side and asks a question, the VUI [Voice User Interface] can mute the outbound call audio and provide the user's side question to a virtual personal assistant [VPA], which then provides the answer to the user over a local audio channel, not audible to the other people on the call. In another example, the user may have two calls active at the same time [e.g., a conference call with a client and a side call to a colleague], and route audio to one or the other based on whether they speak straight ahead or to the side, and send commands and questions to their voice personal assistant by speaking to the other side).
Regarding Claim 4, Voigt discloses the method of claim 1,
wherein the second audio function corresponds to detecting audio with one or more microphones at the wearable audio output device and enabling the detected audio to be used to generate content (Voigt col. 2, line 62 to col. 3, line 8 discloses by comparing the user's voice as received at microphones on each side, different product features may be activated based on whether the user is speaking while looking ahead or to one side or the other, or on changes to where the user is looking between utterances. For example, if the user is on a call, and speaking while looking ahead, but then turns to their side and asks a question, the VUI [Voice User Interface] can mute the outbound call audio and provide the user's side question to a virtual personal assistant [VPA], which then provides the answer to the user over a local audio channel, not audible to the other people on the call. The VPA could take other actions as well, such as sending a file to the person at the other end of the call).
Regarding Claim 5, Voigt discloses the method of claim 1, further comprising:
in response to detecting the input, in accordance with a determination that the input is a second type of input, activating a second function of the wearable audio output device, wherein the second function is distinct from adjusting the mute state of the microphone (Voigt col. 2, line 62 to col. 3, line 8 discloses by comparing the user's voice as received at microphones on each side, different product features may be activated based on whether the user is speaking while looking ahead or to one side or the other, or on changes to where the user is looking between utterances. For example, if the user is on a call, and speaking while looking ahead, but then turns to their side and asks a question, the VUI [Voice User Interface] can mute the outbound call audio and provide the user's side question to a virtual personal assistant [VPA], which then provides the answer to the user over a local audio channel, not audible to the other people on the call. The VPA could take other actions as well, such as sending a file to the person at the other end of the call).
Regarding Claim 6, Voigt discloses the method of claim 5,
wherein the second function corresponds to changing a volume level of the wearable audio output device (Voigt Figs. 2 and 3 show volume control).
Regarding Claim 7, Voigt discloses the method of claim 5,
wherein the second function corresponds to a digital assistant (Voigt col. 2, line 62 to col. 3, line 19 discloses by comparing the user's voice as received at microphones on each side, different product features may be activated based on whether the user is speaking while looking ahead or to one side or the other, or on changes to where the user is looking between utterances. For example, if the user is on a call, and speaking while looking ahead, but then turns to their side and asks a question, the VUI [Voice User Interface] can mute the outbound call audio and provide the user's side question to a virtual personal assistant [VPA], which then provides the answer to the user over a local audio channel, not audible to the other people on the call. In another example, the user may have two calls active at the same time [e.g., a conference call with a client and a side call to a colleague], and route audio to one or the other based on whether they speak straight ahead or to the side, and send commands and questions to their voice personal assistant by speaking to the other side).
Regarding Claim 8, Voigt discloses the method of claim 5,
wherein the second function corresponds to changing an audio output mode of the wearable audio output device (Voigt col. 3, lines 58-61 discloses if the user is facing to the right, the mic output to the call is muted 616, the call audio is ducked 618, and the microphone signals are routed 620 to a second call. Refer to Fig. 5).
Regarding Claim 10, Voigt discloses the method of claim 1,
wherein the wearable audio output device is in communication with a second wearable audio output device to form a set of wearable audio output devices (Voigt Fig. 4).
Regarding Claim 11, Voigt discloses the method of claim 10,
wherein adjusting the mute state of the microphone for the first audio function comprises adjusting the mute state for both the wearable audio output device and the second wearable audio output device in the set of wearable audio output devices (Voigt col. 3, lines 53-57 discloses if the user is facing to the left, the mic output to the call [associated with the center channel] is muted 608, the call audio is ducked 610, meaning it is decreased in level or muted, and the microphone output is routed 612 to the user's VPA).
Regarding Claim 28, Voigt discloses the method of claim 1,
wherein a companion device provides information about controlling the mute state of the microphone of the wearable audio output device in conjunction with a software setup or update procedure for the companion device (Voigt col. 3, line 62 to col. 4, line 2 discloses in addition to routing the voice and taking any actions instructed when the destination is a VPA, other pre-configured actions may be taken based on the direction the user is looking, or simply as a result of a change in direction. These include, for example, muting the microphone output to the connection associated with the direction the user is no longer facing, and ducking the audio output level of a call associated with that connection).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 9, 12-21, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Voigt et al. (US Patent 9,949,021) in view of Satongar et al. (US PGPUB 2023/0095263).
Regarding Claim 9, Voigt (title, abstract, Figs. 1-5) discloses the method of claim 1, but may not explicitly disclose wherein the mute state of the microphone is adjusted in response to detecting the input based on a first input mapping; and the method further comprises: after adjusting the mute state of the microphone in response to detecting the input, obtaining a second input mapping for the wearable audio output device; after obtaining the second input mapping, detecting a second input via the one or more input devices, the second input being the first type of input; and in response to detecting the second input, adjusting an audio output state of the wearable audio output device based on the second input mapping.
However, Satongar (Figs. 3B, 3D, 5A-7B) teaches wherein the mute state of the microphone is adjusted in response to detecting the input based on a first input mapping (Satongar ¶0244 discloses a clockwise input increases volume of the media and a counterclockwise input decreases volume or mutes volume of the media; e.g., the amount of volume change corresponds to the amount of rotation of the input and the rotational direction of the input); and
the method further comprises:
after adjusting the mute state of the microphone in response to detecting the input, obtaining a second input mapping for the wearable audio output device (Satongar ¶0244 discloses with respect to Figs. 5J-5L, a clockwise swipe input causes output volume of headphones 504 to be increased);
after obtaining the second input mapping, detecting a second input via the one or more input devices, the second input being the first type of input (Satongar ¶0247 discloses the input is a press and hold input, and the audio output device case displays [Fig 6: 648] an indication of a notification via the display component); and
in response to detecting the second input, adjusting an audio output state of the wearable audio output device based on the second input mapping (Satongar ¶0247 discloses while receiving the press and hold input via the one or more input devices, the audio output device case causes [e.g., by sending one or more commands or instructions to an audio source associated with the notification] the one or more audio output devices to play an audio notification corresponding to the indication via the display component. In some embodiments, when the press and hold is released the audio notification corresponding to the indication via the display component is paused. In some embodiments, if the press and hold is received again and the notification was previously not finished with its playback, the audio notification will resume from its paused time position. Alternatively, in some embodiments, once the input has duration that meets a threshold, the notification continues to be played, even if the input is released, but is paused or stop if a subsequent predefined input [e.g., a tap] is received. For example, as discussed herein with respect to an example shown in Figs. 5AG-5AH, in response to a press and hold input 588, an audio notification is played back, as indicated by information bubble 519 in Fig. 5AH).
Voigt and Satongar are analogous art, as both pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, allowing the user to quickly perform this commonly used operation and reducing the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs needed to perform the operation (Satongar, ¶0244).
Regarding Claim 12, Voigt discloses the method of claim 10, but may not explicitly disclose further comprising: detecting a second input at the second wearable audio output device; and in response to detecting the second input, in accordance with a determination that the second input is the first type of input, activating a second function of the second wearable audio output device, wherein the second function of the second wearable audio output device is distinct from adjusting the mute state.
However, Satongar (Figs. 3B, 3D, 5A-7B) teaches detecting a second input at the second wearable audio output device; and in response to detecting the second input, in accordance with a determination that the second input is the first type of input, activating a second function of the second wearable audio output device, wherein the second function of the second wearable audio output device is distinct from adjusting the mute state (Satongar ¶0145 discloses wearable audio output device 301 conditionally outputs audio based on whether wearable audio output device 301 is in or near a user's ear [e.g., wearable audio output device 301 forgoes outputting audio when not in a user's ear, to reduce power usage]. In some embodiments where wearable audio output device 301 includes multiple [e.g., a pair] of wearable audio output components [e.g., earphones, earbuds, or earcups], each component includes one or more respective placement sensors, and wearable audio output device 301 conditionally outputs audio based on whether one or both components is in or near a user's ear. ¶0146 discloses in some embodiments, wearable audio output device 301 includes audio I/O logic 312, which determines the positioning or placement of wearable audio output device 301 relative to a user's ear based on information received from placement sensor(s) 304, and, in some embodiments, audio I/O logic 312 controls the resulting conditional outputting of audio. This claim relates to associating functions with the different input possibilities, which is considered a mere design choice).
Voigt and Satongar are analogous art, as both pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, allowing the user to quickly perform this commonly used operation and reducing the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs needed to perform the operation (Satongar, ¶0244).
Regarding Claim 13, Voigt discloses the method of claim 10, but may not explicitly disclose further comprising:
detecting a second input at the second wearable audio output device; and in response to detecting the second input, in accordance with a determination that the second input is the first type of input, adjusting a mute state of a microphone of the second wearable audio output device for the first audio function without adjusting the mute state of the microphone of the second wearable audio output device for the second audio function.
However, Satongar (Figs. 3B, 3D, 5A-7B) teaches detecting a second input at the second wearable audio output device; and in response to detecting the second input, in accordance with a determination that the second input is the first type of input, adjusting a mute state of a microphone of the second wearable audio output device for the first audio function without adjusting the mute state of the microphone of the second wearable audio output device for the second audio function (Satongar ¶0145 discloses wearable audio output device 301 conditionally outputs audio based on whether wearable audio output device 301 is in or near a user's ear [e.g., wearable audio output device 301 forgoes outputting audio when not in a user's ear, to reduce power usage]. In some embodiments where wearable audio output device 301 includes multiple [e.g., a pair] of wearable audio output components [e.g., earphones, earbuds, or earcups], each component includes one or more respective placement sensors, and wearable audio output device 301 conditionally outputs audio based on whether one or both components is in or near a user's ear. ¶0146 discloses in some embodiments, wearable audio output device 301 includes audio I/O logic 312, which determines the positioning or placement of wearable audio output device 301 relative to a user's ear based on information received from placement sensor(s) 304, and, in some embodiments, audio I/O logic 312 controls the resulting conditional outputting of audio. This claim relates to associating functions with the different input possibilities, which is considered a mere design choice).
Voigt and Satongar are analogous art, as both pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, allowing the user to quickly perform this commonly used operation and reducing the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs needed to perform the operation (Satongar, ¶0244).
Regarding Claim 14, Voigt discloses the method of claim 1, but may not explicitly disclose wherein the first type of input comprises a squeeze input.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches wherein the first type of input comprises a squeeze input (Satongar Figs. 5AD-5AF: receive squeeze; ¶0208. ¶0148 discloses the pressure-sensitive input device detects inputs from a user in response to the user squeezing the input device [e.g., by pinching the stem of wearable audio output device 301 between two fingers], Fig. 3C).
Voigt and Satongar are analogous art, as both pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, allowing the user to quickly perform this commonly used operation and reducing the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs needed to perform the operation (Satongar, ¶0244).
Regarding Claim 15, Voigt discloses the method of claim 1, but may not explicitly disclose wherein the first type of input comprises a tap input.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches wherein the first type of input comprises a tap input (Satongar Fig. 7A: 714 the input is a tap input).
Voigt and Satongar are analogous art, as both pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, allowing the user to quickly perform this commonly used operation and reducing the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs needed to perform the operation (Satongar, ¶0244).
Regarding Claim 16, Voigt discloses the method of claim 1, but may not explicitly disclose further comprising: in accordance with adjusting the mute state of the microphone, providing non-visual feedback to a user of the wearable audio output device indicating that the mute state of the microphone has been adjusted.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches in accordance with adjusting the mute state of the microphone, providing non-visual feedback to a user of the wearable audio output device (Satongar Fig. 6: 638 the audio output device case includes a haptic feedback generator) indicating that the mute state of the microphone has been adjusted (Satongar ¶0207 discloses Fig. 5AC illustrates in response to the request to switch audio sources, the headphone case outputting a haptic feedback via a haptic feedback generator integrated into the headphone case [e.g., as indicated by vibrational lines 574]. Fig. 5AC also illustrates that the current audio source has changed, as indicated by movie icon 576 displayed on the touch-sensitive display 502 of the headphone case 500. In other words, in response to the request to switch audio sources, both a visual feedback is displayed, and haptic feedback is output).
Voigt and Satongar are analogous art, as both pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, allowing the user to quickly perform this commonly used operation and reducing the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs needed to perform the operation (Satongar, ¶0244).
Regarding Claim 17, Voigt in view of Satongar discloses the method of claim 16. However, Voigt may not explicitly disclose wherein: in accordance with the first audio function corresponding to a first application, the non-visual feedback includes first audio feedback; and in accordance with the first audio function corresponding to a second application, the non-visual feedback includes second audio feedback.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches wherein: in accordance with the first audio function corresponding to a first application, the non-visual feedback includes first audio feedback (Satongar ¶0220 discloses alternatively, or in addition, audio feedback is outputted via the headphones 504 in response to audio output from the audio book application transferring to the headphones 504; Fig. 5AU); and
wherein: in accordance with the first audio function corresponding to a second application, the non-visual feedback includes second audio feedback (Satongar ¶0265 discloses for example, as shown in Fig. 5AW, in response to input 5036 which invokes a fast-forward operations, … optionally, audio feedback is also provided, or provided instead, by the audio output device case).
Voigt and Satongar are analogous art, as both pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, allowing the user to quickly perform this commonly used operation and reducing the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs needed to perform the operation (Satongar, ¶0244).
Regarding Claim 18, Voigt discloses the method of claim 1, but may not explicitly disclose wherein the wearable audio output device is communicatively coupled to an electronic device, and the method further comprises: in accordance with adjusting the mute state of the microphone, providing feedback to a user of the wearable audio output device via a display of the electronic device.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches wherein the wearable audio output device is communicatively coupled to an electronic device (Satongar Figs. 5A-5R: electronic device 500 and headphone 504 are communicatively coupled), and the method further comprises:
in accordance with adjusting the mute state of the microphone, providing feedback to a user of the wearable audio output device via a display of the electronic device (Satongar Figs. 5A-5F and 5H-5I: music icons 526-538, and Fig. 5G: microphone icon 537, are displayed on the electronic device 500).
Voigt and Satongar are analogous art as they pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, to allow the user to quickly perform this commonly used operation, and to reduce the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs required to perform the operation (Satongar, ¶0244).
Regarding Claim 19, Voigt in view of Satongar discloses the method of claim 18. But Voigt may not explicitly disclose wherein the feedback is displayed within a status region that displays information about one or more operations currently being performed by the electronic device.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches wherein the feedback is displayed within a status region that displays information about one or more operations currently being performed by the electronic device (Satongar Figs. 5A-5F and 5H-5I: music icons 526-538, and Fig. 5G: microphone icon 537, are displayed on the electronic device 500).
Voigt and Satongar are analogous art as they pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, to allow the user to quickly perform this commonly used operation, and to reduce the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs required to perform the operation (Satongar, ¶0244).
Regarding Claim 20, Voigt in view of Satongar discloses the method of claim 19. But Voigt may not explicitly disclose further comprising: detecting a second input at the electronic device; and in response to detecting the second input, adjusting a size of the status region.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches detecting a second input at the electronic device (Satongar ¶0247 discloses while receiving the press and hold input via the one or more input devices, the audio output device case causes [e.g., by sending one or more commands or instructions to an audio source associated with the notification] the one or more audio output devices to play an audio notification corresponding to the indication via the display component, Fig. 5AH); and
in response to detecting the second input, adjusting a size of the status region (Satongar Fig. 6B: displaying a plurality of controls, each at a different predefined control region of a plurality of control regions, on the audio output device case 620 [i.e., adjusting the size of the status region]. ¶0070 discloses graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other display, including components for changing the visual impact [e.g., brightness, transparency, saturation, contrast, or other visual property; adjusting the size of the status region would be an obvious design choice] of graphics that are displayed).
Voigt and Satongar are analogous art as they pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, to allow the user to quickly perform this commonly used operation, and to reduce the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs required to perform the operation (Satongar, ¶0244).
Regarding Claim 21, Voigt in view of Satongar discloses the method of claim 19. But Voigt may not explicitly disclose wherein the status region includes an element that, when selected, adjusts the mute state of the microphone.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches wherein the status region includes an element that, when selected, adjusts the mute state of the microphone (Satongar ¶0244 discloses a clockwise input increases volume of the media and a counterclockwise input decreases volume or mutes volume of the media; e.g., the amount of volume change corresponds to the amount of rotation of the input and the rotational direction of the input).
Voigt and Satongar are analogous art as they pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, to allow the user to quickly perform this commonly used operation, and to reduce the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs required to perform the operation (Satongar, ¶0244).
Regarding Claim 25, Voigt in view of Satongar discloses the method of claim 18. But Voigt may not explicitly disclose wherein the first audio function corresponds to a first application, and the feedback is displayed within a user interface of the first application.
However, Satongar (Figs. 3B-3D, 5A-7B) teaches wherein the first audio function corresponds to a first application, and the feedback is displayed within a user interface of the first application (Satongar ¶0070 discloses graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other display, including components for changing the visual impact [e.g., brightness, transparency, saturation, contrast, or other visual property] of graphics that are displayed).
Voigt and Satongar are analogous art as they pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio selection based on head movement (as taught by Voigt) to provide simple-to-use controls for controlling the audio output volume, e.g., for playback of media, to allow the user to quickly perform this commonly used operation, and to reduce the number of devices the user is required to interact with (as taught by Satongar, ¶0244), in order to provide commands to a media source for playing media, which reduces the overall number of inputs required to perform the operation (Satongar, ¶0244).
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Voigt et al. (US Patent #9949021) in view of Carrigan et al. (US #2021/0014610, hereinafter Carrigan ’610).
Regarding Claim 26, Voigt in view of Satongar discloses the method of claim 18. But Voigt may not explicitly disclose wherein providing the feedback includes: causing display of a notification user interface element at the electronic device; and ceasing to cause display of the notification user interface element after a preset time period.
However, Carrigan ’610 (title, abstract, Figs. 1-26) teaches causing display of a notification user interface element at the electronic device (Carrigan ’610 Figs. 5I-5U: 531-1 correct placement of left earbud in left ear; 531-2 change eartip on left earbud for a better fit, and replace earbud in ear; 531-4 fit test did not achieve desired thresholds -the first eartip provided best results); and
ceasing to cause display of the notification user interface element after a preset time period (Carrigan ’610 ¶0530 discloses the plurality of user interface elements is displayed [Fig. 18G: 1854] in response to receiving a prior input corresponding to activation of an output-mode affordance, wherein the output-mode affordance includes a representation of the first audio output mode without including representations of any other audio output modes of the wearable audio output device [e.g., to indicate that the first audio output mode is the current audio output mode of the wearable audio output device], and, after at least a predetermined amount of time has elapsed since detecting the first input, the computer system ceases to display the plurality of user interface elements and redisplays the output-mode affordance, where the output-mode affordance includes a representation of a respective audio output mode corresponding to a respective user interface element over which the selection indicator was displayed when the predetermined amount of time elapsed).
Voigt and Carrigan ’610 are analogous art as they pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio selection based on head movement (as taught by Voigt) to redisplay the output-mode affordance, where the output-mode affordance includes a representation of a respective audio output mode corresponding to a respective user interface element over which the selection indicator was displayed when the predetermined amount of time elapsed (as taught by Carrigan ’610, ¶0530), where the fit of the wearable audio output devices in a user's ears is adjustable and audio output control can be performed using inputs at the wearable audio output devices (Carrigan ’610, ¶0002).
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Voigt et al. (US Patent #9949021) in view of Carrigan et al. (US #2022/0019403, hereinafter Carrigan ’403).
Regarding Claim 26, Voigt in view of Satongar discloses the method of claim 18. But Voigt may not explicitly disclose wherein providing the feedback includes: causing display of a notification user interface element at the electronic device; and ceasing to cause display of the notification user interface element after a preset time period.
However, Carrigan ’403 (title, abstract, Figs. 1-10H) teaches causing display of a notification user interface element at the electronic device (Carrigan ’403 ¶0240 discloses prior to displaying the user interface, in accordance with a determination that at least one of the first wearable audio output component or the second wearable audio output component is not in a respective position relative to an ear of a user [e.g., the respective position is an in-ear, on-ear, or over-the-ear position] [e.g., the first and second wearable audio output components are not both in the respective position relative to different ears of the user], the electronic device presents a notification [e.g., displaying a visual notification via the one or more display devices or outputting an audible notification via one or more of the audio output devices] prompting the user to place the first wearable audio output component in the respective position relative to a first ear of the user and the second wearable audio output component in the respective position relative to a second ear of the user [e.g., notification 530 in Fig. 5C is displayed in accordance with a determination that, as shown in Figs. 5B-5C, earbuds 502 are not both in ears 528 of the user]); and
ceasing to cause display of the notification user interface element after a preset time period (Carrigan ’403 ¶0201 discloses the textual indication 621 ceases to be displayed after a predetermined amount of time since it was first displayed, Figs. 6F-6G. ¶0206 discloses as shown in Fig. 6N, mode control 616 in its expanded state is displayed over the portion of audio settings user interface 610 in which spatial audio toggle 618 was displayed, and spatial audio toggle 618 ceases to be displayed. ¶0240 discloses the notification ceases to be displayed automatically in response to detecting placement of both wearable audio output components in the respective position relative to the user's ears. ¶0276 discloses the electronic device automatically ceases (832) to display the indication whether the first audio output mode is enabled after occurrence of a predetermined condition [e.g., after a predetermined amount of time has elapsed since the first audio output mode was enabled, after a predetermined amount of time has elapsed since the first audio output mode was disabled, after a predetermined amount of time has elapsed since audio using the first audio output mode started to be output via one or more audio output devices in communication with the electronic device, after a predetermined amount of time has elapsed since audio using the first audio output mode ceased to be output via one or more audio output devices in communication with the electronic device, and/or after a predetermined amount of time has elapsed since the audio settings user interface was displayed]).
Voigt and Carrigan ’403 are analogous art as they pertain to controlling wearable audio systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio selection based on head movement (as taught by Voigt) to perform an operation [e.g., automatically] when a set of conditions has been met, which enhances the operability of the devices and makes the user-device interface more efficient [e.g., by helping the user to achieve an intended outcome and reducing user mistakes when operating/interacting with the devices] (as taught by Carrigan ’403, ¶0273) and which, additionally, reduces power usage and improves battery life of the devices by enabling the user to use the devices more quickly and efficiently (Carrigan ’403, ¶0273).
Claims 27 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Voigt et al. (US Patent #9949021).
Regarding Claim 27, Voigt discloses the method of claim 1, further comprising:
in accordance with a failure to adjust the mute state of the microphone, providing error feedback to a user of the wearable audio output device (It is not considered inventive since it relates to common functionality of user interfaces).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide an indication on the headphone (as taught by Voigt, col. 4, lines 16-21) by comparing the relative level of the microphone signals (Voigt, col. 4, lines 16-21) when the mute adjustment fails.
Regarding Claim 29, Voigt discloses the method of claim 26,
wherein the information about controlling the mute state of the microphone of the wearable audio output device illustrates a simulated input and a simulated adjustment to the mute state of the microphone (It is not considered inventive in view of Voigt, which associates actions with, for example, tilting or rotating the head. It would be straightforward to simulate those inputs on a visual interface in order to help the user understand the different functions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to control the mute state of the microphones (as taught by Voigt, col. 3, lines 3-9) to provide the answer to the user over a local audio channel, not audible [i.e., simulated] to the other people on the call (Voigt, col. 3, lines 3-9).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOGESHKUMAR G PATEL whose telephone number is (571)272-3957. The examiner can normally be reached 7:30 AM-4 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YOGESHKUMAR PATEL/Primary Examiner, Art Unit 2691