DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 13, 2025, has been entered.
Response to Arguments
Applicant’s arguments, filed November 13, 2025, with respect to claims 1, 4-9, 11, 14, 16, 18, and 20-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claim 16 is objected to because of the following informalities:
In claim 16, “the mixture value includes a selected on among multiple discrete values in the range” should read “the mixture value includes a selected one among multiple discrete values in the range”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 11, 14, 18, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Spittle (US Patent Application Publication No. 2023/0300532) in view of Ozcan (US Patent No. 9,143,875).
Regarding claim 1, Spittle discloses a non-transitory computer readable medium having stored thereon software instructions (Paragraph 0209, lines 9-12, “The main processor may include a dedicated memory for storing instructions executable by the processor to perform operations on data streams received by the processor.”) that, when executed by a processor, cause the processor to execute steps for monitoring an audio system (Paragraph 0209, lines 9-12, “The main processor may include a dedicated memory for storing instructions executable by the processor to perform operations on data streams received by the processor.”), the steps comprising:
operating the audio system (Paragraph 0009, lines 1-5, “Disclosed herein is an audio system that can be customized by the user (including the developer) and that may allow the user to control more than just the sound levels and may allow the user to select more than one of a few pre-determined settings.”);
determining a sound suppression mode including obtaining a mixture value in a range from a first value for a first state to a second value for a second state (Paragraph 0718, lines 1-8, "The binaural noise cancellation processing will use the microphone signal and filter data at both ears to determine the desired noise reduction processing to be applied. This is shown in FIG. 48D. This allows the noise reduction for the left ear to be informed about the noise reduction applied to the right ear and vice versa. This information is used to set the corresponding noise reduction levels at each ear to create a balanced and more natural residual noise."; Paragraph 0011, lines 47-51, "Additionally or alternatively, in some embodiments, the plurality of processors further comprises: a mixer for mixing at least two data streams, the mixing comprising applying different gains to the at least two data streams."; Noise reduction reads on sound suppression, and mixing two data streams by applying different gains to the two data streams reads on a mixture value having a value in a range from a first value for a first state to a second value for a second state.),
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content (Paragraph 0480, lines 17-24, "If the user requires the second audio stream to be amplified as well, a second compressor is applied at the input to the mixer for the second audio stream B, creating now two independent gain control stages feeding the output that is still at “passthrough” gain of 1 or unity or no gain. The user can create a new separate output gain control stage, in some embodiments."; Paragraph 0513, lines 6-16, "The mixer can be used for data streams that use very low latency processing with fast transitions from input to output. This can be useful for applications that are providing noise cancellation signals for the wearer of a device. Some sounds may be cancelled, such as ambient sounds that the user is exposed to. An ambient processor 3102 may be used to cancel or reduce these sounds. Some sounds may pass through the ear piece speakers without being cancelled, such as music playback streams or phone call streams. For example, a digital mixer 3104 may be used to mix signals 3106A and 3106B to generate signal 3108."; A gain of 1 or unity reads on having substantially all of a first content, and cancelling sounds, such as ambient sounds, reads on having substantially nil amount of a second content.),
the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content (Paragraph 0480, lines 17-24, "If the user requires the second audio stream to be amplified as well, a second compressor is applied at the input to the mixer for the second audio stream B, creating now two independent gain control stages feeding the output that is still at “passthrough” gain of 1 or unity or no gain. The user can create a new separate output gain control stage, in some embodiments."; Paragraph 0513, lines 6-16, "The mixer can be used for data streams that use very low latency processing with fast transitions from input to output. This can be useful for applications that are providing noise cancellation signals for the wearer of a device. Some sounds may be cancelled, such as ambient sounds that the user is exposed to. An ambient processor 3102 may be used to cancel or reduce these sounds. Some sounds may pass through the ear piece speakers without being cancelled, such as music playback streams or phone call streams. For example, a digital mixer 3104 may be used to mix signals 3106A and 3106B to generate signal 3108."; Paragraph 0508, lines 9-14, "The mixer could allow each user to have independent control of the balance of sounds while listening. Person A may want the audio to be louder than person B. Person A may want to hear more ambient sound than person B. Person A may have a different hearing profile to person B."; A gain of 1 or unity reads on having substantially all of a first content, and cancelling sounds, such as ambient sounds, reads on having substantially nil amount of a second content, where having independent control of the balance of sounds demonstrates that the amounts of the first content and the second content can be reversed.),
the mixture value including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents (Paragraph 0459, lines 1-14, "As shown in FIG. 15, the system provides control of the gain of an audio stream. This can be through a standard user interface, such as buttons, sliders, GUI etc. This can also be controlled through other signal processors in the system that are analyzing audio streams and then determine the gain to be applied for example. The gain can be applied in software as a DSP instruction or similar operation on a processor. The gain can be applied as a hardware macro on a silicon chip that is specifically designed as a mixer component. Two or more streams may have independent gains. For example, the system may apply a first gain A to a first signal A at 1402 and a second gain B to a second signal B at 1404, as shown in the figure. The streams are summed or mixed together at adder 1406."; Paragraph 0480, lines 17-24, "If the user requires the second audio stream to be amplified as well, a second compressor is applied at the input to the mixer for the second audio stream B, creating now two independent gain control stages feeding the output that is still at “passthrough” gain of 1 or unity or no gain. The user can create a new separate output gain control stage, in some embodiments."; Two or more streams having independent gains, where the gain can be a passthrough gain of 1 or unity, reads on sound having unprocessed first and second contents.),
the obtaining of the mixture value including obtaining an input through a [sliding scale] graphic user interface on a portable device in wireless communication with a sound-generating device (Paragraph 0187, lines 1-24, "The hardware development tools 400 may transmit audio data to and receive audio data from other sources. FIG. 3C illustrates a block diagram of exemplary electronic devices communicating with the hardware development tools, according to embodiments of the disclosure. The hardware development tools 400 are capable of communicating with one or more electronic devices 406, such as smartphone 406A, smartwatch 406B, tablet 406C, personal computer, or the like. The electronic device 406 can be programmed to allow a user to transmit and/or receive data, including audio data, control data, and user information data, using a user interface (UI) (e.g., a graphical user interface (GUI)) presented on the electronic devices 406A-406C. For example, the user may use the UI to send control signals (e.g., including parameter information) to the electronic device 406. Exemplary information displayed by the UI includes, but is not limited to, the noise level, how long the user has been wearing the ear pieces, the battery life of the audio system, biometric information, etc. In some embodiments, the UI may be developed as a plugin for an electronic device 406 using the software development tools 200. In some embodiments, the plugin may include a corresponding processing plugin that is resident on the ear pieces that is being controlled."; An electronic device programmed to allow a user to transmit control data, where the electronic device can be a smartphone, reads on obtaining an input through a portable device in communication with a device that generates the sound.);
sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode (Paragraph 0730, lines 1-12, "Signal processing instructions may be used to allow for low latency look back operations. If the history audio data is not available, a shorter time window may be used until a pre-determined number of audio data samples are available. In some embodiments, the signal processing instructions may be filled with zero audio data (representative of silence) while the history is being filled. A processed output sample may be generated for every new input sample. The lookback buffers may be used for each input audio stream, which can be the full band microphone signals or the output of a filter bank. Each input signal is processed using single sample processing."; A processed output sample generated for every input sample reads on sampling information representative of an input signal and an output signal.);
generating a control output signal based on the selected mixture value, and processing the input signal based on the control output signal to generate the output signal representative of a sound having the first content and/or the second content according to the selected mixture value (Paragraph 0506, lines 1-10, "The user control of multiple streams of audio unmixed from ambient sound is shown in FIG. 31B. This shows that the user can individually and independently control the levels 3101A, 3101B, 3101C, and 3101D using independent control signals 3103A, 3103B, 3103C, 3103D, and 3103E. The target sounds, background sounds, ambient sounds, and noise sounds can be mixed. This may adjust the balance of ambient sounds that the user is hearing. This can be extended to more sound categories based on user preference and the specific scenario."; User control of multiple streams reads on generating a control output signal based on the selected mixture value, and the target sounds, background sounds, ambient sounds, and noise sounds being mixed reads on generating the output signal representative of a sound having the first content and/or the second content according to the selected mixture value.);
and providing a display representative of the sampled information during the operation of the audio system sound-generating device and the portable device (Paragraph 0470, lines 14-28, "The graphical interface could include a display level controller that has more functions than a simple volume control slider or fader. The slider has a target level. There is also a range marked around the slider where level of the signal is kept between. The user can drag the window up and down to set the target level. The upper and lower markers may move with the window, showing the compression input level range. The user can use multiple fingers to squeeze or expand the level range that sits around the target level. The user can independently set the upper and lower limits of the compression by dragging the individual markers up and down. The slider and threshold markers could be overlaid on an active level display, such as a PPM, that is providing an indication of the actual signal level of that audio stream at the device."; Paragraph 0679, lines 6-11, "The graphic may display time domain signals for the left ear and right ear, along with an amplitude spectrogram and phase information for each ear. In some embodiments, the graphic may also provide information such as the change in amplitude and phase for one more sound signals."; A graphical interface displaying time domain signals for the left ear and right ear, an amplitude spectrogram and phase information, and change in amplitude and phase for one more sound signals reads on providing a display representative of the sampled information during the operation of the audio system.).
Spittle does not specifically disclose: the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface.
Ozcan teaches:
the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface (Column 26, line 46 - Column 27, line 8, "In some circumstances, it may be desirable to provide a simple and intuitive manner in which a user may vary magnitude of ambient audio information, magnitude of speech audio information, and/or the like, which is provided by ambient sound processed audio information. In at least one example embodiment, the apparatus provides a slider interface element that allows a user to set and/or modify an ambient sound directive. In at least one example embodiment, the apparatus may cause display of a slider interface element associated with the ambient sound directive. In such an example, the apparatus may receive an indication of an input indicative of the magnitude. For example, the input indicative of the magnitude may relate to an input indicative of a position on the slider interface element. In such an example, the position may be indicative of the magnitude. In at least one example embodiment, the slider interface element comprises a slider endpoint associated with speech audio information and a slider endpoint associated with ambient audio information. In such an example, in circumstances where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, such that the position on the slider interface element indicates the proportion. For example, the position on the slider interface element may indicate the proportion such that the proportion relates to a factor indicative of a distance from the position to at least one slider endpoint."; A slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, reads on obtaining the mixture value through a sliding scale graphic user interface.).
Ozcan is considered to be analogous to the claimed invention because it is in the same field of endeavor of sound suppression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spittle to incorporate the teachings of Ozcan to implement a slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information. Doing so would allow for limiting ambient sound in processed audio information (Ozcan; Column 12, lines 20-36).
Regarding claim 4, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 1.
Spittle further discloses:
wherein the first content includes an ambient noise content, and the second content includes a speech content (Paragraph 0506, lines 1-10, "The user control of multiple streams of audio unmixed from ambient sound is shown in FIG. 31B. This shows that the user can individually and independently control the levels 3101A, 3101B, 3101C, and 3101D using independent control signals 3103A, 3103B, 3103C, 3103D, and 3103E. The target sounds, background sounds, ambient sounds, and noise sounds can be mixed. This may adjust the balance of ambient sounds that the user is hearing. This can be extended to more sound categories based on user preference and the specific scenario."; Paragraph 0637, lines 8-23, "The user may decide that some of the ambient sounds are classified as “background” sounds that they want to be aware of. Those sounds may inform the level control of the received speech information they are listening to from the communications network. Another extension is to use spatial rendering of the ambient sounds and the received speech stream from the communications network. This will be informed with head tracking information such that the received speech can be anchored to a location in the virtual space that makes sense for the listener, for example, away from other noise sources in the room. This would enable the listener to classify the received speech audio as “target” sounds. Further to this, ambient sound control processing may be used to eliminate sounds from the user’s experience, for example, sounds that are classified as “noise”."; Multiple streams of audio including target sounds, background sounds, ambient sounds, and noise sounds reads on the first content includes an ambient noise content, and target sounds, where the target sounds can be received speech audio, reads on the second content including a speech content.).
Regarding claim 11, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 1.
Spittle further discloses:
wherein the sound-generating device is a headphone (Paragraph 0298, lines 1-3, "Embodiments may include changing the frequency content within each band for a headphone with multiple speaker drivers and outputs using a cross-over network.").
Regarding claim 14, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 1.
Spittle further discloses:
wherein the portable device is a smartphone (Paragraph 0187, lines 1-9, "The hardware development tools 400 may transmit audio data to and receive audio data from other sources. FIG. 3C illustrates a block diagram of exemplary electronic devices communicating with the hardware development tools, according to embodiments of the disclosure. The hardware development tools 400 are capable of communicating with one or more electronic devices 406, such as smartphone 406A, smartwatch 406B, tablet 406C, personal computer, or the like.").
Regarding claim 18, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 1.
Spittle further discloses:
wherein some or all of the determining of the sound suppression mode, sampling of the information, and providing of the display is/are performed by a portable device (Paragraph 0187, lines 1-24, "The hardware development tools 400 may transmit audio data to and receive audio data from other sources. FIG. 3C illustrates a block diagram of exemplary electronic devices communicating with the hardware development tools, according to embodiments of the disclosure. The hardware development tools 400 are capable of communicating with one or more electronic devices 406, such as smartphone 406A, smartwatch 406B, tablet 406C, personal computer, or the like. The electronic device 406 can be programmed to allow a user to transmit and/or receive data, including audio data, control data, and user information data, using a user interface (UI) (e.g., a graphical user interface (GUI)) presented on the electronic devices 406A-406C. For example, the user may use the UI to send control signals (e.g., including parameter information) to the electronic device 406. Exemplary information displayed by the UI includes, but is not limited to, the noise level, how long the user has been wearing the ear pieces, the battery life of the audio system, biometric information, etc. In some embodiments, the UI may be developed as a plugin for an electronic device 406 using the software development tools 200. In some embodiments, the plugin may include a corresponding processing plugin that is resident on the ear pieces that is being controlled.”; An electronic device programmed to allow a user to transmit control data, where the electronic device can be a smartphone, reads on the determining of the sound suppression mode being performed by a portable device, and information displayed by the user interface on the electronic device reads on providing the display being performed by a portable device.).
Regarding claim 20, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 18.
Spittle further discloses:
wherein the portable device includes an application having a graphic user interface that provides the display (Paragraph 0187, lines 1-24, "The hardware development tools 400 may transmit audio data to and receive audio data from other sources. FIG. 3C illustrates a block diagram of exemplary electronic devices communicating with the hardware development tools, according to embodiments of the disclosure. The hardware development tools 400 are capable of communicating with one or more electronic devices 406, such as smartphone 406A, smartwatch 406B, tablet 406C, personal computer, or the like. The electronic device 406 can be programmed to allow a user to transmit and/or receive data, including audio data, control data, and user information data, using a user interface (UI) (e.g., a graphical user interface (GUI)) presented on the electronic devices 406A-406C. For example, the user may use the UI to send control signals (e.g., including parameter information) to the electronic device 406. Exemplary information displayed by the UI includes, but is not limited to, the noise level, how long the user has been wearing the ear pieces, the battery life of the audio system, biometric information, etc. In some embodiments, the UI may be developed as a plugin for an electronic device 406 using the software development tools 200. In some embodiments, the plugin may include a corresponding processing plugin that is resident on the ear pieces that is being controlled."; An electronic device programmed to allow a user to transmit and receive data, including audio data, control data, and user information data using a graphical user interface, where the electronic device can be a smartphone, reads on the portable device including an application having a graphic user interface that provides the display.).
Regarding claim 21, Spittle discloses a system comprising:
an audio device including a speaker for providing an output sound to a user (Paragraph 0165, lines 1-6, "FIG. 1A illustrates an exemplary audio system in which embodiments of the disclosure can be implemented. Audio system 100 can include a left ear piece 100A and a right ear piece 100B. Each ear piece may include one or more speakers and one or more microphones 102 located within housing 104."),
and an audio processor configured to generate the output sound based on an audio signal (Paragraph 0166, lines 1-8, "The audio system 100 may include a monaural, a dual monaural, or a binaural device. A monaural device may include a single ear piece programmed to output sounds to a single ear. For example, as shown in FIG. 1B, ear piece 100A may be programmed to output sounds to ear 101A. In some embodiments, ear piece 100A may not be programmed to output sounds to ear 101B. The monaural device may also include a processor 103.");
and a portable device configured to communicate with the audio device (Paragraph 0187, lines 1-9, "The hardware development tools 400 may transmit audio data to and receive audio data from other sources. FIG. 3C illustrates a block diagram of exemplary electronic devices communicating with the hardware development tools, according to embodiments of the disclosure. The hardware development tools 400 are capable of communicating with one or more electronic devices 406, such as smartphone 406A, smartwatch 406B, tablet 406C, personal computer, or the like."),
the portable device including an application that allows the user to monitor the operation of the audio processor (Paragraph 0187, lines 9-24, "The electronic device 406 can be programmed to allow a user to transmit and/or receive data, including audio data, control data, and user information data, using a user interface (UI) (e.g., a graphical user interface (GUI)) presented on the electronic devices 406A-406C. For example, the user may use the UI to send control signals (e.g., including parameter information) to the electronic device 406. Exemplary information displayed by the UI includes, but is not limited to, the noise level, how long the user has been wearing the ear pieces, the battery life of the audio system, biometric information, etc. In some embodiments, the UI may be developed as a plugin for an electronic device 406 using the software development tools 200. In some embodiments, the plugin may include a corresponding processing plugin that is resident on the ear pieces that is being controlled."),
the portable device including a non-transitory computer readable medium having stored thereon software instructions (Paragraph 0209, lines 9-12, “The main processor may include a dedicated memory for storing instructions executable by the processor to perform operations on data streams received by the processor.”) that, when executed by a processor, cause the processor to execute steps including:
determining a sound suppression mode being implemented in the audio processor (Paragraph 0718, lines 1-8, "The binaural noise cancellation processing will use the microphone signal and filter data at both ears to determine the desired noise reduction processing to be applied. This is shown in FIG. 48D. This allows the noise reduction for the left ear to be informed about the noise reduction applied to the right ear and vice versa. This information is used to set the corresponding noise reduction levels at each ear to create a balanced and more natural residual noise."; Noise reduction reads on sound suppression.),
the determining of the sound suppression mode including obtaining a mixture value in a range from a first value for a first state to a second value for a second state (Paragraph 0011, lines 47-51, "Additionally or alternatively, in some embodiments, the plurality of processors further comprises: a mixer for mixing at least two data streams, the mixing comprising applying different gains to the at least two data streams."; Mixing two data streams by applying different gains to the two data streams reads on a mixture value having a value in a range from a first value for a first state to a second value for a second state.),
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content (Paragraph 0480, lines 17-24, "If the user requires the second audio stream to be amplified as well, a second compressor is applied at the input to the mixer for the second audio stream B, creating now two independent gain control stages feeding the output that is still at “passthrough” gain of 1 or unity or no gain. The user can create a new separate output gain control stage, in some embodiments."; Paragraph 0513, lines 6-16, "The mixer can be used for data streams that use very low latency processing with fast transitions from input to output. This can be useful for applications that are providing noise cancellation signals for the wearer of a device. Some sounds may be cancelled, such as ambient sounds that the user is exposed to. An ambient processor 3102 may be used to cancel or reduce these sounds. Some sounds may pass through the ear piece speakers without being cancelled, such as music playback streams or phone call streams. For example, a digital mixer 3104 may be used to mix signals 3106A and 3106B to generate signal 3108."; A gain of 1 or unity reads on having substantially all of a first content, and cancelling sounds, such as ambient sounds, reads on having substantially nil amount of a second content.),
the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content (Paragraph 0480, lines 17-24, "If the user requires the second audio stream to be amplified as well, a second compressor is applied at the input to the mixer for the second audio stream B, creating now two independent gain control stages feeding the output that is still at “passthrough” gain of 1 or unity or no gain. The user can create a new separate output gain control stage, in some embodiments."; Paragraph 0513, lines 6-16, "The mixer can be used for data streams that use very low latency processing with fast transitions from input to output. This can be useful for applications that are providing noise cancellation signals for the wearer of a device. Some sounds may be cancelled, such as ambient sounds that the user is exposed to. An ambient processor 3102 may be used to cancel or reduce these sounds. Some sounds may pass through the ear piece speakers without being cancelled, such as music playback streams or phone call streams. For example, a digital mixer 3104 may be used to mix signals 3106A and 3106B to generate signal 3108."; Paragraph 0508, lines 9-14, "The mixer could allow each user to have independent control of the balance of sounds while listening. Person A may want the audio to be louder than person B. Person A may want to hear more ambient sound than person B. Person A may have a different hearing profile to person B."; A gain of 1 or unity reads on having substantially all of a first content, and cancelling sounds, such as ambient sounds, reads on having substantially nil amount of a second content, where having independent control of the balance of sounds demonstrates that the amounts of the first content and the second content can be reversed.),
the mixture value including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents (Paragraph 0459, lines 1-14, "As shown in FIG. 15, the system provides control of the gain of an audio stream. This can be through a standard user interface, such as buttons, sliders, GUI etc. This can also be controlled through other signal processors in the system that are analyzing audio streams and then determine the gain to be applied for example. The gain can be applied in software as a DSP instruction or similar operation on a processor. The gain can be applied as a hardware macro on a silicon chip that is specifically designed as a mixer component. Two or more streams may have independent gains. For example, the system may apply a first gain A to a first signal A at 1402 and a second gain B to a second signal B at 1404, as shown in the figure. The streams are summed or mixed together at adder 1406."; Paragraph 0480, lines 17-24, "If the user requires the second audio stream to be amplified as well, a second compressor is applied at the input to the mixer for the second audio stream B, creating now two independent gain control stages feeding the output that is still at “passthrough” gain of 1 or unity or no gain. The user can create a new separate output gain control stage, in some embodiments."; Two or more streams having independent gains, where the gain can be a passthrough gain of 1 or unity, reads on sound having unprocessed first and second contents.),
the obtaining of the mixture value including obtaining an input through a [sliding scale] graphic user interface on a portable device in wireless communication with a sound-generating device (Paragraph 0187, lines 1-24, "The hardware development tools 400 may transmit audio data to and receive audio data from other sources. FIG. 3C illustrates a block diagram of exemplary electronic devices communicating with the hardware development tools, according to embodiments of the disclosure. The hardware development tools 400 are capable of communicating with one or more electronic devices 406, such as smartphone 406A, smartwatch 406B, tablet 406C, personal computer, or the like. The electronic device 406 can be programmed to allow a user to transmit and/or receive data, including audio data, control data, and user information data, using a user interface (UI) (e.g., a graphical user interface (GUI)) presented on the electronic devices 406A-406C. For example, the user may use the UI to send control signals (e.g., including parameter information) to the electronic device 406. Exemplary information displayed by the UI includes, but is not limited to, the noise level, how long the user has been wearing the ear pieces, the battery life of the audio system, biometric information, etc. In some embodiments, the UI may be developed as a plugin for an electronic device 406 using the software development tools 200. In some embodiments, the plugin may include a corresponding processing plugin that is resident on the ear pieces that is being controlled."; An electronic device programmed to allow a user to transmit control data, where the electronic device can be a smartphone, reads on obtaining an input through a portable device in communication with a device that generates the sound.);
sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode (Paragraph 0730, lines 1-12, "Signal processing instructions may be used to allow for low latency look back operations. If the history audio data is not available, a shorter time window may be used until a pre-determined number of audio data samples are available. In some embodiments, the signal processing instructions may be filled with zero audio data (representative of silence) while the history is being filled. A processed output sample may be generated for every new input sample. The lookback buffers may be used for each input audio stream, which can be the full band microphone signals or the output of a filter bank. Each input signal is processed using single sample processing."; A processed output sample generated for every input sample reads on sampling information representative of an input signal and an output signal.),
generating a control output signal based on the selected mixture value, and processing the input signal based on the control output signal to generate the output signal representative of a sound having the first content and/or the second content according to the selected mixture value (Paragraph 0506, lines 1-10, "The user control of multiple streams of audio unmixed from ambient sound is shown in FIG. 31B. This shows that the user can individually and independently control the levels 3101A, 3101B, 3101C, and 3101D using independent control signals 3103A, 3103B, 3103C, 3103D, and 3103E. The target sounds, background sounds, ambient sounds, and noise sounds can be mixed. This may adjust the balance of ambient sounds that the user is hearing. This can be extended to more sound categories based on user preference and the specific scenario."; User control of multiple streams reads on generating a control output signal based on the selected mixture value, and the target sounds, background sounds, ambient sounds, and noise sounds being mixed reads on generating the output signal representative of a sound having the first content and/or the second content according to the selected mixture value.);
and providing a display representative of the sampled information during the operation of the audio processor (Paragraph 0470, lines 14-28, "The graphical interface could include a display level controller that has more functions than a simple volume control slider or fader. The slider has a target level. There is also a range marked around the slider where level of the signal is kept between. The user can drag the window up and down to set the target level. The upper and lower markers may move with the window, showing the compression input level range. The user can use multiple fingers to squeeze or expand the level range that sits around the target level. The user can independently set the upper and lower limits of the compression by dragging the individual markers up and down. The slider and threshold markers could be overlaid on an active level display, such as a PPM, that is providing an indication of the actual signal level of that audio stream at the device."; Paragraph 0679, lines 6-11, "The graphic may display time domain signals for the left ear and right ear, along with an amplitude spectrogram and phase information for each ear. In some embodiments, the graphic may also provide information such as the change in amplitude and phase for one more sound signals."; A graphical interface displaying time domain signals for the left ear and right ear, an amplitude spectrogram and phase information, and change in amplitude and phase for one more sound signals reads on providing a display representative of the sampled information during the operation of the audio processor.).
Spittle does not specifically disclose: the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface.
Ozcan teaches:
the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface (Column 26, line 46 - Column 27, line 8, "In some circumstances, if may be desirable to provide a simple and intuitive manner in which a user may vary magnitude of ambient audio information, magnitude of speech audio information, and/or the like, which is provided by ambient sound processed audio information. In at least one example embodiment, the apparatus provides a slider interface element that allows a user to set and/or modify an ambient sound directive. In at least one example embodiment, the apparatus may cause display of a slider interface element associated with the ambient sound directive. In such an example, the apparatus may receive an indication of an input indicative of the magnitude. For example, the input indicative of the magnitude may relate to an input indicative of a position on the slider interface element. In such an example, the position may be indicative of the magnitude. In at least one example embodiment, the slider interface element comprises a slider endpoint associated with speech audio information and a slider endpoint associated with ambient audio information. In such an example, in circumstances where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, such that the position on the slider interface element indicates the proportion. 
For example, the position on the slider interface element may indicate the proportion such that the proportion relates to a factor indicative of a distance from the position to at least one slider endpoint."; A slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, reads on obtaining the mixture value through a sliding scale graphic user interface.).
Ozcan is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spittle to incorporate the teachings of Ozcan to implement a slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information. Doing so would allow for limiting ambient sound in processed audio information (Ozcan; Column 12, lines 20-36).
Claims 5 – 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Spittle in view of Ozcan, and further in view of Hamalainen (US Patent No. 11,676,568).
Regarding claim 5, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 1, but does not specifically disclose: wherein the range is selected such that the first value is -Mlimit and the second value is +Mlimit.
Hamalainen teaches:
wherein the range is selected such that the first value is -Mlimit and the second value is +Mlimit (Column 3, lines 59-67, "The apparatus, processor and/or memory may be configured to receive a primary audio signal from a primary audio source. The apparatus may be configured to combine the primary audio signal with the altered background audio signal/noise cancellation signal to produce a combined audio signal. Accordingly, there is provided an apparatus (e.g. an audio headset) with user-controlled augmented reality audio (ARA) and active noise cancellation (ANC) functionalities."; Column 12, lines 9-20, "Each section 632, 633 comprises a slider 634 for varying the audio signal. Each slider can be independently moved between three main settings (+1, 0, and -1). The “+1” setting makes the headset acoustically transparent by turning the ARA functionality on and the ANC functionality off, the “0” setting turns both the ARA and the ANC functionality off, whilst the “-1” setting isolates the user from the acoustic environment by turning the ARA functionality off and the ANC functionality on. Advantageously, the sliders 634 may allow discrete or continuous selection. In FIG. 6, each slider 634 can be positioned arbitrarily between the three main settings (i.e. continuous selection)."; Combining a primary audio signal with an altered background audio signal, where the audio signal is controlled by a control setting with values between -1 and +1, reads on a range selected such that the first value is -Mlimit and the second value is +Mlimit.).
Hamalainen is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spittle in view of Ozcan to incorporate the teachings of Hamalainen to combine a primary audio signal with an altered background audio signal, where the audio signal is controlled by a control setting with values between -1 and +1. Doing so would allow for selecting between blocking out some or all of the background sounds and hearing some of the background sounds (Hamalainen; Column 2, lines 15-21).
Regarding claim 6, Spittle in view of Ozcan and Hamalainen discloses the non-transitory computer readable medium as claimed in claim 5.
Hamalainen further teaches:
wherein the control output signal is represented as Output = (Mlimit - abs(mix)) * unprocessed + abs(mix) * processed, where processed = f(unprocessed) with f representing a sound suppression function and mix representing the selected mixture value (Column 12, lines 21-36, "When the sliders 634 are moved to the “+1” setting, the apparatus behaves as an ARA system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record a pseudo-acoustic representation of the surrounding environment superimposed by the primary audio signal. When the sliders 634 are moved to the “0” setting, the apparatus behaves as a regular audio system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record the primary audio signal, but some of the ambient noise is also heard, sent and recorded. When the sliders 634 are moved to the “-1” setting, the apparatus behaves as an ANC system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record the primary audio signal without any of the ambient noise."; A setting of +1 selecting augmented reality audio, a setting of -1 selecting active noise cancellation audio, and a setting of 0 selecting primary audio signal combined with the ambient noise, reads on the control output signal representing Output = (Mlimit - abs(mix)) * unprocessed + abs(mix) * processed, where the augmented reality audio reads on processed audio and the primary audio signal combined with the ambient noise reads on unprocessed audio.).
Hamalainen is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spittle in view of Ozcan and Hamalainen to further incorporate the teachings of Hamalainen to have a setting of +1 select augmented reality audio, a setting of -1 select active noise cancellation audio, and a setting of 0 select primary audio signal combined with the ambient noise. Doing so would allow for selecting between blocking out some or all of the background sounds and hearing some of the background sounds (Hamalainen; Column 2, lines 15-21).
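For illustration only (not a teaching of Spittle, Ozcan, or Hamalainen), the claimed formula of claim 6 can be sketched in the following hypothetical Python fragment, where the function f standing in for the sound suppression function is an arbitrary placeholder:

```python
def mix_output(mix, unprocessed, m_limit=1.0, f=lambda x: 0.0):
    """Blend unprocessed and processed signals per the claimed formula:
    Output = (Mlimit - abs(mix)) * unprocessed + abs(mix) * processed,
    where processed = f(unprocessed).
    The default f (full suppression to zero) is a placeholder assumption.
    """
    processed = f(unprocessed)  # f: sound suppression function
    return (m_limit - abs(mix)) * unprocessed + abs(mix) * processed

# With Mlimit = 1: mix = 0 passes the signal through unchanged,
# mix = +/-1 yields the fully processed (here, fully suppressed) signal,
# and intermediate values blend the two proportionally.
sample = 0.8
print(mix_output(0.0, sample))   # 0.8 (unprocessed)
print(mix_output(1.0, sample))   # 0.0 (fully processed)
print(mix_output(0.5, sample))   # 0.4 (equal blend)
```

Because the formula uses abs(mix), the settings mix = +Mlimit and mix = -Mlimit both produce a fully processed output; the sign of the mixture value is available to select between modes, as discussed for claims 8 and 9.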
Regarding claim 7, Spittle in view of Ozcan and Hamalainen discloses the non-transitory computer readable medium as claimed in claim 6.
Spittle further discloses:
wherein the sound suppression function includes an artificial intelligence sound suppression function (Paragraph 0349, lines 18-24, "Another example could be a neural network based echo cancellation plugin that uses an input audio stream that potentially includes an echo signal, as well as multiple reference signals used to identify the echo. The neural network is used to eliminate the echo signals from the input audio stream to create a clean output audio stream."; Neural network echo cancellation reads on an artificial intelligence sound suppression function.).
Regarding claim 8, Spittle in view of Ozcan and Hamalainen discloses the non-transitory computer readable medium as claimed in claim 6.
Hamalainen further teaches:
wherein the quantity Mlimit has a value of 1, such that the control output signal is represented as Output = (1 - abs(mix)) * unprocessed + abs(mix) * processed, where processed = f(unprocessed) (Column 12, lines 21-36, "When the sliders 634 are moved to the “+1” setting, the apparatus behaves as an ARA system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record a pseudo-acoustic representation of the surrounding environment superimposed by the primary audio signal. When the sliders 634 are moved to the “0” setting, the apparatus behaves as a regular audio system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record the primary audio signal, but some of the ambient noise is also heard, sent and recorded. When the sliders 634 are moved to the “-1” setting, the apparatus behaves as an ANC system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record the primary audio signal without any of the ambient noise."; A setting of +1 selecting augmented reality audio, a setting of -1 selecting active noise cancellation audio, and a setting of 0 selecting primary audio signal combined with the ambient noise, reads on the control output signal representing quantity Mlimit has a value of 1, such that the control output signal is represented as Output = (1 - abs(mix)) * unprocessed + abs(mix) * processed, where the augmented reality audio reads on processed audio and the primary audio signal combined with the ambient noise reads on unprocessed audio.).
Hamalainen is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spittle in view of Ozcan and Hamalainen to further incorporate the teachings of Hamalainen to have a setting of +1 select augmented reality audio, a setting of -1 select active noise cancellation audio, and a setting of 0 select primary audio signal combined with the ambient noise. Doing so would allow for selecting between blocking out some or all of the background sounds and hearing some of the background sounds (Hamalainen; Column 2, lines 15-21).
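For illustration only, the Mlimit = 1 case of claim 8 can be sketched against Hamalainen's three slider settings (+1, 0, -1); the use of the sign of the mixture value to select between the ANC and ARA processing paths is an illustrative assumption, not an express teaching of either reference:

```python
def blended_output(mix, unprocessed, suppress, transparent):
    """Mlimit = 1 case of the claimed formula:
    Output = (1 - abs(mix)) * unprocessed + abs(mix) * processed.
    The sign of mix choosing which processing function applies
    (suppress vs. transparent) is a hypothetical design choice.
    """
    processed = suppress(unprocessed) if mix < 0 else transparent(unprocessed)
    return (1 - abs(mix)) * unprocessed + abs(mix) * processed

anc = lambda x: 0.0   # "-1" setting: ambient noise fully cancelled
ara = lambda x: x     # "+1" setting: acoustically transparent
ambient = 0.6
print(blended_output(-1.0, ambient, anc, ara))  # 0.0 (ANC setting)
print(blended_output(0.0,  ambient, anc, ara))  # 0.6 (both off; ambient heard)
print(blended_output(1.0,  ambient, anc, ara))  # 0.6 (ARA setting)
```

On this reading, the "0" setting corresponds to the unprocessed mixture value, consistent with the mapping applied to claim 9 below.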
Regarding claim 9, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 1, but does not specifically disclose: wherein the range is selected such that the unprocessed mixture value is approximately at middle of the range.
Hamalainen teaches:
wherein the range is selected such that the unprocessed mixture value is approximately at middle of the range (Column 12, lines 21-36, "When the sliders 634 are moved to the “+1” setting, the apparatus behaves as an ARA system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record a pseudo-acoustic representation of the surrounding environment superimposed by the primary audio signal. When the sliders 634 are moved to the “0” setting, the apparatus behaves as a regular audio system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record the primary audio signal, but some of the ambient noise is also heard, sent and recorded. When the sliders 634 are moved to the “-1” setting, the apparatus behaves as an ANC system. In this mode, the loudspeaker 520, transmitter 525 and storage medium 527 respectively reproduce, send and record the primary audio signal without any of the ambient noise."; A setting of +1 selecting augmented reality audio, a setting of -1 selecting active noise cancellation audio, and a setting of 0 selecting primary audio signal combined with the ambient noise, reads on selecting a range such that the unprocessed mixture value is approximately at middle of the range.).
Hamalainen is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spittle in view of Ozcan to incorporate the teachings of Hamalainen to have a setting of +1 select augmented reality audio, a setting of -1 select active noise cancellation audio, and a setting of 0 select primary audio signal combined with the ambient noise. Doing so would allow for selecting between blocking out some or all of the background sounds and hearing some of the background sounds (Hamalainen; Column 2, lines 15-21).
Regarding claim 16, Spittle in view of Ozcan discloses the non-transitory computer readable medium as claimed in claim 1, but does not specifically disclose: wherein the mixture value includes a selected one among multiple discrete values in the range.
Hamalainen teaches:
wherein the mixture value includes a selected one among multiple discrete values in the range (Column 3, lines 59-67, "The apparatus, processor and/or memory may be configured to receive a primary audio signal from a primary audio source. The apparatus may be configured to combine the primary audio signal with the altered background audio signal/noise cancellation signal to produce a combined audio signal. Accordingly, there is provided an apparatus (e.g. an audio headset) with user-controlled augmented reality audio (ARA) and active noise cancellation (ANC) functionalities."; Column 12, lines 9-20, "Each section 632, 633 comprises a slider 634 for varying the audio signal. Each slider can be independently moved between three main settings (+1, 0, and -1). The “+1” setting makes the headset acoustically transparent by turning the ARA functionality on and the ANC functionality off, the “0” setting turns both the ARA and the ANC functionality off, whilst the “-1” setting isolates the user from the acoustic environment by turning the ARA functionality off and the ANC functionality on. Advantageously, the sliders 634 may allow discrete or continuous selection. In FIG. 6, each slider 634 can be positioned arbitrarily between the three main settings (i.e. continuous selection)."; Combining a primary audio signal with an altered background audio signal, where the audio signal is controlled by a control setting with values between -1 and +1, where sliders allow discrete selection, reads on the multiple values in the range being discrete values.).
Hamalainen is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spittle in view of Ozcan to incorporate the teachings of Hamalainen to combine a primary audio signal with an altered background audio signal, where the audio signal is controlled by a control setting with values between -1 and +1, where sliders allow discrete selection. Doing so would allow for selecting between blocking out some or all of the background sounds and hearing some of the background sounds (Hamalainen; Column 2, lines 15-21).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 4 – 9, 11, 14, 16, 18 and 20 – 21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 – 7, 9 – 10, 12 – 14 and 29 of copending Application No. 18/090,975 in view of Ozcan.
Regarding claim 1, claims 1, 10 and 13 of copending Application No. 18/090,975 claim all the limitations set forth in application claim 1 except for the limitations “a non-transitory computer readable medium having stored thereon software instructions”, “the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface”, “sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode”, and “providing a display representative of the sampled information”.
US Application No. 18/093,122 Claim 1
US Application No. 18/090,975 Claim 1
A non-transitory computer readable medium having stored thereon software instructions that, when executed by a processor, cause the processor to execute steps for monitoring an audio system, the steps comprising:
A method for controlling an audio system, the method comprising:
determining a sound suppression mode including obtaining a mixture value in a range from a first value for a first state to a second value for a second state,
obtaining a mixture value from a user, the mixture value having a value in a range from a first value for a first state to a second value for a second state,
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content,
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content,
the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content,
the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content,
the mixture value including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents;
the mixture value being a selected one among multiple values in the range, the multiple values including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents;
generating a control output signal based on the selected mixture value,
generating a control output signal based on the selected mixture value;
and processing the input signal based on the control output signal to generate the output signal representative of a sound having the first content and/or the second content according to the selected mixture value;
and processing an audio signal based on the control output signal to generate a sound having the first content and/or the second content according to the selected mixture value.
US Application No. 18/093,122 Claim 1
US Application No. 18/090,975 Claim 10
the obtaining of the mixture value including obtaining an input through a [sliding scale graphic user interface] on a portable device in [wireless] communication with a sound-generating device;
wherein the obtaining of the mixture value includes obtaining an input through a portable device in communication with a device that generates the sound
US Application No. 18/093,122 Claim 1
US Application No. 18/090,975 Claim 13
the obtaining of the mixture value including obtaining an input through a [sliding scale] graphic user interface on a portable device [in wireless communication with a sound-generating device];
wherein the obtaining of the input through the portable device includes providing a graphic user interface that allows the user to select the mixture value
Ozcan teaches:
a non-transitory computer readable medium having stored thereon software instructions (Column 1, lines 29-32, “One or more embodiments may provide an apparatus, a computer readable medium, a non-transitory computer readable medium, a computer program product, and a method for receiving audio information”; Column 28, lines 10-12, “In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.”);
the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface (Column 26, line 46 - Column 27, line 8, "In some circumstances, if may be desirable to provide a simple and intuitive manner in which a user may vary magnitude of ambient audio information, magnitude of speech audio information, and/or the like, which is provided by ambient sound processed audio information. In at least one example embodiment, the apparatus provides a slider interface element that allows a user to set and/or modify an ambient sound directive. In at least one example embodiment, the apparatus may cause display of a slider interface element associated with the ambient sound directive. In such an example, the apparatus may receive an indication of an input indicative of the magnitude. For example, the input indicative of the magnitude may relate to an input indicative of a position on the slider interface element. In such an example, the position may be indicative of the magnitude. In at least one example embodiment, the slider interface element comprises a slider endpoint associated with speech audio information and a slider endpoint associated with ambient audio information. In such an example, in circumstances where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, such that the position on the slider interface element indicates the proportion. 
For example, the position on the slider interface element may indicate the proportion such that the proportion relates to a factor indicative of a distance from the position to at least one slider endpoint."; A slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, reads on obtaining the mixture value through a sliding scale graphic user interface.);
sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode (Column 11, lines 28-30, “In at least one example embodiment, apparatus 201 receives audio information from microphone 202.”; Column 11, lines 39-51, “The basic idea of estimating the ambient sound may be to analyze input signal frames during periods associated with lack of speech activity. For example, it may be determined whether a current frame contains speech and/or ambient sound. In such an example, the output of the VAD may be desirable. In at least one example embodiment, based on the VAD information, ambient sound may be suppressed for quality and intelligibility of speech signal. In some apparatuses with multi-microphone configurations it may be desirable to provide an advanced level of noise suppression or directionality. For example, it may be desirable that uplink audio emphasizes speech by reducing ambient sound.”; Analyzing input signal frames reads on sampling information representative of an input signal, and utilizing the voice activity detection output to suppress ambient sound reads on sampling information representative of an output signal resulting from processing of the input signal in the sound suppression mode.);
and providing a display representative of the sampled information during the operation of the sound-generating device and the portable device (Column 19, lines 4-24, “In at least one example embodiment, the apparatus causes display of an interface element associated with an ambient sound directive. In at least one example embodiment, an interface element relates to a visual representation of information that is indicative of information with which a user may interact. For example, an interface element may be an icon, a button, a hyperlink, text, and/or the like. In at least one example embodiment, the interface element may be associated with an ambient sound directive by indicating the ambient sound directive, by way of being associated with invocation of the ambient sound directive, and/or the like. For example, the interface element may indicate, by way of text, image, etc., that the interface element represents a setting for an ambient sound directive. In at least one example embodiment, causing display relates to performance of an operation that results in the interface element being displayed. For example, causing display may comprise displaying the interface element on a display, sending information indicative of the interface element to a separate apparatus so that the separate apparatus displays the interface element, and/or the like.”; Displaying an interface element associated with an ambient sound directive reads on providing a display representative of the sampled information during the operation of the sound-generating device and the portable device.).
Ozcan is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified copending Application No. 18/090,975 to incorporate the teachings of Ozcan to implement a slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information. Doing so would allow for limiting ambient sound in processed audio information (Ozcan; Column 12, lines 20-36).
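For illustration of the slider behavior relied upon above, the position-to-proportion relationship described in the quoted passages of Ozcan may be sketched as follows. This is an illustrative sketch only and forms no part of the record; the function name, endpoint values, and normalization are hypothetical.

```python
def slider_proportion(position, speech_endpoint=0.0, ambient_endpoint=1.0):
    # Per the quoted passage, the proportion relates to a factor indicative
    # of the distance from the slider position to at least one endpoint.
    span = ambient_endpoint - speech_endpoint
    ambient_share = (position - speech_endpoint) / span
    speech_share = 1.0 - ambient_share
    return speech_share, ambient_share
```

A position at the speech endpoint allocates the full proportion to speech audio information; a midpoint position allocates the proportions equally.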
Regarding claim 4, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 1. Claim 2 of copending Application No. 18/090,975 further claims the additional limitation of claim 4.
US Application No. 18/093,122 Claim 4
US Application No. 18/090,975 Claim 2
wherein the first content includes an ambient noise content, and the second content includes a speech content
wherein the first content includes an ambient noise content, and the second content includes a speech content
Regarding claim 5, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 1. Claim 3 of copending Application No. 18/090,975 further claims the additional limitation of claim 5.
US Application No. 18/093,122 Claim 5
US Application No. 18/090,975 Claim 3
wherein the range is selected such that the first value is -Mlimit and the second value is +Mlimit
wherein the range is selected such that the first value is -Mlimit and the second value is +Mlimit
Regarding claim 6, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 5. Claim 4 of copending Application No. 18/090,975 further claims the additional limitation of claim 6.
US Application No. 18/093,122 Claim 6
US Application No. 18/090,975 Claim 4
wherein the control output signal is represented as Output = (Mlimit - abs(mix)) * unprocessed + abs(mix) * processed, where processed = f(unprocessed) with f representing a sound suppression function and mix representing the selected mixture value
wherein the control output signal is represented as Output = (Mlimit - abs(mix)) * unprocessed + abs(mix) * processed, where processed = f(unprocessed) with f representing a sound suppression function and mix representing the selected mixture value
Regarding claim 7, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 6. Claim 5 of copending Application No. 18/090,975 further claims the additional limitation of claim 7.
US Application No. 18/093,122 Claim 7
US Application No. 18/090,975 Claim 5
wherein the sound suppression function includes an artificial intelligence sound suppression function
wherein the sound suppression function includes an artificial intelligence sound suppression function
Regarding claim 8, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 6. Claim 6 of copending Application No. 18/090,975 further claims the additional limitation of claim 8.
US Application No. 18/093,122 Claim 8
US Application No. 18/090,975 Claim 6
wherein the quantity Mlimit has a value of 1, such that the control output signal is represented as Output = (1 - abs(mix)) * unprocessed + abs(mix) * processed, where processed = f(unprocessed)
wherein the quantity Mlimit has a value of 1, such that the control output signal is represented as Output = (1 - abs(mix)) * unprocessed + abs(mix) * processed, where processed = f(unprocessed)
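For clarity regarding the control output relationship recited in claims 6 and 8, the formula may be sketched as follows. This is an illustrative sketch only and forms no part of the record; f stands in for any sound suppression function.

```python
def control_output(unprocessed, mix, f, m_limit=1.0):
    # Output = (Mlimit - abs(mix)) * unprocessed + abs(mix) * processed,
    # where processed = f(unprocessed) and mix is the selected mixture value.
    processed = f(unprocessed)
    return (m_limit - abs(mix)) * unprocessed + abs(mix) * processed
```

With Mlimit having a value of 1 (claim 8), a mixture value of 0 yields the unprocessed signal, while a mixture value whose absolute value is 1 yields the fully processed signal.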
Regarding claim 9, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 1. Claim 7 of copending Application No. 18/090,975 further claims the additional limitation of claim 9.
US Application No. 18/093,122 Claim 9
US Application No. 18/090,975 Claim 7
wherein the range is selected such that the unprocessed mixture value is approximately at middle of the range
wherein the range is selected such that the unprocessed mixture value is approximately at middle of the range
Regarding claim 11, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 1. Claim 9 of copending Application No. 18/090,975 further claims the additional limitation of claim 11.
US Application No. 18/093,122 Claim 11
US Application No. 18/090,975 Claim 9
wherein the sound-generating device is a headphone
wherein the sound-generating device is a headphone
Regarding claim 14, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 1. Claim 12 of copending Application No. 18/090,975 further claims the additional limitation of claim 14.
US Application No. 18/093,122 Claim 14
US Application No. 18/090,975 Claim 12
wherein the portable device is a smartphone and the sound-generating device is a headphone
wherein the portable device is a smartphone and the sound-generating device is a headphone
Regarding claim 16, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 1. Claim 14 of copending Application No. 18/090,975 further claims the additional limitation of claim 16.
US Application No. 18/093,122 Claim 16
US Application No. 18/090,975 Claim 14
wherein the multiple values in the range are discrete values
wherein the multiple values in the range are discrete values
Regarding claim 18, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 1. Claim 13 of copending Application No. 18/090,975 further claims the additional limitation of claim 18.
US Application No. 18/093,122 Claim 18
US Application No. 18/090,975 Claim 13
wherein some or all of the determining of the sound suppression mode, sampling of the information, and providing of the display is/are performed by a portable device
wherein the obtaining of the input through the portable device includes providing a graphic user interface that allows the user to select the mixture value
Regarding claim 20, copending Application 18/090,975 in view of Ozcan claims all the limitations set forth in the application claim 18.
Ozcan further teaches:
wherein the portable device includes an application having a graphic user interface that provides the display (Column 8, lines 37-46, “The electronic apparatus 10 may comprise a user interface for providing output and/or receiving input. The electronic apparatus 10 may comprise an output device 14. Output device 14 may comprise an audio output device, such as a ringer, an earphone, a speaker, and/or the like. Output device 14 may comprise a tactile output device, such as a vibration transducer, an electronically deformable surface, an electronically deformable structure, and/or the like. Output Device 14 may comprise a visual output device, such as a display, a light, and/or the like.”).
Ozcan is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified copending Application No. 18/090,975 to incorporate the teachings of Ozcan to implement a slider interface element on a graphic user interface that provides a display, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information. Doing so would allow for limiting ambient sound in processed audio information (Ozcan; Column 12, lines 20-36).
Regarding claim 21, claims 10, 13 and 29 of copending Application No. 18/090,975 claim all the limitations set forth in the application claim 21 except for the limitations “the portable device including a non-transitory computer readable medium having stored thereon software instructions”, “the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface”, “sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode”, and “providing a display representative of the sampled information during the operation of the audio processor”.
US Application No. 18/093,122 Claim 21
US Application No. 18/090,975 Claim 29
A system comprising:
A system comprising:
an audio device including a speaker for providing an output sound to a user, and an audio processor configured to generate the output sound based on an audio signal;
an audio device including a speaker for providing an output sound to a user, an audio processor configured to generate the output sound based on an audio signal,
and a portable device configured to communicate with the audio device, the portable device including an application that allows the user to monitor the operation of the audio processor,
and a portable device configured to communicate with the audio device, the portable device including an application that allows the user to select the mixture value.
determining a sound suppression mode being implemented in the audio processor, the determining of the sound suppression mode including obtaining a mixture value in a range from a first value for a first state to a second value for a second state
and a controller configured to obtain a mixture value in a range from a first value for a first state to a second value for a second state,
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content,
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content,
the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content,
the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content,
the mixture value including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents;
the mixture value being a selected one among multiple values in the range, the multiple values including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents;
generating a control output signal based on the selected mixture value,
generating a control output signal based on the selected mixture value;
and processing the input signal based on the control output signal to generate the output signal representative of a sound having the first content and/or the second content according to the selected mixture value;
and processing an audio signal based on the control output signal to generate a sound having the first content and/or the second content according to the selected mixture value.
US Application No. 18/093,122 Claim 21
US Application No. 18/090,975 Claim 10
the obtaining of the mixture value including obtaining an input through a [sliding scale graphic user interface] on a portable device in [wireless] communication with a sound-generating device;
wherein the obtaining of the mixture value includes obtaining an input through a portable device in communication with a device that generates the sound
US Application No. 18/093,122 Claim 21
US Application No. 18/090,975 Claim 13
the obtaining of the mixture value including obtaining an input through a [sliding scale] graphic user interface on a portable device [in wireless communication with a sound-generating device];
wherein the obtaining of the input through the portable device includes providing a graphic user interface that allows the user to select the mixture value
Ozcan teaches:
the portable device including a non-transitory computer readable medium having stored thereon software instructions (Column 1, lines 29-32, “One or more embodiments may provide an apparatus, a computer readable medium, a non-transitory computer readable medium, a computer program product, and a method for receiving audio information”; Column 28, lines 10-12, “In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.”);
the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface (Column 26, line 46 - Column 27, line 8, "In some circumstances, it may be desirable to provide a simple and intuitive manner in which a user may vary magnitude of ambient audio information, magnitude of speech audio information, and/or the like, which is provided by ambient sound processed audio information. In at least one example embodiment, the apparatus provides a slider interface element that allows a user to set and/or modify an ambient sound directive. In at least one example embodiment, the apparatus may cause display of a slider interface element associated with the ambient sound directive. In such an example, the apparatus may receive an indication of an input indicative of the magnitude. For example, the input indicative of the magnitude may relate to an input indicative of a position on the slider interface element. In such an example, the position may be indicative of the magnitude. In at least one example embodiment, the slider interface element comprises a slider endpoint associated with speech audio information and a slider endpoint associated with ambient audio information. In such an example, in circumstances where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, such that the position on the slider interface element indicates the proportion. 
For example, the position on the slider interface element may indicate the proportion such that the proportion relates to a factor indicative of a distance from the position to at least one slider endpoint."; A slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, reads on obtaining the mixture value through a sliding scale graphic user interface.);
sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode (Column 11, lines 28-30, “In at least one example embodiment, apparatus 201 receives audio information from microphone 202.”; Column 11, lines 39-51, “The basic idea of estimating the ambient sound may be to analyze input signal frames during periods associated with lack of speech activity. For example, it may be determined whether a current frame contains speech and/or ambient sound. In such an example, the output of the VAD may be desirable. In at least one example embodiment, based on the VAD information, ambient sound may be suppressed for quality and intelligibility of speech signal. In some apparatuses with multi-microphone configurations it may be desirable to provide an advanced level of noise suppression or directionality. For example, it may be desirable that uplink audio emphasizes speech by reducing ambient sound.”; Analyzing input signal frames reads on sampling information representative of an input signal, and utilizing the voice activity detection output to suppress ambient sound reads on sampling information representative of an output signal resulting from processing of the input signal in the sound suppression mode.);
and providing a display representative of the sampled information during the operation of the audio processor (Column 19, lines 4-24, “In at least one example embodiment, the apparatus causes display of an interface element associated with an ambient sound directive. In at least one example embodiment, an interface element relates to a visual representation of information that is indicative of information with which a user may interact. For example, an interface element may be an icon, a button, a hyperlink, text, and/or the like. In at least one example embodiment, the interface element may be associated with an ambient sound directive by indicating the ambient sound directive, by way of being associated with invocation of the ambient sound directive, and/or the like. For example, the interface element may indicate, by way of text, image, etc., that the interface element represents a setting for an ambient sound directive. In at least one example embodiment, causing display relates to performance of an operation that results in the interface element being displayed. For example, causing display may comprise displaying the interface element on a display, sending information indicative of the interface element to a separate apparatus so that the separate apparatus displays the interface element, and/or the like.”; Displaying an interface element associated with an ambient sound directive reads on providing a display representative of the sampled information during the operation of the audio processor.).
Ozcan is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified copending Application No. 18/090,975 to incorporate the teachings of Ozcan to implement a slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information. Doing so would allow for limiting ambient sound in processed audio information (Ozcan; Column 12, lines 20-36).
This is a provisional nonstatutory double patenting rejection.
Claims 1, 4, 16, and 21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 – 3, 9 – 10, 12, 14 and 20 – 21 of copending Application No. 18/651,705 in view of Ozcan.
Regarding claim 1, claims 1, 9 – 10 and 12 of copending Application No. 18/651,705 claim all the limitations set forth in the application claim 1 except for the limitations “a non-transitory computer readable medium having stored thereon software instructions”, “the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface on a portable device in wireless communication with a sound-generating device”, “sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode”, and “providing a display representative of the sampled information”.
US Application No. 18/093,122 Claim 1
US Application No. 18/651,705 Claim 1
A non-transitory computer readable medium having stored thereon software instructions that, when executed by a processor, cause the processor to execute steps for monitoring an audio system, the steps comprising:
A method for processing audio signals, the method comprising:
determining a sound suppression mode;
providing a gain for each of the first and second audio components to result in a respective gain adjusted audio component;
generating a control output signal based on the selected mixture value,
providing a gain for each of the first and second audio components to result in a respective gain adjusted audio component;
and processing the input signal based on the control output signal to generate the output signal representative of a sound having the first content and/or the second content according to the selected mixture value
and combining the first and second gain adjusted audio components to provide a processed audio signal, the gains of the first and second audio components configured so that a selected one of the first and second audio components has improved intelligibility by a listener when the processed audio signal is converted into sound by a speaker
US Application No. 18/093,122 Claim 1
US Application No. 18/651,705 Claim 9
determining a sound suppression mode including obtaining a mixture value in a range from a first value for a first state to a second value for a second state,
wherein the providing of the gain for each of the first and second audio components includes providing suppression or no suppression of the respective audio component
US Application No. 18/093,122 Claim 1
US Application No. 18/651,705 Claim 10
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content, the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content, the mixture value being a selected one among multiple values in the range, the multiple values including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents;
wherein the providing of the suppression includes a suppression range such that the suppressed audio component has a level in a range between first and second levels, the first level being less than a level associated with no suppression, the second level being greater than or equal to a level associated with complete suppression.
US Application No. 18/093,122 Claim 1
US Application No. 18/651,705 Claim 12
the obtaining of the mixture value including obtaining an input through a [sliding scale] graphic user interface;
wherein the providing of the gain for each of the first and second audio components includes receiving gain information from a user interface.
Ozcan teaches:
a non-transitory computer readable medium having stored thereon software instructions (Column 1, lines 29-32, “One or more embodiments may provide an apparatus, a computer readable medium, a non-transitory computer readable medium, a computer program product, and a method for receiving audio information”; Column 28, lines 10-12, “In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.”);
the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface on a portable device in wireless communication with a sound-generating device (Column 7, lines 3-8, “Electronic apparatus 10 may be a portable digital assistant (PDAs), a pager, a mobile computer, a desktop computer, a television, a gaming apparatus, a laptop computer, a media player, a camera, a video recorder, a mobile phone, a global positioning system (GPS) apparatus, and/or any other types of electronic systems.”; Column 7, lines 43-47, “The electronic apparatus 10 may further comprise a communication device 15. In at least one example embodiment, communication device 15 comprises an antenna, (or multiple antennae), a wired connector, and/or the like in operable communication with a transmitter and/or a receiver.”; Column 8, lines 37-46, “The electronic apparatus 10 may comprise a user interface for providing output and/or receiving input. The electronic apparatus 10 may comprise an output device 14. Output device 14 may comprise an audio output device, such as a ringer, an earphone, a speaker, and/or the like. Output device 14 may comprise a tactile output device, such as a vibration transducer, an electronically deformable surface, an electronically deformable structure, and/or the like. Output Device 14 may comprise a visual output device, such as a display, a light, and/or the like.”; Column 26, line 46 - Column 27, line 8, "In some circumstances, it may be desirable to provide a simple and intuitive manner in which a user may vary magnitude of ambient audio information, magnitude of speech audio information, and/or the like, which is provided by ambient sound processed audio information. In at least one example embodiment, the apparatus provides a slider interface element that allows a user to set and/or modify an ambient sound directive. 
In at least one example embodiment, the apparatus may cause display of a slider interface element associated with the ambient sound directive. In such an example, the apparatus may receive an indication of an input indicative of the magnitude. For example, the input indicative of the magnitude may relate to an input indicative of a position on the slider interface element. In such an example, the position may be indicative of the magnitude. In at least one example embodiment, the slider interface element comprises a slider endpoint associated with speech audio information and a slider endpoint associated with ambient audio information. In such an example, in circumstances where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, such that the position on the slider interface element indicates the proportion. For example, the position on the slider interface element may indicate the proportion such that the proportion relates to a factor indicative of a distance from the position to at least one slider endpoint."; A slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, reads on obtaining the mixture value through a sliding scale graphic user interface.);
sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode (Column 11, lines 28-30, “In at least one example embodiment, apparatus 201 receives audio information from microphone 202.”; Column 11, lines 39-51, “The basic idea of estimating the ambient sound may be to analyze input signal frames during periods associated with lack of speech activity. For example, it may be determined whether a current frame contains speech and/or ambient sound. In such an example, the output of the VAD may be desirable. In at least one example embodiment, based on the VAD information, ambient sound may be suppressed for quality and intelligibility of speech signal. In some apparatuses with multi-microphone configurations it may be desirable to provide an advanced level of noise suppression or directionality. For example, it may be desirable that uplink audio emphasizes speech by reducing ambient sound.”; Analyzing input signal frames reads on sampling information representative of an input signal, and utilizing the voice activity detection output to suppress ambient sound reads on sampling information representative of an output signal resulting from processing of the input signal in the sound suppression mode.);
and providing a display representative of the sampled information during the operation of the sound-generating device and the portable device (Column 19, lines 4-24, “In at least one example embodiment, the apparatus causes display of an interface element associated with an ambient sound directive. In at least one example embodiment, an interface element relates to a visual representation of information that is indicative of information with which a user may interact. For example, an interface element may be an icon, a button, a hyperlink, text, and/or the like. In at least one example embodiment, the interface element may be associated with an ambient sound directive by indicating the ambient sound directive, by way of being associated with invocation of the ambient sound directive, and/or the like. For example, the interface element may indicate, by way of text, image, etc., that the interface element represents a setting for an ambient sound directive. In at least one example embodiment, causing display relates to performance of an operation that results in the interface element being displayed. For example, causing display may comprise displaying the interface element on a display, sending information indicative of the interface element to a separate apparatus so that the separate apparatus displays the interface element, and/or the like.”; Displaying an interface element associated with an ambient sound directive reads on providing a display representative of the sampled information during the operation of the sound-generating device and the portable device.).
Ozcan is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified copending Application No. 18/651,705 to incorporate the teachings of Ozcan to implement a slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information. Doing so would allow for limiting ambient sound in processed audio information (Ozcan; Column 12, lines 20-36).
Regarding claim 4, copending Application No. 18/651,705 in view of Ozcan claims all the limitations set forth in application claim 1. Claims 2 and 3 of copending Application No. 18/651,705 further claim the additional limitation of claim 4.
US Application No. 18/093,122 Claim 4
US Application No. 18/651,705 Claim 3
wherein the first content includes an ambient noise content,
wherein the other audio component includes a non-speech component
US Application No. 18/093,122 Claim 4
US Application No. 18/651,705 Claim 2
and the second content includes a speech content
wherein the selected audio component includes a speech component
Regarding claim 16, copending Application No. 18/651,705 in view of Ozcan claims all the limitations set forth in application claim 1. Claim 14 of copending Application No. 18/651,705 further claims the additional limitation of claim 16.
US Application No. 18/093,122 Claim 16
US Application No. 18/651,705 Claim 14
wherein the multiple values in the range are discrete values
wherein the gain information is selected from a set of suggested gain values
Regarding claim 21, claims 9 – 10, 12 and 20 – 21 of copending Application No. 18/651,705 claim all the limitations set forth in application claim 21 except for the limitations “the portable device including a non-transitory computer readable medium having stored thereon software instructions”, “the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface on a portable device in wireless communication with a sound-generating device”, “sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode”, and “providing a display representative of the sampled information during the operation of the audio processor”.
US Application No. 18/093,122 Claim 21
US Application No. 18/651,705 Claim 20
A system comprising:
An audio system comprising:
an audio device for providing an output sound to a user, and an audio processor configured to generate the output sound based on an audio signal;
an input circuit configured to receive an audio signal; and an audio processor including a splitter configured to separate the audio signal into a first audio component and a second audio component,
and a portable device configured to communicate with the audio device, the portable device including an application that allows the user to monitor the operation of the audio processor,
and a portable device configured to communicate with the audio device, the portable device including an application that allows the user to monitor the operation of the audio processor,
determining a sound suppression mode being implemented in the audio processor,
and a gain circuit configured to provide a gain for each of the first and second audio components to result in a respective gain adjusted audio component,
generating a control output signal based on the selected mixture value,
and a gain circuit configured to provide a gain for each of the first and second audio components to result in a respective gain adjusted audio component,
and processing the input signal based on the control output signal to generate the output signal representative of a sound having the first content and/or the second content according to the selected mixture value
the audio processor further including a combiner configured to combine the first and second gain adjusted audio components to provide a processed audio signal, the gains of the first and second audio components configured so that a selected one of the first and second audio components has improved intelligibility by a listener when the processed audio signal is converted into sound.
US Application No. 18/093,122 Claim 1
US Application No. 18/651,705 Claim 9
determining a sound suppression mode including obtaining a mixture value in a range from a first value for a first state to a second value for a second state,
wherein the providing of the gain for each of the first and second audio components includes providing suppression or no suppression of the respective audio component
US Application No. 18/093,122 Claim 1
US Application No. 18/651,705 Claim 10
the first state corresponding to a desired sound having substantially all of a first content and substantially nil amount of a second content, the second state corresponding to a desired sound having substantially nil amount of the first content and substantially all of the second content, the mixture value being a selected one among multiple values in the range, the multiple values including an unprocessed mixture value for an unprocessed state corresponding to a desired sound having unprocessed first and second contents;
wherein the providing of the suppression includes a suppression range such that the suppressed audio component has a level in a range between first and second levels, the first level being less than a level associated with no suppression, the second level being greater than or equal to a level associated with complete suppression.
US Application No. 18/093,122 Claim 1
US Application No. 18/651,705 Claim 12
the obtaining of the mixture value including obtaining an input through a [sliding scale] graphic user interface;
wherein the providing of the gain for each of the first and second audio components includes receiving gain information from a user interface.
US Application No. 18/093,122 Claim 21
US Application No. 18/651,705 Claim 21
an audio device including a speaker
further comprising a speaker configured to provide the sound based on the processed audio signal
Ozcan teaches:
the portable device including a non-transitory computer readable medium having stored thereon software instructions (Column 1, lines 29-32, “One or more embodiments may provide an apparatus, a computer readable medium, a non-transitory computer readable medium, a computer program product, and a method for receiving audio information”; Column 28, lines 10-12, “In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.”);
the obtaining of the mixture value including obtaining an input through a sliding scale graphic user interface on a portable device in wireless communication with a sound-generating device (Column 7, lines 3-8, “Electronic apparatus 10 may be a portable digital assistant (PDAs), a pager, a mobile computer, a desktop computer, a television, a gaming apparatus, a laptop computer, a media player, a camera, a video recorder, a mobile phone, a global positioning system (GPS) apparatus, and/or any other types of electronic systems.”; Column 7, lines 43-47, “The electronic apparatus 10 may further comprise a communication device 15. In at least one example embodiment, communication device 15 comprises an antenna, (or multiple antennae), a wired connector, and/or the like in operable communication with a transmitter and/or a receiver.”; Column 8, lines 37-46, “The electronic apparatus 10 may comprise a user interface for providing output and/or receiving input. The electronic apparatus 10 may comprise an output device 14. Output device 14 may comprise an audio output device, such as a ringer, an earphone, a speaker, and/or the like. Output device 14 may comprise a tactile output device, such as a vibration transducer, an electronically deformable surface, an electronically deformable structure, and/or the like. Output Device 14 may comprise a visual output device, such as a display, a light, and/or the like.”; Column 26, line 46 - Column 27, line 8, "In some circumstances, it may be desirable to provide a simple and intuitive manner in which a user may vary magnitude of ambient audio information, magnitude of speech audio information, and/or the like, which is provided by ambient sound processed audio information. In at least one example embodiment, the apparatus provides a slider interface element that allows a user to set and/or modify an ambient sound directive.
In at least one example embodiment, the apparatus may cause display of a slider interface element associated with the ambient sound directive. In such an example, the apparatus may receive an indication of an input indicative of the magnitude. For example, the input indicative of the magnitude may relate to an input indicative of a position on the slider interface element. In such an example, the position may be indicative of the magnitude. In at least one example embodiment, the slider interface element comprises a slider endpoint associated with speech audio information and a slider endpoint associated with ambient audio information. In such an example, in circumstances where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, such that the position on the slider interface element indicates the proportion. For example, the position on the slider interface element may indicate the proportion such that the proportion relates to a factor indicative of a distance from the position to at least one slider endpoint."; A slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information, reads on obtaining the mixture value through a sliding scale graphic user interface.);
sampling information representative of an input signal and an output signal resulting from processing of the input signal in the sound suppression mode (Column 11, lines 28-30, “In at least one example embodiment, apparatus 201 receives audio information from microphone 202.”; Column 11, lines 39-51, “The basic idea of estimating the ambient sound may be to analyze input signal frames during periods associated with lack of speech activity. For example, it may be determined whether a current frame contains speech and/or ambient sound. In such an example, the output of the VAD may be desirable. In at least one example embodiment, based on the VAD information, ambient sound may be suppressed for quality and intelligibility of speech signal. In some apparatuses with multi-microphone configurations it may be desirable to provide an advanced level of noise suppression or directionality. For example, it may be desirable that uplink audio emphasizes speech by reducing ambient sound.”; Analyzing input signal frames reads on sampling information representative of an input signal, and utilizing the voice activity detection output to suppress ambient sound reads on sampling information representative of an output signal resulting from processing of the input signal in the sound suppression mode.);
providing a display representative of the sampled information during the operation of the audio processor (Column 19, lines 4-24, “In at least one example embodiment, the apparatus causes display of an interface element associated with an ambient sound directive. In at least one example embodiment, an interface element relates to a visual representation of information that is indicative of information with which a user may interact. For example, an interface element may be an icon, a button, a hyperlink, text, and/or the like. In at least one example embodiment, the interface element may be associated with an ambient sound directive by indicating the ambient sound directive, by way of being associated with invocation of the ambient sound directive, and/or the like. For example, the interface element may indicate, by way of text, image, etc., that the interface element represents a setting for an ambient sound directive. In at least one example embodiment, causing display relates to performance of an operation that results in the interface element being displayed. For example, causing display may comprise displaying the interface element on a display, sending information indicative of the interface element to a separate apparatus so that the separate apparatus displays the interface element, and/or the like.”; Displaying an interface element associated with an ambient sound directive reads on providing a display representative of the sampled information during the operation of the audio processor.).
Ozcan is considered to be analogous to the claimed invention because it is in the same field of sound suppression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified copending Application No. 18/651,705 to incorporate the teachings of Ozcan to implement a slider interface element, where the magnitude of ambient audio information in relation to speech audio information may relate to a scaling factor that indicates the magnitude as a proportion of ambient sound processed audio to allocate to the speech audio information and ambient sound information. Doing so would allow for limiting ambient sound in processed audio information (Ozcan; Column 12, lines 20-36).
This is a provisional nonstatutory double patenting rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Boggs whose telephone number is (571)272-2968. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached at (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES BOGGS/Examiner, Art Unit 2657