Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 4, and 6-18 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty (US 2022/0319528, hereinafter Cha) in view of Sakai (US 2018/0352193, hereinafter Sak), and further in view of Wige (US 2016/0329063, hereinafter Wi).
Regarding claim 1
Cha teaches:
A method for providing a shared ambiance audio in a virtual meeting (Cha: Abstract; ¶ 5: selective noise cancellation which weights the noise cancelled), the method comprising:
receiving audio signals from a set of audio inputs corresponding to a plurality of participants in a virtual meeting, each audio signal comprising a voice component and a background noise component (Cha: ¶ 41-43, 51-54, 71, etc.; Fig 1A, 2, 3, 6: receipt of a signal comprising at least a plurality of voices and noises from one or more electronic devices, wherein the system operates to assign volume weights to components of a received signal based at least upon contexts thereof; any signal acquired by a microphone may be generally considered to comprise a desired signal, voice, etc. portion and a noise, unwanted, ambient, etc. portion);
isolating the background noise component from the voice component for each received audio signal (Cha: ¶ 41-43, 51-54, 56, 71, 90, etc.; Fig 2, 3, 6, 9: system determines weights for desired, voice type components, and undesired noise-type components of one or more input signals);
determining an ambiance score for each isolated background noise component (Cha: Abstract; ¶ 41-43, 51-54, 56, 58, 74-76, etc.; Fig 4A, 4B: system determines a weighting, score, etc. for one or more noise portions and, based on a plurality of parameters including the weighting, selectively suppresses or includes each of the one or more determined noise portions), wherein each ambiance score comprises a score indicating suitability of the respective isolated background noise component for suppression or inclusion (Cha: ¶ 74; Figs 4A, 4B: the weighting identifies the noise, exposing same to user preferences for suppression or inclusion);
selecting a particular background noise component from the isolated background noise components, wherein selecting the particular background noise component from the isolated background noise components is based on the determined ambiance scores (Cha: ¶ 74; Figs 4A, 4B: the weighting identifies the noise, exposing same to user preferences for suppression or inclusion); and
transmitting the particular background noise component to a set of audio outputs corresponding to the plurality of participants in order to provide a shared ambiance audio for the plurality of participants in the virtual meeting.
Cha strongly suggests a conference or meeting, as the system operates to combine selected ambient signals with one or more voices of a meeting call (Cha: ¶ 5, 61), but discusses the taught system and method with respect to a phone call, that is, with respect to an interaction between two user devices. As such, Cha is not considered to encompass, under a broadest reasonable interpretation, the recited plurality of participants in a virtual meeting in the manner claimed, nor does Cha explicitly teach inclusion of noise explicitly determined to be of use for shared ambiance in the meeting.
In a related field of endeavor, Sak teaches a system and method for conducting a virtual meeting providing a shared ambiance audio (Sak: Abstract; ¶ 2; Fig 1: a telepresence system for conducting a virtual meeting for a plurality of users, each operative of a user device (Sak: ¶ 31-39; Fig 1, 6)),
comprising: a plurality of audio input devices (Sak: ¶ 42; Fig 1, 2: each/any of the communication control apparatuses 10, operable of an input unit comprising one or more microphones); a plurality of audio output devices (Sak: ¶ 12, 58-67; Fig 1, 2: each/any of the communication control apparatuses 10, operable of an output unit comprising one or more speakers); a processor (Sak: ¶ 41-43, 57, etc.; Fig 2: space information processing and generation units operate to conduct the virtual meeting comprising shared ambiance in concert with processing instructions such as in storage units 110, 112, 113, etc.); and a hardware storage device storing computer-executable instructions that are executable by the processor (Sak: ¶ 12, 41-43, 57, etc.; Fig 2: space information processing and generation units operate to conduct the virtual meeting comprising shared ambiance in concert with processing instructions such as in storage unit 110) to cause the computing system to:
receive audio signals from the plurality of audio inputs corresponding to a plurality of participants in a virtual meeting, each audio signal comprising a voice component and a background noise component (Sak: ¶ 34, 49-53, 59, 65; Fig 4-6: system operable to manage volume levels for speech of each/any user, noise ambient to each/any user, and a shared environmental sound);
and transmit the speech, background noise, and ambiance components to the plurality of audio outputs corresponding to the plurality of participants in order to provide a shared ambiance audio for the plurality of participants in the virtual meeting, such as by transmission of the particular sounds at particular volume levels based on user operations of the interface (Sak: ¶ 34, 49-53, 59, 65; Fig 4-6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to enhance the Cha system and method by adapting the system to include interface and parameter control over the inclusion of ambiance in the manner taught or suggested by Sak for at least the purpose of providing greater control over the presentation of ambient sounds to users of a teleconference system; one of ordinary skill in the art would have expected only predictable results therefrom.
Cha in view of Sak strongly suggests but does not explicitly teach inclusion of noise explicitly determined to be of use for shared ambiance in a meeting, conference, etc.
In a related field of endeavor, Wi teaches a system and method for rendering online meetings in concert with ambient components, parameters, etc. of a user microphone signal for localization of each/any user of the meeting (Wi: Abstract; ¶ 35-37, 44, etc.; Fig 2: system decomposes ambiance into subcomponents representing diffusivity, localization, etc. information of each/any user microphone input and includes portions thereof in a meeting sound). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to enhance the Cha in view of Sak system and method by adapting the system to determine and provide the Wi-determined ambiance parameters for control over the inclusion of ambiance for users, attendees, etc. of the Cha in view of Sak meeting, conference, etc., thereby providing location, spatialization, etc. cues in the manner taught or suggested by Wi, for at least the purpose of providing greater control over the presentation of ambient sounds to users of a teleconference system, such as to give the teleconference spatial depth, to contextualize communications within the conference, etc.; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 3
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 2, wherein selecting a particular background noise component from the isolated background noise components further comprises: identifying a highest ambiance score from the determined ambiance scores; and selecting the particular background noise component with the highest ambiance score (Cha: ¶ 63, 93, etc.; Fig 7, etc.: in Cha the highest scores are assigned and when a highest score is detected the sound is available for inclusion; such as by detecting, as in figure 7, that the highest weighted of the ambient sounds is location information or “Music”, which is also above a threshold, whereby the system operates to include the “Music” sounds). While Cha in view of Sak in view of Wi does not explicitly teach the selecting based on a highest ambiance score, Examiner has taken official notice, which Applicant has failed to timely and explicitly traverse, and it is thus accepted as Admitted Prior Art (APA; please see MPEP 2144.03), that selection among scored members of a set based on a highest score would have comprised an obvious inclusion for at least the purpose of determining a preferential member of the set with respect to the score value; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 4
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 1, wherein determining the ambiance score for each isolated background noise component further comprises: analyzing one or more attributes of each isolated background noise component; and determining the ambiance score for each isolated background noise component is based on analyzing the one or more attributes of each isolated background noise component (Cha: ¶ 14, 41-43, 51-54, 71, 90, etc.; Fig 2, 3, 6, 9: such as by increase or decrease of the weights, such as by user analysis, determination, operation of a volume slider, etc.); (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: such as by adjustment of the speech, ambient sound, and/or shared ambiance with respect to each/any user); (Wi: Abstract; ¶ 35-37, 44, etc.; Fig 2: such as by adjustment of the speech, shared ambiance parameters, etc. with respect to each/any user). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 6
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 1, further comprising: for at least one participant, while transmitting the particular background noise component to the set of audio inputs, mixing the particular background noise component with the background noise component corresponding to the at least one participant (Cha: ¶ 5, 14, 41-43, 51-54, 71, 90, etc.; Fig 2, 3, 6, 9: a media file comprising voice and desired ambient noise is created, transmitted to a conversation, meeting, etc. partner(s) for output); (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: such as by adjustment, mixing, etc. of each of the speech, ambient sound, and/or shared ambiance with respect to distance settings); (Wi: Abstract; ¶ 35-37, 44, etc.; Fig 2: system decomposes ambiance into subcomponents representing diffusivity and localization information of each/any user microphone input and includes portions thereof in a meeting sound). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 7
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 1, further comprising: modifying the selected particular background noise component; determining a modified ambiance score for the modified particular background noise component; determining that the modified ambiance score for the modified particular background noise component is higher than a previous ambiance score for the particular background noise component; and transmitting the modified particular background noise component to the set of audio inputs (Cha: ¶ 71, 88-95; Figs 3, 6, 7: such as by user indication of an enabled, on, one, asserted, etc. value for music and dog at step 605, siren and dog at step 608); (Sak: ¶ 2, 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: such as by adjustment of the speech, ambient sound, and/or shared ambiance with respect to a change from a first distance setting to a second distance setting on the part of a participant in the video conference); (Wi: Abstract; ¶ 35-37, 44, etc.; Fig 2: system decomposes ambiance into subcomponents representing diffusivity and localization information of each/any user microphone input and includes portions thereof in a meeting sound, such as based on subcomponents determined iteratively over time). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 8
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 7, wherein modifying the selected background noise component comprises enhancing the selected background noise component to suppress some background noises in the selected background noise component while amplifying other background noises in the selected background noise component (Cha: ¶ 71, 88-95; Figs 3, 6, 7: such as by user indication of an enabled, on, one, asserted, etc. value for music and dog at step 605, siren and dog at step 608); (Sak: ¶ 2, 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: such as by adjustment of the speech, ambient sound, and/or shared ambiance with respect to a change from a first distance setting to a second distance setting on the part of a participant in the video conference); (Wi: Abstract; ¶ 35-37, 44, etc.; Fig 2: system decomposes ambiance into subcomponents representing diffusivity and localization information of each/any user microphone input and includes portions thereof in a meeting sound). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 9
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 7, wherein modifying the selected background noise component comprises augmenting the selected background noise component with background noises not originally included in an audio signal corresponding to the selected background noise component (Cha: ¶ 71, 88-95; Figs 3, 6, 7: such as by user indication of an enabled, on, one, asserted, etc. value for music and dog at step 605, siren and dog at step 608). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 10
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 1, further comprising: determining a weighting scheme for the isolated background noise components; applying the weighting scheme to the isolated background noise components; combining the isolated background noise components based on the weighting scheme; generating a mixed background noise component based on combining the isolated background noise components; and transmitting the mixed background noise component to the set of audio outputs (Cha: ¶ 71, 88-95; Figs 3, 6, 7: system weights, reweights, etc. background sounds, such as based on a current context, mixes particular sounds based on weights applied thereto, and transmits same to a second, etc. user for output); (Sak: ¶ 2, 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: system weights, reweights, etc. voice, background sounds, and shared ambiance, such as based on a distance parameter, mixes particular sounds based on weights applied thereto with respect to distance, and transmits same to other users of the conference system for output). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 11—the claim is considered to recite substantially similar subject matter to that of claim 1 supra and is similarly rejected.
Regarding claim 12
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 11, wherein the computer-executable instructions are further executable by the processor to further cause the computing system to:
transmit a first voice component corresponding to a first participant to the plurality of audio outputs while transmitting the particular background noise component (Cha: ¶ 5, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: system transmits first user voice and determined, selected background sounds of the first user); (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: system transmits first user voice, first user background sounds, and shared ambiance to additional users); identify a new voice component from a second participant (Cha: ¶ 5, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: such as by determining local weightings for a second user of a connected call); (Sak: ¶ 34, 49-53, 59, 65-71, etc.; Fig 4-9, etc.: such as by affording each/any user the option to adjust distance parameters with each/any other user in the conference); and switch to transmitting the new voice component from the second participant without modifying a transmission of the particular background noise component (Sak: ¶ 34, 49-53, 59, 65-71, etc.; Fig 4-9, etc.: such as by allowing each/any user to persist their own preferential settings when additional users connect or disconnect, or otherwise maintaining the shared ambiance). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include variations on the disclosed parameters, such as by experimenting with a singular user being coordinator of the overall conference settings with respect to a desired context for the call, as well as the voice and noise components acceptable therein, for at least the purpose of allowing managerial supervision, enhancing a particular mood, etc.; one of ordinary skill in the art would have been motivated to do so in the course of routine experimentation and would have expected only predictable results therefrom.
Regarding claim 13
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 11, further comprising: a user interface configured to receive user input for modifying one or more of the isolated background noise components (Cha: ¶ 5, 14, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: user input provides adjustment of enable/disable parameters, preferred weights, etc.); (Sak: ¶ 34, 41, 49-53, 59, 65-71, etc.; Fig 4-9, etc.: user operation of a distance parameter on an operation interface). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 14
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 11, wherein the computer-executable instructions are further executable by the processor to further cause the computing system to: receive user input for modifying the particular background noise component; and modify the particular background noise component based on the user input (Cha: ¶ 5, 14, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: user input provides adjustment of enable/disable parameters, preferred weights, etc.); (Sak: ¶ 34, 41, 49-53, 59, 65-71, etc.; Fig 4-9, etc.: user operation of a distance parameter on an operation interface). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 15
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 14, wherein the computer-executable instructions are further executable by the processor to further cause the computing system to: access a machine learning model trained to determine ambiance scores for background noises (Cha: ¶ 25, 48, 66-68, etc.: background sounds determined, adjusted, etc. based on an AI model);
generate training data comprising the user input for modifying the particular background noise component (Cha: ¶ 25, 48, 66-68, etc.: system trained based on determinations or predictions); and train the machine learning model on the training data to update one or more parameters of the machine learning model based on the user input (Cha: ¶ 25, 48, 66-68, etc.: system trained based on determinations or predictions).
Examiner has taken official notice, which Applicant has failed to timely and explicitly traverse, and it is thus accepted as Admitted Prior Art (APA; please see MPEP 2144.03), that the generation of training data based on the operation of the model upon user data, and the training of a machine learning model based on training data generated by user operations upon parameters of the model, would have comprised an obvious inclusion, such as for the operation of the system in concert with well-established machine learning methods to tune the system to a user or community of users with respect to the parameters of operation thereof; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 16
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 15, wherein the computing system selects the machine learning model for determining ambiance scores based on a desired context of the virtual meeting such that an ambiance score for each isolated background noise component is determined based on the desired context of the virtual meeting (Cha: ¶ 5, 14, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: user input provides adjustment of enable/disable parameters, preferred weights, etc.); (Sak: ¶ 34, 41, 49-53, 59, 65-71, etc.; Fig 4-9, etc.: user operation of a distance parameter on an operation interface). Examiner has taken official notice, which Applicant has failed to timely and explicitly traverse, and it is thus accepted as Admitted Prior Art (APA; please see MPEP 2144.03), that the generation of training data based on the operation of the model upon user data, and the training of a machine learning model based on training data generated by user operations upon parameters of the model, would have comprised an obvious inclusion, such as for the operation of the system in concert with well-established machine learning methods to tune the system to a user or community of users with respect to the parameters of operation thereof; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 17
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 15, wherein the computing system further determines an ambiance score for each isolated background noise component by: analyzing one or more attributes of each isolated background noise component (Cha: ¶ 5, 14, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: user input provides adjustment of enable/disable parameters, preferred weights, etc.); and determining an ambiance score for each isolated background noise component based on analyzing the one or more attributes of each isolated background noise component (Cha: ¶ 5, 14, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: such as by analyzing the various scores over the duration of a call). The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 18
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 16, wherein the computing system selects the particular background noise component from the isolated background noise components by: identifying a highest ambiance score from the determined ambiance scores; and selecting the particular background noise component with the highest ambiance score (Cha: ¶ 63, 93, etc.; Fig 7, etc.: in Cha the highest scores are assigned and when a highest score is detected the sound is available for inclusion; such as by detecting, as in figure 7, that the highest weighted of the ambient sounds is location information or “Music”, which is also above a threshold, whereby the system operates to include the “Music” sounds). Please see claims 3, 15, and 16, supra. The claim is considered obvious over Cha as modified by Sak and Wi as addressed in the base claim, as it would have been obvious to apply the further teaching of Cha, Sak, and/or Wi to the modified device of Cha, Sak, and Wi; one of ordinary skill in the art would have expected only predictable results therefrom.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty (US 2022/0319528, hereinafter Cha) in view of Sakai (US 2018/0352193, hereinafter Sak), and further in view of Wige (US 2016/0329063, hereinafter Wi) as applied to claims 1-4 supra, and further in view of Ma (US 2017/0103771).
Regarding claim 5
Cha in view of Sak in view of Wi teaches or suggests:
The method of claim 4, wherein one or more attributes of each of the isolated background noise components comprises: volume stability (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: such as by adjustment of the speech, ambient sound, and/or shared ambiance with respect to distance settings), or sound consistency. Cha in view of Sak in view of Wi does not explicitly teach a system wherein volume stability comprises a degree to which volume of the isolated background noise component remains substantially constant over time, and wherein sound consistency comprises a degree of abrupt changes in frequency or amplitude of the isolated background noise component.
In a related field of endeavor, Ma teaches a system and method for noise level estimation based on a degree to which volume of background noise components remains substantially constant over time, and wherein sound consistency comprises a degree of abrupt changes in frequency or amplitude of the isolated background noise component (Ma: Abstract; ¶ 3, 4), comprising determining and/or maintaining volume stability attributes of spatial background noises and the overall signal by modifying noise and voice signals based on a probability of temporal stability, such as by determining confidence levels of the likelihood of abrupt changes within a noise signal, and operable to thereby perform smoothing, such as on a particular frequency band, based on a confidence threshold of temporal stability (Ma: Abstract; ¶ 3, 4, 21, 28, 51; Fig 5a: such as by tracking and/or predicting abrupt changes to the noise floor, or likelihood of an impulsive noise). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to adapt the Cha, Sak, and Wi device and method to include the Ma taught method for predictively determining a degree to which volume of a background noise, or components thereof, may be expected to change, for at least the purpose of controlling the system to maintain a comfortable ambiance level in the presence of abrupt changes to the noise floor and/or impulsive noise; one of ordinary skill in the art would have expected only predictable results therefrom.
Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty (US 2022/0319528, hereinafter Cha) in view of Sakai (US 2018/0352193, hereinafter Sak), and further in view of Wige (US 2016/0329063, hereinafter Wi) as applied to claims 1-4 and 6-18 supra, and further in view of Redmann (US 2007/0140510, hereinafter Red).
Regarding claim 19
Cha in view of Sak in view of Wi teaches or suggests:
The computing system of claim 11, wherein the selected particular background noise component is a first selected background noise component, the computer-executable instructions being further executable by the processor to cause the computing system to:
instead of transmitting the selected particular background noise component to the plurality of audio outputs, transmit the first selected background noise component to a first audio output corresponding to a first participant in the virtual meeting (Cha: ¶ 5, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: such as by the use of a background sounds user interface by a first, second, etc. user to control delivered audio to include a first background sound); (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: such as by inclusion of an affordance for each/any user of the Sak system to operate background sound controls such as those of Cha to generate a close-up type of sound including a particular background sound and no others);
select a second background noise component and transmit the second selected background noise component to a second audio output corresponding to a second participant in the virtual meeting (Cha: ¶ 5, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: such as by making an additional selection of a background sound for an additional user that includes a second background sound); (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 4-6, etc.: such as by inclusion of an affordance for each/any user of the Sak system to operate background sound controls such as those of Cha to generate a close-up type of sound including plural particular background sounds); wherein while the background ambiance experience for the first participant is different from the second participant, the computing system maintains a continuous and personalized background ambiance experience for the first participant and second participant (Cha: ¶ 5, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: such as with respect to separate selections by a first, second, etc. user); (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 1, 4-6, etc.: such as to deliver audio by the Cha in view of Sak system and method to the plurality of users depicted in figure 1).
Cha, Sak, and/or Wi do not explicitly teach the system operable to maintain a continuous and personalized background ambiance experience for the first participant and second participant as different participants contribute new voice components during the virtual meeting.
In a related field of endeavor, Red teaches a system and method for remote telepresence among a plurality of users in a shared virtual environment (Red: Abstract; Figs 5-7) wherein each of a plurality of users within a virtual meeting, conversation, collaboration, etc. has user preferences, settings, etc. for the delivery of local data, such as a loopback of audio, ambiance, etc., and the user preferences, etc. further comprise settings for audio received from other members of the meeting, etc., such that the duration of the meeting is conducted for each/any of the participants with respect to the particular preferences of that participant (Red: Abstract; ¶ 66-68, 80-87; Fig 1, 3: each/any participant may control delay and volume of local channels as well as the input channels of other participants, and the settings of the user persist for the duration of the meeting). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify and personalize the control of media delivery to each/any user of the Cha in view of Sak system to include controls relative to each/any additional member of a meeting, such as by inclusion of control affordances similar to those of Red for each/any participant, and for at least the purpose of allowing for more clear, meaningful, and contextual communication among a plurality of users by allowing users to determine and assign parameters of delivery to the plural members of a meeting; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 20
Cha in view of Sak in view of Wi in view of Red teaches or suggests:
The computing system of claim 19, wherein the computer-executable instructions are further executable by the processor to further cause the computing system to: determine a personalized ambiance score for each participant in the virtual meeting and select the first selected background noise component and second selected background noise component based on the personalized ambiance score for each participant in the virtual meeting (Cha: ¶ 5, 41-43, 51-54, 90, etc.; Fig 2, 3, 6, 9, etc.: such as by affordance of the ability to weight sounds to each/any of multiple participants in a conference, etc.); (Sak: ¶ 34, 49-53, 59, 65, etc.; Fig 1, 4-6, etc.: such as by allowing for persisted ambiance settings for each/any of the figure 1 participants with respect to other participants); (Red: Abstract; ¶ 66-68, 80-87; Fig 1, 3: such as using controls similar to that of the figures). The claim is considered obvious over Cha as modified by Sak, Wi, and Red as addressed in the base claim as it would have been obvious to apply the further teaching of Cha, Sak, Wi, and/or Red to the modified device of Cha, Sak, Wi, and Red; one of ordinary skill in the art would have expected only predictable results therefrom.
Response to Arguments
Applicant’s arguments in concert with claim amendments, see Remarks and Claims, filed 11/10/25, with respect to the rejection(s) of claim(s) 1-18 under 35 USC 103 over Chakraborty in view of Sakai, and claim(s) 19-20 under 35 USC 103 over Chakraborty in view of Sakai in view of Redmann, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Chakraborty, Sakai, and Wige; Chakraborty, Sakai, Wige, and Ma; and/or Chakraborty, Sakai, Wige, and Redmann.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL C MCCORD whose telephone number is (571)270-3701. The examiner can normally be reached 7:30-6:30 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CAROLYN EDWARDS can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAUL C MCCORD/Primary Examiner, Art Unit 2692