DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The computer program product recited in claims 11-19 is interpreted as a non-transitory item, as disclosed in paragraph [0018] of the specification of the instant application.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 6, 11-12, 16, 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chavez (US Patent No. 11,082,465).
Regarding claim 1, Chavez teaches a computer-implemented method (col. 23 ll. 4-col. 26 ll. 43), comprising:
monitoring, during a live web conference (col. 3 ll. 6-10, col. 21 ll. 23-30), audio and video data associated with participants of the live web conference, wherein the participants include at least a first participant and a second participant (col. 12 ll. 5-22, col. 14 ll. 16-25, col. 15 ll. 11-20, col. 16 ll. 22-42 monitoring and distributing audio and video);
analyzing the first participant's behavior to determine whether to classify the first participant's behavior as being indicative of sound-aware actions (col. 12 ll. 57-64, col. 16 ll. 43-col. 17 ll. 18, col. 17 ll. 54-col. 18 ll. 17, col. 18 ll. 57-59, col. 19 ll. 9-16, col. 19 ll. 62-col. 20 ll. 25, col. 20 ll. 42-48 analyzing audio and/or video to determine speech or extraneous noise actions); and
presenting information, based on the analysis, on a first user device of the first participant regarding audio and/or video data output from the first user device (col. 12 ll. 28-41, col. 13 ll. 22-29, ll. 48-52, col. 15 ll. 50-58, col. 17 ll. 28-37, col. 18 ll. 23-25, ll. 45-48, col. 19 ll. 16-35, col. 20 ll. 31-38 presenting an audio and/or video alert/notification to mute for extraneous noise or to unmute for speech) (col. 1 ll. 65-col. 7 ll. 18 for additional details).
Regarding claim 2, Chavez teaches wherein analyzing the first participant's behavior includes: analyzing, using natural language processing (NLP), statements made by the first participant, wherein a first uttered phrase of the statements is used to classify the first participant's behavior (col. 2 ll. 42-col. 3 ll. 5, col. 11 ll. 32-38, col. 12 ll. 59-65, col. 17 ll. 10-18).
Regarding claim 6, Chavez teaches wherein the information presented on the first user device indicates an amount of audio reduced by a predetermined background noise elimination algorithm (Fig. 6B item 305, Fig. 7B item 305, Fig. 9B item 305, 100% reduction indication).
Regarding claim 11, Chavez teaches a computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions readable and/or executable by a computer to cause the computer (col. 23 ll. 4-col. 26 ll. 43), to:
monitor, by the computer, during a live web conference (col. 3 ll. 6-10, col. 21 ll. 23-30), audio and video data associated with participants of the live web conference, wherein the participants include at least a first participant and a second participant (col. 12 ll. 5-22, col. 14 ll. 16-25, col. 15 ll. 11-20, col. 16 ll. 22-42 monitoring and distributing audio and video);
analyze, by the computer, the first participant's behavior to determine whether to classify the first participant's behavior as being indicative of sound-aware actions (col. 12 ll. 57-64, col. 16 ll. 43-col. 17 ll. 18, col. 17 ll. 54-col. 18 ll. 17, col. 18 ll. 57-59, col. 19 ll. 9-16, col. 19 ll. 62-col. 20 ll. 25, col. 20 ll. 42-48 analyzing audio and/or video to determine speech or extraneous noise actions); and
present information, by the computer, based on the analysis, on a first user device of the first participant regarding audio and/or video data output from the first user device (col. 12 ll. 28-41, col. 13 ll. 22-29, ll. 48-52, col. 15 ll. 50-58, col. 17 ll. 28-37, col. 18 ll. 23-25, ll. 45-48, col. 19 ll. 16-35, col. 20 ll. 31-38 presenting an audio and/or video alert/notification to mute for extraneous noise or to unmute for speech) (col. 1 ll. 65-col. 7 ll. 18 for additional details).
Regarding claim 12, Chavez teaches wherein analyzing the first participant's behavior includes: analyzing, using natural language processing (NLP), statements made by the first participant, wherein a first uttered phrase of the statements is used to classify the first participant's behavior (col. 2 ll. 42-col. 3 ll. 5, col. 11 ll. 32-38, col. 12 ll. 59-65, col. 17 ll. 10-18).
Regarding claim 16, Chavez teaches wherein the information presented on the first user device indicates an amount of audio reduced by a predetermined background noise elimination algorithm (Fig. 6B item 305, Fig. 7B item 305, Fig. 9B item 305, 100% reduction indication).
Regarding claim 20, Chavez teaches a system, comprising: a processor; and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured (col. 23 ll. 4-col. 26 ll. 43), to:
monitor during a live web conference (col. 3 ll. 6-10, col. 21 ll. 23-30), audio and video data associated with participants of the live web conference, wherein the participants include at least a first participant and a second participant (col. 12 ll. 5-22, col. 14 ll. 16-25, col. 15 ll. 11-20, col. 16 ll. 22-42 monitoring and distributing audio and video);
analyze the first participant's behavior to determine whether to classify the first participant's behavior as being indicative of sound-aware actions (col. 12 ll. 57-64, col. 16 ll. 43-col. 17 ll. 18, col. 17 ll. 54-col. 18 ll. 17, col. 18 ll. 57-59, col. 19 ll. 9-16, col. 19 ll. 62-col. 20 ll. 25, col. 20 ll. 42-48 analyzing audio and/or video to determine speech or extraneous noise actions); and
present information, based on the analysis, on a first user device of the first participant regarding audio and/or video data output from the first user device (col. 12 ll. 28-41, col. 13 ll. 22-29, ll. 48-52, col. 15 ll. 50-58, col. 17 ll. 28-37, col. 18 ll. 23-25, ll. 45-48, col. 19 ll. 16-35, col. 20 ll. 31-38 presenting an audio and/or video alert/notification to mute for extraneous noise or to unmute for speech) (col. 1 ll. 65-col. 7 ll. 18 for additional details).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Chavez as applied to claims 1, 11 above, and further in view of Anderson (US Patent No. 8,994,781).
Regarding claim 4, Chavez teaches wherein analyzing the first participant's behavior includes: analyzing metadata (as trained and stored in machine learning) associated with the first participant, wherein the metadata is selected from the group consisting of: raw noise data (col. 16 ll. 66-col. 17 ll. 5, col. 14 ll. 56-59 based on decibels, col. 14 ll. 45-48 based on volume or duration), and processed noise data (col. 14 ll. 48-52 specifically processed for tapping of a pen, office equipment, a baby crying, etc.), but Chavez does not teach including global positioning system (GPS) information.
However, in the similar field, Anderson teaches the group consisting of: global positioning system (GPS) information (col. 8 ll. 28-49, col. 9 ll. 22-38 using GPS information to categorize intended and unintended audio), raw noise data (col. 8 ll. 36-38, col. 9 ll. 7-15 based on volume, duration, and repetitiveness), and processed noise data (col. 8 l. 42, col. 9 ll. 20-21 specific to keyboard sound, col. 8 l. 66 specific to paper moving, crowd noise).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include using global positioning system (GPS) information as taught by Anderson in order to determine unintended sound (noise) based on “a location having a large amount of noise” or “location movement by the user” (Anderson, col. 9 ll. 35-38).
Regarding claim 14, Chavez teaches wherein analyzing the first participant's behavior includes: analyzing metadata (as trained and stored in machine learning) associated with the first participant, wherein the metadata is selected from the group consisting of: raw noise data (col. 16 ll. 66-col. 17 ll. 5, col. 14 ll. 56-59 based on decibels, col. 14 ll. 45-48 based on volume or duration), and processed noise data (col. 14 ll. 48-52 specifically processed for tapping of a pen, office equipment, a baby crying, etc.), but Chavez does not teach including global positioning system (GPS) information.
However, in the similar field, Anderson teaches the group consisting of: global positioning system (GPS) information (col. 8 ll. 28-49, col. 9 ll. 22-38 using GPS information to categorize intended and unintended audio), raw noise data (col. 8 ll. 36-38, col. 9 ll. 7-15 based on volume, duration, and repetitiveness), and processed noise data (col. 8 l. 42, col. 9 ll. 20-21 specific to keyboard sound, col. 8 l. 66 specific to paper moving, crowd noise).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include using global positioning system (GPS) information as taught by Anderson in order to determine unintended sound (noise) based on “a location having a large amount of noise” or “location movement by the user” (Anderson, col. 9 ll. 35-38).
Claims 5, 10, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Chavez as applied to claims 1, 11 above, and further in view of Chitre (US Patent No. 10,425,239).
Regarding claim 5, Chavez does not teach wherein analyzing the first participant's behavior includes: analyzing behavior of at least the second participant within the live web conference, wherein the behavior of at least the second participant within the live web conference includes a volume control action performed on a second user device of the second participant with respect to the first participant.
However, in the similar field, Chitre teaches analyzing behavior of at least the second participant within the live web conference, wherein the behavior of at least the second participant within the live web conference includes a volume control action performed on a second user device of the second participant with respect to the first participant (col. 8 ll. 34-col. 9 ll. 11 second participant performing control action for high/low volume of first participant, col. 5 ll. 42-62 analyzing using the tracked and stored participant feedback data).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include analyzing behavior of at least the second participant within the live web conference, wherein the behavior of at least the second participant within the live web conference includes a volume control action performed on a second user device of the second participant with respect to the first participant, as taught by Chitre, so that during “a first participant speaking during the audio conference 108, a second participant may specify (block 306), via the conference user interface 118, the perceived audio quality of the speaking first participant” (Chitre, col. 6 ll. 58-62).
Regarding claim 10, Chavez does not teach indicating, on the first user device of the first participant, a recommendation for improving the audio and/or the video data output from the first user device.
However, in the similar field, Chitre teaches indicating, on the first user device of the first participant, a recommendation for improving the audio and/or the video data output from the first user device (col. 5 ll. 19-41).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include indicating, on the first user device of the first participant, a recommendation for improving the audio and/or the video data output from the first user device as taught by Chitre in order to “generate and present notifications to the participants with suggestions for self-correcting audio quality and/or audio connection issues” (Chitre, col. 3 ll. 18-20).
Regarding claim 15, Chavez does not teach wherein analyzing the first participant's behavior includes: analyzing behavior of at least the second participant within the live web conference, wherein the behavior of at least the second participant within the live web conference includes a volume control action performed on a second user device of the second participant with respect to the first participant.
However, in the similar field, Chitre teaches analyzing behavior of at least the second participant within the live web conference, wherein the behavior of at least the second participant within the live web conference includes a volume control action performed on a second user device of the second participant with respect to the first participant (col. 8 ll. 34-col. 9 ll. 11 second participant performing control action for high/low volume of first participant, col. 5 ll. 42-62 analyzing using the tracked and stored participant feedback data).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include analyzing behavior of at least the second participant within the live web conference, wherein the behavior of at least the second participant within the live web conference includes a volume control action performed on a second user device of the second participant with respect to the first participant, as taught by Chitre, so that during “a first participant speaking during the audio conference 108, a second participant may specify (block 306), via the conference user interface 118, the perceived audio quality of the speaking first participant” (Chitre, col. 6 ll. 58-62).
Claims 7, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chavez as applied to claims 1, 11 above, and further in view of Jouret (US Patent Application Publication No. 2015/0049156).
Regarding claim 7, Chavez does not teach wherein the information presented on the first user device includes a snippet of the audio and/or the video data output from the first user device to a second user device of the second participant during the live web conference.
However, in the similar field, Jouret teaches the information presented on the first user device includes a snippet of the audio and/or the video data output from the first user device to a second user device of the second participant during the live web conference (Figs. 5A-5E, Paragraphs 0010, 0031-0040, Abstract).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include the information presented on the first user device including a snippet of the audio and/or the video data output from the first user device to a second user device of the second participant during the live web conference as taught by Jouret so that “use of the presenter image as feedback for minimizing the distorted view of a participant image is particularly effective as participants generally will be motivated to optimize the view of the presenter in order to obtain a better experience in consuming the content offered by the presenter of the web-based video conference” (Jouret, Paragraph 0040).
Regarding claim 17, Chavez does not teach wherein the information presented on the first user device includes a snippet of the audio and/or the video data output from the first user device to a second user device of the second participant during the live web conference.
However, in the similar field, Jouret teaches the information presented on the first user device includes a snippet of the audio and/or the video data output from the first user device to a second user device of the second participant during the live web conference (Figs. 5A-5E, Paragraphs 0010, 0031-0040, Abstract).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include the information presented on the first user device including a snippet of the audio and/or the video data output from the first user device to a second user device of the second participant during the live web conference as taught by Jouret so that “use of the presenter image as feedback for minimizing the distorted view of a participant image is particularly effective as participants generally will be motivated to optimize the view of the presenter in order to obtain a better experience in consuming the content offered by the presenter of the web-based video conference” (Jouret, Paragraph 0040).
Claims 8, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chavez as applied to claims 1, 11 above, and further in view of Kato (US Patent Application Publication No. 2012/0026279).
Regarding claim 8, Chavez does not teach the information presented on the first user device indicates a volume status of a second user device of the second participant regarding the first participant.
However, in the similar field, Kato teaches the information presented on the first user device indicates a volume status of a second user device of the second participant regarding the first participant (Fig. 33, Paragraphs 0335-0338 displaying “mute,” i.e., a 100% volume reduction status of the counterpart terminal).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include the information presented on the first user device indicating a volume status of a second user device of the second participant regarding the first participant as taught by Kato so that “the user at the request terminal 10 is able to instantly know that the user at the counterpart terminal 10 is using the mute function through data generated based on the mute state data received from the counterpart terminal 10” (Kato, Paragraph 0339).
Regarding claim 18, Chavez does not teach the information presented on the first user device indicates a volume status of a second user device of the second participant regarding the first participant.
However, in the similar field, Kato teaches the information presented on the first user device indicates a volume status of a second user device of the second participant regarding the first participant (Fig. 33, Paragraphs 0335-0338 displaying “mute,” i.e., a 100% volume reduction status of the counterpart terminal).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Chavez to include the information presented on the first user device indicating a volume status of a second user device of the second participant regarding the first participant as taught by Kato so that “the user at the request terminal 10 is able to instantly know that the user at the counterpart terminal 10 is using the mute function through data generated based on the mute state data received from the counterpart terminal 10” (Kato, Paragraph 0339).
Allowable Subject Matter
Claims 3, 9, 13, 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The above objection(s) is (are) based on the claim(s) as presently set forth in its (their) totality. It should not be interpreted as indicating that amended claim(s) broadly reciting certain limitations would be allowable. A more detailed reason(s) for allowance may be set forth in a subsequent Notice of Allowance if and when all claims in the application are put into a condition for allowance.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEMANT PATEL whose telephone number is (571)272-8620. The examiner can normally be reached M-F 8:00 AM - 4:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan Tsang, can be reached at 571-272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
HEMANT PATEL
Primary Examiner
Art Unit 2694
/HEMANT S PATEL/ Primary Examiner, Art Unit 2694