DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC §103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Anantharaju (Pub. No.: US 2022/0239698 A1; hereinafter Anantharaju) in view of Varerkar et al. (Pub. No.: US 2019/00147175 A1; hereinafter Varerkar).
Consider claims 1, 9, and 17, Anantharaju clearly shows and discloses a system, a non-transitory computer-readable medium, and a method comprising: receiving, during a virtual conference hosted by a virtual conference provider, one or more audio or video streams from one or more client devices connected to the virtual conference, each client device associated with a participant attending the virtual conference (the present disclosure relates to virtual meetings; in particular, to securing the systems and physical environments used to conduct virtual meetings, i.e., meetings conducted by audio and/or audio-visual communication applications) (paragraphs: 0001-0003, and abstract); providing, to a trained machine learning ("ML") model, the received one or more audio or video streams to determine a potential security intrusion (at a high level, the machine learning application 146 may use machine learning to detect user behaviors that indicate that a user is attempting to duplicate audio or visual data presented in the secure communication session) (paragraphs: 0059 and 0070); in response to receiving an indication of a potential security intrusion from the trained ML model: generating an indication of the potential security intrusion (upon identifying a violation of a security policy, the ML engine 146 may, via the security system 134, notify participants of the violation) (paragraphs: 0060-0061, and 0111); and providing the indication to one or more client devices of the one or more client devices (the system may notify participants of a restricted operation (operation 436); for example, the system may maintain some operation of the secure communication channel and notify participants with an audio signal or a visual signal (e.g., a splash screen) that alerts the participants of the restricted operation) (paragraphs: 0061 and 0114). However, Anantharaju does not specifically disclose another example of providing the indication to one or more client devices of the one or more client devices.
In the same field of endeavor, Varerkar clearly and specifically discloses another example of providing the indication to one or more client devices of the one or more client devices (the intruder indicator indicates that an intruder has been detected in a physical proximity of either the local device or a remote device in communication with the local device) (paragraph: 0076).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Varerkar into the teaching of Anantharaju for the purpose of providing an additional example of providing the indication to one or more client devices.
Consider claims 2, 10, and 18, Anantharaju and Varerkar clearly show the system, non-transitory computer-readable medium, and the method, wherein the potential security intrusion is a presence of a potential unauthorized participant (Anantharaju: paragraphs: 0070-0071; Varerkar: paragraph 0029).
Consider claims 3, 11, and 19, Anantharaju and Varerkar clearly show the system, non-transitory computer-readable medium, and the method, further comprising: recognizing, using the trained ML model, a first participant visible in a first video stream of the one or more video streams; and determining the first participant is authorized to attend the virtual conference (Anantharaju: paragraphs: 0070-0071; Varerkar: paragraphs 0044 and 0064).
Consider claims 4, 12, and 20, Anantharaju and Varerkar clearly show the system, non-transitory computer-readable medium, and the method, further comprising: recognizing, using the trained ML model, a second participant visible in the first video stream; and determining the second participant is not authorized to attend the virtual conference (Anantharaju: paragraphs: 0070-0071; Varerkar: paragraph 0064).
Consider claims 5 and 13, Anantharaju and Varerkar clearly show the system, and the method, wherein recognizing the second participant visible in the first video stream comprises determining a second person is visible in the first video stream and failing to determine an identity of the second person (Anantharaju: paragraphs: 0070-0071; Varerkar: paragraph 0064).
Consider claim 6, Anantharaju and Varerkar clearly show the method, further comprising: recognizing, using the trained ML model, a first participant audible in a first audio stream of the one or more audio streams; and determining the first participant is not authorized to attend the virtual conference (Anantharaju: paragraph 0064; Varerkar: paragraph 0064).
Consider claims 7 and 15, Anantharaju and Varerkar clearly show the system, and the method, wherein the receiving and the providing are performed by a first client device of the one or more client devices, and further comprising: responsive to receiving an indication that the virtual conference is a secure virtual conference: disabling, by the first client device, a virtual background based on the indication; and determining that a camera and a microphone connected to the first client device are pre-authorized to provide video and audio streams, respectively, to the virtual conference (Varerkar: paragraph 0059).
Consider claims 8 and 16, Anantharaju and Varerkar clearly show the system, and the method, wherein providing the received one or more audio or video streams comprises transmitting the received one or more audio or video streams to a remote computing device to input into the trained ML model (Anantharaju: paragraph 0088).
Consider claim 14, Anantharaju and Varerkar clearly show the system, wherein the one or more processors are configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: obtain location information from a sensor associated with the client device; and determine a potential security intrusion based on the location information (Varerkar: paragraph 0054).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Amal Zenati whose telephone number is 571-270-1947. The examiner can normally be reached M-F, 8:00-5:00.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/AMAL S ZENATI/Primary Examiner, Art Unit 2693