DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/16/2026 has been entered.
Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 12/05/2025 and 01/16/2026 have been reviewed and the listed references have been considered.
Status of Claims
Claims 1-20 are pending. Claims 1, 8 and 15 are amended.
Response to Arguments
Applicant's arguments filed on January 16, 2026 with respect to the rejection of claims under 35 U.S.C. 103 have been fully considered, but they are not found persuasive. Specifically, in the first paragraph on page 11 of its reply, Applicant argues that Zajac is explicit that the call is a live call and not a recording of an earlier call. Examiner respectfully disagrees. Zajac teaches both live and recorded calls in ¶0022: “analyzes a received data feed (for example, voice call, live or recorded video feed including a video emergency call, text message, and the like)”. Therefore, Applicant's arguments are not found persuasive.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 2, 6, 8, 15, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Verma (US 2018/0253954 A1), in view of Bodbyl et al. (US 2021/0012115 A1) and in further view of Zajac et al. (US 2024/0048655 A1).
Regarding claim 1, Verma teaches, A method comprising: determining, by a video conference provider, (Verma, ¶0013: “The intelligent digital client assistant of the invention has plurality of built in function with at least a digital camera, and a HDMI connector for video conference”) a plurality of client devices associated (Verma, ¶0011: “a server which handles secure communication with all the connected devices”) with an incident identification system; (Verma, ¶0027: “AI algorithms to detect fire, danger scenes like lake, canal etc.”; applicant’s specification ¶0110: Upon identification of an incident event, such as a fire) (Verma, ¶0009: “algorithms for face, age, emotion, movement, detection on the real time video, to monitor client”) receiving, by the video conference provider, at least one multimedia stream from each client device of the plurality of client devices; (Verma, ¶0030: “streams video as well as GPS location information of the client to the drone pilot operator”) monitoring, by the video conference provider, the received multimedia streams for one or more incident factors; (Verma, ¶0009: “monitor client 24/7, and to compare the video with pre stored images for hazards like fire”; incident factor is interpreted as a fire in the monitored scene) identifying, by the video conference provider, a first incident factor in a first multimedia stream; (Verma, ¶0014: “The intelligent digital camera of this invention allows monitoring of a patient or client activities… vicinity to hazards like fire”). 
However, Verma does not explicitly teach, joining, by the video conference provider, one or more of the plurality of client devices and identifying, by the video conference provider, a second incident factor in a second multimedia stream; determining, by the video conference provider, a correlation between the first and second incident factors; determining, by the video conference provider, a location of an incident event based on the correlation; generating, by the video conference provider, the incident alert for an incident event based on determining a correlation between the incident factors; receiving, by the video conference provider from a client device, a request to view a recording corresponding to the first or second incident factors; and transmitting, by the video conference provider to the client device, a recording of the first or second multimedia stream corresponding to the requested first or second incident factor.
In an analogous field of endeavor, Bodbyl teaches, joining, by the video conference provider, one or more of the plurality of client devices (Bodbyl, ¶0061: “application server may provide access to an interface (e.g., similar to interface 204) to the security platform 402 for various computing devices, such as operator devices 462, end user devices 466, and/or administrator devices 464”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma using the teachings of Bodbyl to introduce a series of connected devices. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of monitoring a series of connected devices to identify an emergency to respond quickly. Therefore, it would have been obvious to combine the analogous arts Verma and Bodbyl to obtain the above-described limitations of claim 1.
However, the combination of Verma and Bodbyl does not explicitly teach, identifying, by the video conference provider, a second incident factor in a second multimedia stream; determining, by the video conference provider, a correlation between the first and second incident factors; determining, by the video conference provider, a location of an incident event based on the correlation; generating, by the video conference provider, the incident alert for an incident event based on determining a correlation between the incident factors; receiving, by the video conference provider from a client device, a request to view a recording corresponding to the first or second incident factors; and transmitting, by the video conference provider to the client device, a recording of the first or second multimedia stream corresponding to the requested first or second incident factor.
In another analogous field of endeavor, Zajac teaches, identifying, by the video conference provider, a second incident factor in a second multimedia stream; (Zajac, ¶0054: “monitoring computer had determined that the second video emergency call was reporting a different incident”) determining, by the video conference provider, a correlation between the first and second incident factors; (Zajac, ¶0053: “a comparison of details/objects of the video content of the second video emergency call compared to details/objects of the video content of the first video emergency call”) determining, by the video conference provider, a location of an incident event based on the correlation; (Zajac, ¶0027: “identifying a user of the communication device that originates the call, a location of the communication device that the call is from, a location of a cell tower that was used to transmit the call”) generating, by the video conference provider, the incident alert for an incident event (Zajac, ¶0018: “recorded video of an incident may be transmitted to the command center 110”) based on determining a correlation between the incident factors; (Zajac, ¶0053: “second or some subsequent call reporting the same first incident as the first video emergency call (as determined via the electronic computing device such as by the monitoring computer 205”) receiving, by the video conference provider from a client device, a request to view a recording corresponding to the first or second incident factors; (Zajac, ¶0033: “determine… preferred video emergency call for forwarding to the particular workstation 210, based at least on one or both of quality and field-of-view comparisons between the enqueued video emergency calls associated with the particular incident”) and transmitting, by the video conference provider to the client device, a recording of the first or second multimedia stream corresponding to the requested first or second incident factor (Zajac, ¶0053: “first video emergency call may be marked in the queue as currently being provided to the call taker particularly assigned to the first incident (at workstation 210A as set forth above), while the second video emergency call may be marked as monitor-only”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl using the teachings of Zajac to introduce identifying an appropriate video stream to transmit to a responder. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of assisting the responder with the most relevant information during an emergency for a quicker and proper response. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl and Zajac to obtain the invention of claim 1.
Regarding claim 2, Verma in view of Bodbyl and in further view of Zajac teaches, The method of claim 1, wherein the incident identification system is established in response to receiving a request from an administrator (Bodbyl, ¶0082: “risk evaluation server 452 selecting a particular machine learning model… based on inputs from an administrator”; the selected machine learning model performs identification) of a facility where the incident event has occurred. (Bodbyl, ¶0134: “process 1500 begins at step 1502 with receiving an indication that a crime or other event has occurred at a premises”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl and in further view of Zajac using the additional teachings of Bodbyl to introduce an administrator input. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of activating the system as needed by the administrator of the facility. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl and Zajac to obtain the invention of claim 2.
Regarding claim 6, Verma in view of Bodbyl and in further view of Zajac teaches, The method of claim 1, wherein the received multimedia streams comprise a first audio stream from a first client device, and monitoring, by the video conference provider, the received multimedia streams for one or more incident factors comprises: performing speech recognition on the first audio stream; (Verma, ¶0028: “microphone array with its own digital signal processor and it can understand voice recognition independent keywords, like-help, hurt, nurse, doctor, fire”) and identifying, based on the speech recognition, one or more keywords indicating the one or more incident factors are present. (Verma, ¶0012: “multiple MEMS microphones with digital signal processor with algorithms for beam forming and speaker independent keyword recognition such as fire, help, hurt, to intimate web server with alerts”).
Regarding claim 8, it recites a system with elements corresponding to the steps of the method recited in claim 1. Therefore, the recited elements of system claim 8 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 1. Additionally, the rationale and motivation to combine Verma, Bodbyl and Zajac presented in the rejection of claim 1 apply to this claim. Verma additionally teaches, A system comprising: a non-transitory computer-readable medium; a communications interface; and a processor communicatively coupled to the non-transitory computer-readable medium and the communications interface, the processor configured to execute processor-executable instructions stored in the non-transitory computer-readable medium (Verma, ¶0012: “a microcomputer with at least one core CPU, and with at least one core graphic co-processor with RAM and flash memory to store data, execute all the embedded firmware, communicate with the web server”).
Regarding claim 15, it recites a non-transitory computer-readable medium including processor-executable instructions corresponding to the steps of the method recited in claim 1. Therefore, the recited instructions of the non-transitory computer-readable medium of claim 15 are mapped to the proposed combination in the same manner as the corresponding steps of method claim 1. Additionally, the rationale and motivation to combine Verma, Bodbyl and Zajac presented in the rejection of claim 1 apply to this claim. Verma additionally teaches, A non-transitory computer-readable medium comprising processor-executable instructions configured to cause one or more processors to (Verma, ¶0013: “a microcomputer with at least one core CPU and one core graphic co-processor and RAM and FLASH memory with embedded firmware”).
Regarding claim 16, Verma in view of Bodbyl and in further view of Zajac teaches, The non-transitory computer-readable medium of claim 15, wherein the instructions to monitor, by the video conference provider, the received multimedia streams for one or more incident factors further comprise processor-executable instructions stored in the non-transitory computer-readable medium to: analyze the received multimedia streams for the one or more incident factors, wherein the one or more incident factors comprise one or more of: one or more keywords; one or more audio signatures; an increase in audio activity; an increase in visual activity; or one or more visual signatures. (Verma, ¶0012: “algorithms for beam forming and speaker independent keyword recognition such as fire, help, hurt, to intimate web server with alerts”).
Regarding claim 19, Verma in view of Bodbyl and in further view of Zajac teaches, The non-transitory computer-readable medium of claim 15, wherein the received multimedia streams comprises a first audio stream from a first client device, and the instructions to identify, by the video conference provider, the incident factor in the received multimedia streams (Verma, ¶0028: “microphone array with its own digital signal processor and it can understand voice recognition independent keywords, like-help, hurt, nurse, doctor, fire”) further comprise processor-executable instructions stored in the non-transitory computer-readable medium to: perform speech recognition on the first audio stream; and identify, based on the speech recognition, one or more keywords indicating the one or more incident factors are present. (Verma, ¶0012: “multiple MEMS microphones with digital signal processor with algorithms for beam forming and speaker independent keyword recognition such as fire, help, hurt, to intimate web server with alerts”).
Claims 3, 4, 5, 13, 14, 17, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Verma (US 2018/0253954 A1) in view of Bodbyl et al. (US 2021/0012115 A1), in further view of Zajac et al. (US 2024/0048655 A1) and still in further view of deCharms (US 2016/0192166 A1).
Regarding claim 3, Verma in view of Bodbyl and in further view of Zajac teaches, The method of claim 1. However, the combination of Verma, Bodbyl and Zajac does not explicitly teach, the method further comprising: responsive to the incident alert for the incident event, transmitting, by the video conference provider, a request to join an authorized agency device to the identification session; and joining, by the video conference provider, the authorized agency device to the identification session.
In an analogous field of endeavor, deCharms teaches, the method further comprising: responsive to the incident alert for the incident event, transmitting, by the video conference provider, a request to join an authorized agency device to the identification session; (deCharms, ¶0011: “transmitting real-time video from the mobile computing device to the other computing device; receiving a request to connect a mobile computing device with a responder service”) and joining, by the video conference provider, the authorized agency device to the identification session. (deCharms, ¶0146: “emergency responders or police officers carrying mobile devices running the software provided for here may receive real time alerts when an event has taken place near them, including mapped information of its location, photo, video, audio or other information collected about the event”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl and in further view of Zajac using the teachings of deCharms to introduce transmitting information to authorized responders. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of allowing authorized responders to view video feed and respond accordingly. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and deCharms to obtain the invention of claim 3.
Regarding claim 4, Verma in view of Bodbyl in further view of Zajac and still in further view of deCharms teaches, The method of claim 3, the method further comprising: determining, by the video conference provider, a first client device (deCharms, ¶0203: “first user device 804”) and a second client device (deCharms, ¶0203: “second user device 806”) associated with the incident factor; (deCharms, ¶0203: “Reports may be sent automatically to other users, or to other users near to the site of the report”) determining, by the video conference provider, a first location of the first client device and a second location of the second client device; (deCharms, ¶0203: “information may be provided to authorities, or to other users. This information may include their location (which may be determined automatically from their device, including by GPS or WiFi-based location”) and determining, by the video conference provider, an event location based on the first location and the second location. (deCharms, ¶0203: “As part of incident reporting and crime map generation… This information may include their location (which may be determined automatically from their device, including by GPS”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl, in further view of Zajac and still in further view of deCharms using the additional teachings of deCharms to introduce monitoring multiple locations. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of allowing authorized responders to locate an incident event and respond accordingly. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and deCharms to obtain the invention of claim 4.
Regarding claim 5, Verma in view of Bodbyl, in further view of Zajac and still in further view of deCharms teaches, The method of claim 4, the method further comprising: generating, by the video conference provider, a map of the event location; and transmitting, to the authorized agency device, the map of the event location. (deCharms, ¶0203: “crime map generation… information may be provided to authorities, or to other users. This information may include their location (which may be determined automatically from their device, including by GPS”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl, in further view of Zajac and still in further view of deCharms using the additional teachings of deCharms to introduce generating a map. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of allowing authorized responders to locate an incident event in real time using the map. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and deCharms to obtain the invention of claim 5.
Regarding claim 13, it recites a system with elements corresponding to the steps of the method recited in claim 3. Therefore, the recited elements of system claim 13 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 3. Additionally, the rationale and motivation to combine Verma, Bodbyl, Zajac and deCharms presented in the rejection of claim 3 apply to this claim.
Regarding claim 14, Verma in view of Bodbyl in further view of Zajac and still in further view of deCharms teaches, The system of claim 13, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: grant, by the video conference provider, host controls of the identification session to the authorized agency device; and transmit, by the video conference provider, (deCharms, ¶0177: “the user device 402 may auto-answer the contact request from the emergency responder 409 and/or permit access to control of features on the user's device 402”) a notification that the authorized agency device is the host of the identification session to the plurality of devices in the identification session. (deCharms, ¶0019: “Users can be notified of which other users and responders are available through user interface features, such as status indicators for a group of other users and responders”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl, in further view of Zajac and still in further view of deCharms using the additional teachings of deCharms to introduce granting the control of a device to the authorized responders. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of allowing the authorized responders to respond quicker. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and deCharms to obtain the invention of claim 14.
Regarding claim 17, it recites a non-transitory computer-readable medium including processor-executable instructions corresponding to the steps of the method recited in claim 3. Therefore, the recited instructions of the non-transitory computer-readable medium of claim 17 are mapped to the proposed combination in the same manner as the corresponding steps of method claim 3. Additionally, the rationale and motivation to combine Verma, Bodbyl, Zajac and deCharms presented in the rejection of claim 3 apply to this claim.
Regarding claim 18, Verma in view of Bodbyl in further view of Zajac and still in further view of deCharms teaches, The non-transitory computer-readable medium of claim 17, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: generate, by the video conference provider, a snippet of a multimedia stream comprising the one or more incident factors identified by the video conference provider; (Verma, ¶0035: “AI based algorithms to recognize client's face, emotions, movement, position… video is recorded in the SD memories while detected information is sent to the server (140) as alerts”) and transmit, to the authorized agency device, the snippet of the multimedia stream comprising the one or more incident factors. (deCharms, ¶0010: “transmit real-time video recorded by camera to another computing device associated with the particular candidate responder over a network connection”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl, in further view of Zajac and still in further view of deCharms using the additional teachings of deCharms to introduce transmitting a video to an authorized responder. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of allowing the authorized responder to quickly respond to the event. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and deCharms to obtain the invention of claim 18.
Regarding claim 20, Verma in view of Bodbyl in further view of Zajac and still in further view of deCharms teaches, The non-transitory computer-readable medium of claim 17, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: receive, from the authorized agency device, a request to control equipment corresponding to a first multimedia stream transmitted from a first client device of each client device of the plurality of client devices; (deCharms, ¶0182: “The responder device 440 can transmit the request to the user device 402 (562), which can receive the request”) transmit, by the video conference provider, a signal to the first client device to control the equipment corresponding to the first multimedia stream; (deCharms, ¶0177: “the user device 402 may auto-answer the contact request from the emergency responder 409 and/or permit access to control of features on the user's device 402”) and receive, by the video conference provider, a modified first multimedia stream from the first client device (deCharms, ¶0153: “the responder, through the responder device 404, may remotely control the features of the users device 402, for example to take high resolution photos and have them sent, pan, focus, zoom, crop video/camera, capture audio, adjust volume”) based on the signal to control the equipment corresponding to the first multimedia stream. (deCharms, ¶0177: “the user device 402 may auto-answer the contact request from the emergency responder 409 and/or permit access to control of features on the user's device 402”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl, in further view of Zajac and still in further view of deCharms using the additional teachings of deCharms to introduce granting control of a device to the authorized responders. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of allowing the authorized responders to respond quickly. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and deCharms to obtain the invention of claim 20.
Claims 7, 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Verma (US 2018/0253954 A1) in view of Bodbyl et al. (US 2021/0012115 A1), in further view of Zajac et al. (US 2024/0048655 A1) and still in further view of Kim (US 2013/0038728 A1).
Regarding claim 7, Verma in view of Bodbyl and in further view of Zajac teaches, The method of claim 1, wherein the received multimedia streams comprise a first audio stream from a first client device (Verma, ¶0028: “understand voice recognition independent keywords, like-help, hurt, nurse, doctor, fire. And the web server (140) immediately triggers appropriate response”). However, the combination of Verma, Bodbyl and Zajac does not explicitly teach, the incident factors comprise an increase in audio activity in the first audio stream.
In an analogous field of endeavor, Kim teaches, the incident factors comprise an increase in audio activity in the first audio stream. (Kim, ¶0065: “a voice-detecting sensor (not shown) so that when the voice-detecting sensor detects a specific voice or a voice above a certain decibel (Db) level, it determines it as an emergency call signal and transmits the audio information”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl and in further view of Zajac using the teachings of Kim to introduce detection of audio level. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of identifying an emergency situation using the detected level of audio activity. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and Kim to obtain the invention of claim 7.
Regarding claim 9, Verma in view of Bodbyl and in further view of Zajac teaches, The system of claim 8, wherein: the instructions to receive the multimedia streams from each client device of the plurality of client devices further comprise processor-executable instructions stored in the non-transitory computer-readable medium to: (Verma, ¶0012: “a microcomputer with at least one core CPU, and with at least one core graphic co-processor with RAM and flash memory to store data, execute all the embedded firmware, communicate with the web server”) receive, by the video conference provider, a first plurality of multimedia streams (Verma, ¶0030: “streams video as well as GPS location information of the client to the drone pilot operator”) from a first plurality of client devices, wherein the one or more client devices comprise the first plurality of client devices; (Bodbyl, ¶0061: “application server may provide access to an interface (e.g., similar to interface 204) to the security platform 402 for various computing devices, such as operator devices 462, end user devices 466, and/or administrator devices 464”) and the instructions to identify, by the video conference provider, the incident factor in the received multimedia streams (Verma, ¶0009: “monitor client 24/7, and to compare the video with pre stored images for hazards like fire”; incident factor is interpreted as a fire in the monitored scene).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl and in further view of Zajac using the additional teachings of Bodbyl to introduce a plurality of client devices. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of monitoring a series of devices to identify information about an emergency for a faster response. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl and Zajac to obtain the above-described limitations of claim 9. However, the combination of Verma, Bodbyl and Zajac does not explicitly teach, determine, by the video conference provider, an increase in activity in the first plurality of multimedia streams; and determine, by the video conference provider, the incident factor based on the increase in activity in the first plurality of multimedia streams.
In an analogous field of endeavor, Kim teaches, determine, by the video conference provider, an increase in activity in the first plurality of multimedia streams; and determine, by the video conference provider, the incident factor based on the increase in activity in the first plurality of multimedia streams. (Kim, ¶0065: “a voice-detecting sensor (not shown) so that when the voice-detecting sensor detects a specific voice or a voice above a certain decibel (Db) level, it determines it as an emergency call signal and transmits the audio information”; activity is interpreted as audio activity).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl and in further view of Zajac using the teachings of Kim to introduce detection of increased audio level. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of identifying an emergency situation using the detected high level of audio. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac and Kim to obtain the invention of claim 9.
Regarding claim 10, Verma in view of Bodbyl, in further view of Zajac and still in further view of Kim teaches, The system of claim 9, wherein the increase in activity in the first plurality of multimedia streams comprises one or more of: an increase in audio activity; an increase in visual activity; or an increase in chat messaging activity. (Kim, ¶0065: “the voice-detecting sensor detects a specific voice or a voice above a certain decibel (Db) level”). The proposed combination as well as the motivation for combining Verma, Bodbyl, Zajac and Kim references presented in the rejection of claim 9, apply to claim 10 and are incorporated herein by reference. Thus, the system recited in claim 10 is met by Verma, Bodbyl, Zajac and Kim.
Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Verma (US 2018/0253954 A1), in view of Bodbyl et al. (US 2021/0012115 A1), in further view of Zajac et al. (US 2024/0048655 A1), still in further view of Kim (US 2013/0038728 A1) and yet in further view of deCharms (US 2016/0192166 A1).
Regarding claim 11, Verma in view of Bodbyl, in further view of Zajac and still in further view of Kim teaches, The system of claim 9, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to. However, the combination of Verma, Bodbyl, Zajac and Kim does not explicitly teach generate, by the video conference provider, a map of the increase in activity.
In an analogous field of endeavor, deCharms teaches, generate, by the video conference provider, a map of the increase in activity. (deCharms, ¶0146: “emergency responders or police officers carrying mobile devices running the software provided for here may receive real time alerts when an event has taken place near them, including mapped information of its location, photo, video, audio or other information collected about the event.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl and in further view of Zajac and still in further view of Kim using the teachings of deCharms to introduce a map. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of allowing authorized responders to track the location of the incident event for a quicker response. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac, Kim and deCharms to obtain the invention of claim 11.
Regarding claim 12, Verma in view of Bodbyl, in further view of Zajac, still in further view of Kim and yet in further view of deCharms teaches, The system of claim 11, wherein the instructions to generate, by the video conference provider, the map of the increase in activity further comprise processor-executable instructions stored in the non-transitory computer-readable medium to: determine, by the video conference provider, a location for each of the client devices (Verma, ¶0030: “streams video as well as GPS location information of the client to the drone pilot operator”) in the first plurality of client devices; (Verma, ¶0011: “a server which handles secure communication with all the connected devices”) determine, by the video conference provider, an activity level for a respective multimedia stream of the first plurality of multimedia streams for each of the client devices; (Kim, ¶0066: “loud voice ranges from 80 dB to 110 dB, and thus when the voice-detecting sensor detects the voice, the conversion control unit 160 determines the detected voice as an emergency call signal”) and generate, by the video conference provider, a heat map based on the activity level for each of the respective multimedia streams. (Bodbyl, ¶0114: “a heat map can be generated based on color coding to forecast risk levels for the premises. The heat map can be animated to illustrate dynamically changing risk levels over some time horizon”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Verma in view of Bodbyl, in further view of Zajac and still in further view of Kim and yet in further view of deCharms using the additional teachings of Bodbyl to introduce a heat map. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of locating the incident based on the level of activity/risk/incident factor. Therefore, it would have been obvious to combine the analogous arts Verma, Bodbyl, Zajac, Kim and deCharms to obtain the invention of claim 12.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEHRAZUL ISLAM/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662